|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1 |
|
| #
01157ddc |
| 23-Jan-2025 |
Brian Gerst <[email protected]> |
kallsyms: Remove KALLSYMS_ABSOLUTE_PERCPU
x86-64 was the only user.
Signed-off-by: Brian Gerst <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Cc: Linus Torvalds <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3 |
|
| #
fb6a421f |
| 07-Aug-2024 |
Song Liu <[email protected]> |
kallsyms: Match symbols exactly with CONFIG_LTO_CLANG
With CONFIG_LTO_CLANG=y, the compiler may add a .llvm.<hash> suffix to function names to avoid duplication. APIs like kallsyms_lookup_name() and kallsyms_on_each_match_symbol() try to match these symbol names without the .llvm.<hash> suffix, e.g., they match "c_stop" with the symbol c_stop.llvm.17132674095431275852. This turned out to be problematic for use cases that require an exact match, for example livepatch.
Fix this by making the APIs match symbols exactly.
Also clean up kallsyms_selftest accordingly.
Signed-off-by: Song Liu <[email protected]> Fixes: 8cc32a9bbf29 ("kallsyms: strip LTO-only suffixes from promoted global functions") Tested-by: Masami Hiramatsu (Google) <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Acked-by: Petr Mladek <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
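A minimal sketch of the matching change described above (illustrative only, not the merged patch; symbol_matches() and the FUZZY_LTO_MATCH switch are made up to contrast the two behaviours):

    #include <linux/string.h>
    #include <linux/types.h>

    static bool symbol_matches(const char *sym, const char *wanted)
    {
    #ifdef FUZZY_LTO_MATCH
            /* old behaviour: tolerate a compiler-added ".llvm.<hash>" suffix */
            size_t len = strlen(wanted);

            return !strncmp(sym, wanted, len) &&
                   (sym[len] == '\0' || !strncmp(sym + len, ".llvm.", 6));
    #else
            /* new behaviour: names must match exactly, as livepatch expects */
            return !strcmp(sym, wanted);
    #endif
    }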
|
|
Revision tags: v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6 |
|
| #
64e16609 |
| 21-Feb-2024 |
Jann Horn <[email protected]> |
kallsyms: get rid of code for absolute kallsyms
Commit cf8e8658100d ("arch: Remove Itanium (IA-64) architecture") removed the last use of the absolute kallsyms.
Signed-off-by: Jann Horn <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Link: https://lore.kernel.org/all/[email protected]/ [[email protected]: rebase the code and reword the commit description] Signed-off-by: Masahiro Yamada <[email protected]>
|
| #
7e1f4eb9 |
| 04-Apr-2024 |
Arnd Bergmann <[email protected]> |
kallsyms: rework symbol lookup return codes
Building with W=1 in some configurations produces a false positive warning for kallsyms:
kernel/kallsyms.c: In function '__sprint_symbol.isra':
kernel/kallsyms.c:503:17: error: 'strcpy' source argument is the same as destination [-Werror=restrict]
  503 |   strcpy(buffer, name);
      |   ^~~~~~~~~~~~~~~~~~~~
This originally showed up while building with -O3, but later started happening in other configurations as well, depending on inlining decisions. The underlying issue is that the local 'name' variable is always initialized to be the same as 'buffer' in the called functions that fill the buffer, which gcc notices while inlining, though it could also see that the address check always skips the copy.
The calling conventions here are rather unusual, as all of the internal lookup functions (bpf_address_lookup, ftrace_mod_address_lookup, ftrace_func_address_lookup, module_address_lookup and kallsyms_lookup_buildid) already use the provided buffer and either return the address of that buffer to indicate success, or NULL for failure, but the callers are written to also expect an arbitrary other buffer to be returned.
Rework the calling conventions to return the length of the filled buffer instead of its address, which is simpler and easier to follow as well as avoiding the warning. Leave only the kallsyms_lookup() calling conventions unchanged, since that is called from 16 different functions and adapting this would be a much bigger change.
Link: https://lore.kernel.org/lkml/[email protected]/ Link: https://lore.kernel.org/lkml/[email protected]/ Tested-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Acked-by: Steven Rostedt (Google) <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
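A sketch of the reworked convention (the helper names here are hypothetical; only the return-value shape follows the description above):

    /* old style: return 'buffer' on success, NULL on failure */
    const char *lookup_into_buffer_old(unsigned long addr, char *buffer);

    /* new style: return the number of characters written, 0 on failure */
    int lookup_into_buffer_new(unsigned long addr, char *buffer);

    static int sprint_symbol_sketch(char *buffer, unsigned long addr)
    {
            int len = lookup_into_buffer_new(addr, buffer);

            if (!len)                       /* nothing resolved the address */
                    return sprintf(buffer, "0x%lx", addr);

            return len;                     /* 'buffer' was already filled */
    }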
|
| #
951bcae6 |
| 15-Apr-2024 |
Ard Biesheuvel <[email protected]> |
kallsyms: Avoid weak references for kallsyms symbols
kallsyms is a directory of all the symbols in the vmlinux binary, and so creating it is somewhat of a chicken-and-egg problem, as its non-zero size affects the layout of the binary, and therefore the values of the symbols.
For this reason, the kernel is linked more than once, and the first pass does not include any kallsyms data at all. For the linker to accept this, the symbol declarations describing the kallsyms metadata are emitted as having weak linkage, so they can remain unsatisfied. During the subsequent passes, the weak references are satisfied by the kallsyms metadata that was constructed based on information gathered from the preceding passes.
Weak references lead to somewhat worse codegen, because taking their address may need to produce NULL (if the reference was unsatisfied), and this is not usually supported by RIP or PC relative symbol references.
Given that these references are ultimately always satisfied in the final link, let's drop the weak annotation, and instead, provide fallback definitions in the linker script that are only emitted if an unsatisfied reference exists.
While at it, drop the FRV specific annotation that these symbols reside in .rodata - FRV is long gone.
Tested-by: Nick Desaulniers <[email protected]> # Boot Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Kees Cook <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Link: https://lkml.kernel.org/r/20230504174320.3930345-1-ardb%40kernel.org Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Masahiro Yamada <[email protected]>
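Roughly, the declaration-side change looks like the sketch below (types abbreviated; the actual fallback definitions are emitted from the generic linker script, e.g. as PROVIDE()-style zero/empty symbols, rather than from C):

    /* before: weak references, allowed to stay unsatisfied in the first pass */
    extern const unsigned int kallsyms_num_syms __weak;
    extern const u8 kallsyms_names[] __weak;

    /* after: ordinary strong references; the linker script now provides
     * fallback definitions only if a reference would remain unsatisfied,
     * so the compiler no longer has to assume the address may be NULL. */
    extern const unsigned int kallsyms_num_syms;
    extern const u8 kallsyms_names[];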
|
|
Revision tags: v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5 |
|
| #
76903a96 |
| 25-Aug-2023 |
Yonghong Song <[email protected]> |
kallsyms: Change func signature for cleanup_symbol_name()
None of the users of cleanup_symbol_name() use its return value. So let us change the return value of cleanup_symbol_name() to 'void' to reflect its usage pattern.
Suggested-by: Nick Desaulniers <[email protected]> Signed-off-by: Yonghong Song <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Song Liu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
|
| #
33f0467f |
| 25-Aug-2023 |
Yonghong Song <[email protected]> |
kallsyms: Fix kallsyms_selftest failure
Kernel test robot reported a kallsyms_selftest failure when clang LTO is enabled (thin or full) and CONFIG_KALLSYMS_SELFTEST is also enabled. I can reproduce in my local environment with the following error message with thin LTO:
[    1.877897] kallsyms_selftest: Test for 1750th symbol failed: (tsc_cs_mark_unstable) addr=ffffffff81038090
[    1.877901] kallsyms_selftest: abort
It appears that commit 8cc32a9bbf29 ("kallsyms: strip LTO-only suffixes from promoted global functions") caused the failure. Commit 8cc32a9bbf29 changed cleanup_symbol_name() to strip suffixes based on ".llvm." instead of '.', where ".llvm.<hash>" is appended to a before-LTO-optimization local symbol name. We need to propagate such knowledge in kallsyms_selftest.c as well.
Furthermore, compare_symbol_name() in kallsyms.c needs to change as well. In scripts/kallsyms.c, kallsyms_names and kallsyms_seqs_of_names are used to record the symbol names themselves and the index into the symbol names, respectively. For example:
kallsyms_names:
  ...
  __amd_smn_rw._entry       <== seq 1000
  __amd_smn_rw._entry.5     <== seq 1001
  __amd_smn_rw.llvm.<hash>  <== seq 1002
  ...
kallsyms_seqs_of_names is sorted based on cleanup_symbol_name(), though, so the order in kallsyms_seqs_of_names actually is:
  index 1000: seq 1002  <== __amd_smn_rw.llvm.<hash> (actual symbol comparison using '__amd_smn_rw')
  index 1001: seq 1000  <== __amd_smn_rw._entry
  index 1002: seq 1001  <== __amd_smn_rw._entry.5
Let us say that at a particular point, at index 1000, the symbol '__amd_smn_rw.llvm.<hash>' is compared to '__amd_smn_rw._entry', where '__amd_smn_rw._entry' is the one being searched for, e.g., with function kallsyms_on_each_match_symbol(). The current implementation will find that '__amd_smn_rw._entry' is less than '__amd_smn_rw.llvm.<hash>' and then continue to search, e.g., at index 999, and never find a match although index 1001 is actually a match.
To fix this issue, let us do cleanup_symbol_name() first and then do the comparison. In the above case, comparing '__amd_smn_rw' vs '__amd_smn_rw._entry', with '__amd_smn_rw._entry' being greater than '__amd_smn_rw', the next comparison will be at an index > 1000, and eventually index 1001 will be hit and a match is found.
For any symbols not having a '.llvm.' substring, there is no functional change in compare_symbol_name().
Fixes: 8cc32a9bbf29 ("kallsyms: strip LTO-only suffixes from promoted global functions") Reported-by: kernel test robot <[email protected]> Closes: https://lore.kernel.org/oe-lkp/[email protected] Signed-off-by: Yonghong Song <[email protected]> Reviewed-by: Song Liu <[email protected]> Reviewed-by: Zhen Lei <[email protected]> Link: https://lore.kernel.org/r/[email protected] Cc: [email protected] Signed-off-by: Kees Cook <[email protected]>
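The shape of the fix on the kernel side, sketched (built from the description above and simplified, not copied from the tree):

    static int compare_symbol_name(const char *name, char *namebuf)
    {
            /* Normalise the candidate first: cleanup_symbol_name() strips a
             * trailing ".llvm.<hash>", so the comparison order here matches
             * the order scripts/kallsyms.c used when sorting the names. */
            cleanup_symbol_name(namebuf);

            return strcmp(name, namebuf);
    }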
|
|
Revision tags: v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1 |
|
| #
8cc32a9b |
| 28-Jun-2023 |
Yonghong Song <[email protected]> |
kallsyms: strip LTO-only suffixes from promoted global functions
Commit 6eb4bd92c1ce ("kallsyms: strip LTO suffixes from static functions") stripped all function/variable suffixes starting with '.', regardless of whether those suffixes are generated in LTO mode or not. In fact, as far as I know, in LTO mode, when a static function/variable is promoted to the global scope, a '.llvm.<...>' suffix is added.
The existing mechanism breaks live patching for an LTO kernel even if no <symbol>.llvm.<...> symbols are involved. For example, for the following kernel symbols:
$ grep bpf_verifier_vlog /proc/kallsyms
ffffffff81549f60 t bpf_verifier_vlog
ffffffff8268b430 d bpf_verifier_vlog._entry
ffffffff8282a958 d bpf_verifier_vlog._entry_ptr
ffffffff82e12a1f d bpf_verifier_vlog.__already_done
'bpf_verifier_vlog' is a static function. '_entry', '_entry_ptr' and '__already_done' are static variables used inside 'bpf_verifier_vlog', so llvm promotes them to file-level static with the prefix 'bpf_verifier_vlog.'. Note that this func-level to file-level static promotion also happens without LTO.
Given the symbol name 'bpf_verifier_vlog', with an LTO kernel the current mechanism will return 4 symbols to the live patching subsystem, which it cannot handle. With a non-LTO kernel, only one symbol is returned.
In [1], we had a lengthy discussion; the suggestion is to separate two cases: (1) new symbols with a suffix which are generated regardless of whether LTO is enabled or not, and (2) new symbols with a suffix generated only when LTO is enabled.
The cleanup_symbol_name() should only remove suffixes for case (2). Case (1) should not be changed so it can work uniformly with or without LTO.
This patch removes the LTO-only suffix '.llvm.<...>', so live patching and tracing should work the same way as for a non-LTO kernel. The cleanup_symbol_name() in scripts/kallsyms.c is also changed to have the same filtering pattern, so both the kernel and the kallsyms tool have the same expectation on the order of symbols.
[1] https://lore.kernel.org/live-patching/[email protected]/T/#u
Fixes: 6eb4bd92c1ce ("kallsyms: strip LTO suffixes from static functions") Reported-by: Song Liu <[email protected]> Signed-off-by: Yonghong Song <[email protected]> Reviewed-by: Zhen Lei <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
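A sketch of the narrowed stripping logic (reproduced from the description above rather than from the tree, so details may differ from the merged helper):

    static char *cleanup_symbol_name(char *s)
    {
            char *res;

            /* Strip only the LTO-only ".llvm.<hash>" suffix; leave other
             * '.'-suffixed names (foo._entry, foo.cold, ...) untouched so
             * livepatch and tracing see the same names with or without LTO. */
            res = strstr(s, ".llvm.");
            if (res)
                    *res = '\0';

            return res;
    }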
|
|
Revision tags: v6.4, v6.4-rc7 |
|
| #
33457938 |
| 14-Jun-2023 |
Azeem Shaikh <[email protected]> |
kallsyms: Replace all non-returning strlcpy with strscpy
strlcpy() reads the entire source buffer first. This read may exceed the destination size limit. This is both inefficient and can lead to linear read overflows if a source string is not NUL-terminated [1]. In an effort to remove strlcpy() completely [2], replace strlcpy() here with strscpy(). No return values were used, so direct replacement is safe.
[1] https://www.kernel.org/doc/html/latest/process/deprecated.html#strlcpy [2] https://github.com/KSPP/linux/issues/89
Signed-off-by: Azeem Shaikh <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected]
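The mechanical shape of the change, sketched with an illustrative call site (the buffer and length names are assumed, not taken from the patch):

    /* before: strlcpy() may read past the end of 'name' if it lacks a NUL */
    strlcpy(namebuf, name, KSYM_NAME_LEN);

    /* after: strscpy() bounds both the read and the write; the return value
     * was not used before, so it can continue to be ignored here. */
    strscpy(namebuf, name, KSYM_NAME_LEN);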
|
|
Revision tags: v6.4-rc6 |
|
| #
b06e9318 |
| 08-Jun-2023 |
Maninder Singh <[email protected]> |
kallsyms: move kallsyms_show_value() out of kallsyms.c
The function kallsyms_show_value() is used by other parts such as modules_open() and kprobes_read(), which can also work with !KALLSYMS.
For example, as of now lsmod does not show module addresses if KALLSYMS is disabled: since the kallsyms_show_value() definition is not present, it returns false with !KALLSYMS.
/ # lsmod
test 12288 0 - Live 0x0000000000000000 (O)
So kallsyms_show_value() can be made generic without a dependency on KALLSYMS.
Thus, move the function out to a new file, ksyms_common.c.
With this patch, code is just moved to the new file; there is no functional change.
Co-developed-by: Onkarnath <[email protected]> Signed-off-by: Onkarnath <[email protected]> Signed-off-by: Maninder Singh <[email protected]> Reviewed-by: Zhen Lei <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
|
|
Revision tags: v6.4-rc5, v6.4-rc4 |
|
| #
4f521bab |
| 26-May-2023 |
Maninder Singh <[email protected]> |
kallsyms: remove unused API lookup_symbol_attrs
With commit 7878c231dae0 ("slab: remove /proc/slab_allocators"), the lookup_symbol_attrs() usage was removed.
Thus, remove the redundant API.
Signed-off-by: Maninder Singh <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
|
|
Revision tags: v6.4-rc3 |
|
| #
15d5daa0 |
| 17-May-2023 |
Arnd Bergmann <[email protected]> |
kallsyms: remove unused arch_get_kallsym() helper
The arch_get_kallsym() function was introduced so that x86 could override it, but that override was removed in bf904d2762ee ("x86/pti/64: Remove the SYSCALL64 entry trampoline"), so now this does nothing except cause a warning about a missing prototype:
kernel/kallsyms.c:662:12: error: no previous prototype for 'arch_get_kallsym' [-Werror=missing-prototypes]
  662 | int __weak arch_get_kallsym(unsigned int symnum, unsigned long *value,
Restore the old behavior before d83212d5dd67 ("kallsyms, x86: Export addresses of PTI entry trampolines") to simplify the code and avoid the warning.
Signed-off-by: Arnd Bergmann <[email protected]> Tested-by: Alan Maguire <[email protected]> [mcgrof: fold in bpf selftest fix] Signed-off-by: Luis Chamberlain <[email protected]>
|
|
Revision tags: v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2 |
|
| #
3703bd54 |
| 08-Mar-2023 |
Zhen Lei <[email protected]> |
kallsyms: Delete an unused parameter related to {module_}kallsyms_on_each_symbol()
The parameter 'struct module *' in the hook function associated with {module_}kallsyms_on_each_symbol() is no longer used. Delete it.
Suggested-by: Petr Mladek <[email protected]> Signed-off-by: Zhen Lei <[email protected]> Reviewed-by: Vincenzo Palazzo <[email protected]> Acked-by: Jiri Olsa <[email protected]> Acked-by: Steven Rostedt (Google) <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
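The callback-signature change, sketched (parameter names are illustrative; only the dropped 'struct module *' argument is taken from the description above):

    /* before: every hook received a 'struct module *' argument it never used */
    int (*fn)(void *data, const char *name, struct module *mod, unsigned long addr);

    /* after: the unused parameter is gone from the hooks used with
     * kallsyms_on_each_symbol() and module_kallsyms_on_each_symbol() */
    int (*fn)(void *data, const char *name, unsigned long addr);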
|
|
Revision tags: v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6 |
|
| #
30f3bb09 |
| 15-Nov-2022 |
Zhen Lei <[email protected]> |
kallsyms: Add self-test facility
Add test cases for the basic functionality and performance of kallsyms_lookup_name(), kallsyms_on_each_symbol() and kallsyms_on_each_match_symbol(). Also calculate the compression rate of the kallsyms compression algorithm for the current symbol set.
The basic functions test begins by testing a set of symbols whose address values are known. Then, it traverses all symbol addresses and finds the corresponding symbol name based on the address. It's impossible to determine whether these addresses are correct, but we can use the above three functions along with the addresses to test each other. Because the traversal operation of kallsyms_on_each_symbol() is too slow (only 60 symbols can be tested in one second), let it test on average only once every 128 symbols. The other two functions validate all symbols.
If the basic functions test is passed, print only performance test results. If the test fails, print error information, but do not perform subsequent performance tests.
Start self-test automatically after system startup if CONFIG_KALLSYMS_SELFTEST=y.
Example of output content (the prefix 'kallsyms_selftest:' is omitted):
start
 ---------------------------------------------------------
| nr_symbols | compressed size | original size | ratio(%) |
|---------------------------------------------------------|
|   107543   |     1357912     |    2407433    |   56.40  |
 ---------------------------------------------------------
kallsyms_lookup_name() looked up 107543 symbols
The time spent on each symbol is (ns): min=630, max=35295, avg=7353
kallsyms_on_each_symbol() traverse all: 11782628 ns
kallsyms_on_each_match_symbol() traverse all: 9261 ns
finish
Signed-off-by: Zhen Lei <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
|
|
Revision tags: v6.1-rc5, v6.1-rc4 |
|
| #
4dc533e0 |
| 02-Nov-2022 |
Zhen Lei <[email protected]> |
kallsyms: Add helper kallsyms_on_each_match_symbol()
Function kallsyms_on_each_symbol() traverses all symbols and submits each symbol to the hook 'fn' for judgment and processing. For some cases, the hook actually only handles the matched symbol, such as livepatch.
Because all symbols are currently sorted by name, all the symbols with the same name are clustered together. Function kallsyms_lookup_names() gets the start and end positions of the set corresponding to the specified name. So we can easily and quickly traverse all the matches.
The test results are as follows (twice): (x86)
kallsyms_on_each_match_symbol:     7454,     7984
kallsyms_on_each_symbol      : 11733809, 11785803
kallsyms_on_each_match_symbol() consumes only 0.066% of kallsyms_on_each_symbol()'s time. In other words, 1523x better performance.
Signed-off-by: Zhen Lei <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
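A hedged usage sketch (the callback and wrapper below are made up; only the exported helper is from this patch):

    static int record_addr(void *data, unsigned long addr)
    {
            *(unsigned long *)data = addr;

            return 1;       /* non-zero return stops the iteration early */
    }

    /* visit only the entries whose name matches, instead of all of kallsyms */
    static unsigned long first_addr_of(const char *name)
    {
            unsigned long addr = 0;

            kallsyms_on_each_match_symbol(record_addr, name, &addr);

            return addr;
    }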
|
| #
19bd8981 |
| 02-Nov-2022 |
Zhen Lei <[email protected]> |
kallsyms: Reduce the memory occupied by kallsyms_seqs_of_names[]
kallsyms_seqs_of_names[] records the symbol index sorted by address; the maximum value in kallsyms_seqs_of_names[] is the number of symbols. And 2^24 = 16777216, which means that three bytes are enough to store the index. This can help us save (1 * kallsyms_num_syms) bytes of memory.
Signed-off-by: Zhen Lei <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
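Unpacking a 24-bit entry then looks roughly like this (a sketch; the in-tree accessor may differ in naming and byte order):

    /* each entry of kallsyms_seqs_of_names[] is stored as 3 bytes */
    static unsigned int get_symbol_seq(int index)
    {
            unsigned int i, seq = 0;

            for (i = 0; i < 3; i++)
                    seq = (seq << 8) | kallsyms_seqs_of_names[3 * index + i];

            return seq;
    }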
|
| #
60443c88 |
| 02-Nov-2022 |
Zhen Lei <[email protected]> |
kallsyms: Improve the performance of kallsyms_lookup_name()
Currently, to search for a symbol, we need to expand the symbols in 'kallsyms_names' one by one, and then use the expanded string for comparison. It's O(n).
If we sort names in ascending order like addresses, we can also use binary search. It's O(log(n)).
In order not to change the implementation of "/proc/kallsyms", the table kallsyms_names[] is still stored in a one-to-one correspondence with the address in ascending order.
Add the array kallsyms_seqs_of_names[]; it is indexed by the sequence number of the sorted names, and the corresponding content is the sequence number of the sorted addresses. For example: assume that the index of NameX in kallsyms_seqs_of_names[] is 'i' and the content of kallsyms_seqs_of_names[i] is 'k'; then the corresponding address of NameX is kallsyms_addresses[k], and the offset in kallsyms_names[] is get_symbol_offset(k).
Note that the memory usage will increase by (4 * kallsyms_num_syms) bytes, the next two patches will reduce (1 * kallsyms_num_syms) bytes and properly handle the case CONFIG_LTO_CLANG=y.
Performance test results: (x86)
Before:
 min=234, max=10364402, avg=5206926
 min=267, max=11168517, avg=5207587
After:
 min=1016, max=90894, avg=7272
 min=1014, max=93470, avg=7293
The average lookup performance of kallsyms_lookup_name() improved 715x.
Signed-off-by: Zhen Lei <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
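The lookup then reduces to a binary search over the name-sorted sequence numbers, roughly as below (a sketch built from the description above; expand_symbol() stands in for the real name-decompression helper, and get_symbol_seq() for the kallsyms_seqs_of_names[] accessor):

    static int kallsyms_lookup_names_sketch(const char *name)
    {
            char namebuf[KSYM_NAME_LEN];
            int low = 0, high = kallsyms_num_syms - 1;

            while (low <= high) {
                    int mid = low + (high - low) / 2;
                    unsigned int seq = get_symbol_seq(mid);  /* name order -> address order */
                    int ret;

                    expand_symbol(get_symbol_offset(seq), namebuf);
                    ret = strcmp(name, namebuf);
                    if (ret > 0)
                            low = mid + 1;
                    else if (ret < 0)
                            high = mid - 1;
                    else
                            return seq;     /* index into the address-sorted tables */
            }

            return -ESRCH;
    }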
|
|
Revision tags: v6.1-rc3 |
|
| #
5ebddd7c |
| 28-Oct-2022 |
Peter Zijlstra <[email protected]> |
kallsyms: Revert "Take callthunks into account"
This is a full revert of commit:
f1389181622a ("kallsyms: Take callthunks into account")
The commit assumes a number of things that are not quite right. Notably it assumes every symbol has PADDING_BYTES in front of it that are not claimed by another symbol.
This is not true; even when compiled with:
-fpatchable-function-entry=${PADDING_BYTES},${PADDING_BYTES}
Notably, things like .cold subfunctions do not need to adhere to this change in ABI. It is also not true when built with CFI_CLANG, which claims these PADDING_BYTES in the __cfi_##name symbol.
Once the prefix bytes are not consistent and/or otherwise claimed, the approach this patch takes goes out the window and kallsyms resolution will report invalid symbol names.
Therefore revert this to make room for another approach.
Reported-by: kernel test robot <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Yujie Liu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected]
|
|
Revision tags: v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6 |
|
| #
f1389181 |
| 15-Sep-2022 |
Peter Zijlstra <[email protected]> |
kallsyms: Take callthunks into account
Since the pre-symbol function padding is an integral part of the symbol, make kallsyms report it as part of the symbol by reporting it as sym-x instead of prev_sym+y.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7 |
|
| #
73bbb944 |
| 05-Apr-2021 |
Miguel Ojeda <[email protected]> |
kallsyms: support "big" kernel symbols
Rust symbols can become quite long due to namespacing introduced by modules, types, traits, generics, etc.
Increasing to 255 is not enough in some cases, therefore introduce longer lengths to the symbol table.
In order to avoid increasing all lengths to 2 bytes (since most of them are small, including many Rust ones), use ULEB128 to keep smaller symbols in 1 byte, with the rest in 2 bytes.
Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]> Co-developed-by: Alex Gaynor <[email protected]> Signed-off-by: Alex Gaynor <[email protected]> Co-developed-by: Wedson Almeida Filho <[email protected]> Signed-off-by: Wedson Almeida Filho <[email protected]> Co-developed-by: Gary Guo <[email protected]> Signed-off-by: Gary Guo <[email protected]> Co-developed-by: Boqun Feng <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Co-developed-by: Matthew Wilcox <[email protected]> Signed-off-by: Matthew Wilcox <[email protected]> Signed-off-by: Miguel Ojeda <[email protected]>
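Decoding such a length is the standard ULEB128 loop, sketched below (illustrative; the real kallsyms reader caps this at two bytes, as described above):

    /* 7 payload bits per byte, MSB set means "another byte follows" */
    static unsigned int read_uleb128(const unsigned char *p, unsigned int *consumed)
    {
            unsigned int value = 0, shift = 0, n = 0;

            do {
                    value |= (unsigned int)(p[n] & 0x7f) << shift;
                    shift += 7;
            } while (p[n++] & 0x80);

            *consumed = n;      /* 1 byte for lengths < 128, 2 bytes otherwise */

            return value;
    }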
|
| #
dfb352ab |
| 08-Sep-2022 |
Sami Tolvanen <[email protected]> |
kallsyms: Drop CONFIG_CFI_CLANG workarounds
With -fsanitize=kcfi, the compiler no longer renames static functions with CONFIG_CFI_CLANG + ThinLTO. Drop the code that cleans up the ThinLTO hash from the function names.
Signed-off-by: Sami Tolvanen <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Kees Cook <[email protected]> Tested-by: Kees Cook <[email protected]> Tested-by: Nathan Chancellor <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
71f8c155 |
| 17-May-2022 |
Stephen Brennan <[email protected]> |
kallsyms: move declarations to internal header
Patch series "Expose kallsyms data in vmcoreinfo note".
The kernel can be configured to contain a lot of introspection or debugging information built-in, such as ORC for unwinding stack traces, BTF for type information, and of course kallsyms. Debuggers could use this information to navigate a core dump or live system, but they need to be able to find it.
This patch series adds the necessary symbols into vmcoreinfo, which would allow a debugger to find and interpret the kallsyms table. Using the kallsyms data, the debugger can then lookup any symbol, allowing it to find ORC, BTF, or any other useful data.
This would allow a live kernel, or core dump, to be debugged without any DWARF debuginfo. This is useful for many cases: the debuginfo may not have been generated, or you may not want to deploy the large files everywhere you need them.
I've demonstrated a proof of concept for this at LSF/MM+BPF during a lightning talk. Using a work-in-progress branch of the drgn debugger, and an extended set of BTF generated by a patched version of dwarves, I've been able to open a core dump without any DWARF info and do basic tasks such as enumerating slab caches, block devices, tasks, and doing backtraces. I hope this series can be a first step toward a new possibility of "DWARFless debugging".
Related discussion around the BTF side of this: https://lore.kernel.org/bpf/[email protected]/T/#u
Some work-in-progress branches using this feature: https://github.com/brenns10/dwarves/tree/remove_percpu_restriction_1 https://github.com/brenns10/drgn/tree/kallsyms_plus_btf
This patch (of 2):
To include kallsyms data in the vmcoreinfo note, we must make the symbol declarations visible outside of kallsyms.c. Move these to a new internal header file.
Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Stephen Brennan <[email protected]> Acked-by: Baoquan He <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Dave Young <[email protected]> Cc: Kees Cook <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Stephen Boyd <[email protected]> Cc: Bixuan Cui <[email protected]> Cc: David Vernet <[email protected]> Cc: Vivek Goyal <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
|
| #
647cafa2 |
| 12-Jul-2022 |
Alan Maguire <[email protected]> |
bpf: add a ksym BPF iterator
Add a "ksym" iterator which provides access to a "struct kallsym_iter" for each symbol. The intent is to support more flexible symbol parsing, as discussed in [1].
[1] https://lore.kernel.org/all/YjRPZj6Z8vuLeEZo@krava/
Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Alan Maguire <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
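A rough sketch of a BPF program using the new iterator (modeled on how other bpf_iter programs are commonly written; treat the exact includes and field accesses as assumptions rather than the selftest that accompanied the series):

    // SPDX-License-Identifier: GPL-2.0
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char _license[] SEC("license") = "GPL";

    SEC("iter/ksym")
    int dump_ksym(struct bpf_iter__ksym *ctx)
    {
            struct seq_file *seq = ctx->meta->seq;
            struct kallsym_iter *iter = ctx->ksym;

            if (!iter)
                    return 0;

            /* one "address name" line per symbol, like /proc/kallsyms */
            BPF_SEQ_PRINTF(seq, "%lx %s\n", iter->value, iter->name);

            return 0;
    }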
|
| #
bed0d9a5 |
| 10-May-2022 |
Jiri Olsa <[email protected]> |
ftrace: Add ftrace_lookup_symbols function
Add an ftrace_lookup_symbols function that resolves an array of symbols with a single pass over kallsyms.
The user provides an array of string pointers with a count, and a pointer to an allocated array for the resolved values.
int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *addrs)
It iterates over all kallsyms symbols and tries to look up each one in the provided symbols array with bsearch. The symbols array needs to be sorted by name for this reason.
We also check that each symbol passes ftrace_location(), because this API will be used for fprobe symbol resolving.
Suggested-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
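A hedged caller-side sketch (the symbol list and wrapper are made up; the signature is the one quoted above, and the array must already be sorted by name since the kallsyms walk bsearch()es into it):

    /* resolve a handful of probe targets in a single kallsyms pass */
    static const char *my_syms[] = { "do_exit", "schedule", "vfs_read" };  /* sorted */
    static unsigned long my_addrs[ARRAY_SIZE(my_syms)];

    static int resolve_probe_targets(void)
    {
            /* returns 0 on success; addresses land in my_addrs[] */
            return ftrace_lookup_symbols(my_syms, ARRAY_SIZE(my_syms), my_addrs);
    }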
|
| #
d721def7 |
| 10-May-2022 |
Jiri Olsa <[email protected]> |
kallsyms: Make kallsyms_on_each_symbol generally available
Make kallsyms_on_each_symbol generally available, so it can be used outside the CONFIG_LIVEPATCH option in following changes.
Rather than adding another ifdef option, let's make the function generally available (when the CONFIG_KALLSYMS option is defined).
Cc: Christoph Hellwig <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
|