|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7 |
|
| #
14daa3bc |
| 08-Jan-2025 |
Sebastian Andrzej Siewior <[email protected]> |
x86: Use RCU in all users of __module_address().
__module_address() can be invoked within an RCU section; there is no requirement to have preemption disabled.
Replace the preempt_disable() section around __module_address() with RCU.
Cc: H. Peter Anvin <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Petr Pavlu <[email protected]>
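A minimal sketch of the resulting pattern, assuming the classic rcu_read_lock()/rcu_read_unlock() API (the series may equally use the newer guard(rcu)() helper; the function names here are illustrative):

    #include <linux/module.h>
    #include <linux/rcupdate.h>

    /* Before: preemption disabled solely to keep the module from vanishing. */
    static bool addr_in_module_old(unsigned long addr)
    {
        struct module *mod;

        preempt_disable();
        mod = __module_address(addr);
        preempt_enable();
        return mod != NULL;
    }

    /* After: an RCU read-side critical section is sufficient. */
    static bool addr_in_module_new(unsigned long addr)
    {
        bool ret;

        rcu_read_lock();
        ret = __module_address(addr) != NULL;
        rcu_read_unlock();
        return ret;
    }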
|
| #
ab9fea59 |
| 07-Feb-2025 |
Peter Zijlstra <[email protected]> |
x86/alternative: Simplify callthunk patching
Now that paravirt call patching is implemented using alternatives, it is possible to avoid having to patch the alternative sites by including the altinstr_replacement calls in the call_sites list.
This means we're now stacking relative adjustments like so:
callthunks_patch_builtin_calls():
    patches all function calls to target: func() -> func()-10
    since the CALL accounting lives in the CALL_PADDING.
    This explicitly includes .altinstr_replacement.

alt_replace_call():
    patches: x86_BUG() -> target()
    this patching is done in a relative manner, and will preserve the
    above adjustment, meaning that with calldepth patching it will do:
    x86_BUG()-10 -> target()-10

apply_relocation():
    does code relocation, and adjusts all RIP-relative instructions
    to the new location, also in a relative manner.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Link: https://lore.kernel.org/r/[email protected]
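The "relative manner" above is plain rel32 arithmetic: a 5-byte CALL stores target - (site + 5), so a constant already folded into the displacement survives re-patching. A hedged sketch (illustrative names, not the kernel's):

    #include <stdint.h>

    /* Displacement of a 5-byte E8 CALL at 'site' aimed at 'target'. */
    static int32_t call_rel32(uintptr_t site, uintptr_t target)
    {
        return (int32_t)(target - (site + 5));
    }

    /*
     * Re-pointing an existing CALL by adjusting the stored displacement
     * keeps earlier adjustments intact: a displacement that aimed at
     * func()-10 ends up aiming at new_target-10.
     */
    static int32_t repoint_rel32(int32_t old_disp, uintptr_t old_target,
                                 uintptr_t new_target)
    {
        return old_disp + (int32_t)(new_target - old_target);
    }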
|
|
Revision tags: v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4 |
|
| #
7fa0da53 |
| 17-Oct-2024 |
Juergen Gross <[email protected]> |
x86/xen: remove hypercall page
The hypercall page is no longer needed. It can be removed, as from the Xen perspective it is optional.
But, from Linux's perspective, it removes naked RET instructions that escape the speculative protections that Call Depth Tracking and/or Untrain Ret are trying to achieve.
This is part of XSA-466 / CVE-2024-53241.
Reported-by: Andrew Cooper <[email protected]> Signed-off-by: Juergen Gross <[email protected]> Reviewed-by: Andrew Cooper <[email protected]> Reviewed-by: Jan Beulich <[email protected]>
|
| #
cb33ff9e |
| 05-Dec-2024 |
David Woodhouse <[email protected]> |
x86/kexec: Move relocate_kernel to kernel .data section
Now that the copy is executed instead of the original, the relocate_kernel page can live in the kernel's .data section. This will allow subsequent commits to actually add real data to it and clean up the code somewhat as well as making the control page ROX.
Signed-off-by: David Woodhouse <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: Baoquan He <[email protected]> Cc: Vivek Goyal <[email protected]> Cc: Dave Young <[email protected]> Cc: Eric Biederman <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3 |
|
| #
f796c758 |
| 30-Jan-2024 |
Borislav Petkov (AMD) <[email protected]> |
x86/alternatives: Use a temporary buffer when optimizing NOPs
Instead of optimizing NOPs in-place, use a temporary buffer like the usual alternatives patching flow does. This obviates the need to grab locks when patching, see
6778977590da ("x86/alternatives: Disable interrupts and sync when optimizing NOPs in place")
While at it, add nomenclature definitions clarifying and simplifying the naming of function-local variables in the alternatives code.
Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
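The shape of the change, as a hedged sketch; optimize_nops() and text_poke_early() are real helpers in arch/x86/kernel/alternative.c, but the exact signatures evolved across this series, so treat the details as approximate:

    u8 insn_buff[MAX_PATCH_LEN];

    /* Operate on a private copy instead of the live kernel text... */
    memcpy(insn_buff, instr, len);
    optimize_nops(insn_buff, len);

    /* ...then install the result in one shot; no lock juggling needed. */
    text_poke_early(instr, insn_buff, len);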
|
| #
6a537453 |
| 01-Apr-2024 |
Joan Bruguera Micó <[email protected]> |
x86/bpf: Fix IP for relocating call depth accounting
The commit:
59bec00ace28 ("x86/percpu: Introduce %rip-relative addressing to PER_CPU_VAR()")
made PER_CPU_VAR() use %rip-relative addressing, hence the INCREMENT_CALL_DEPTH macro and skl_call_thunk_template got %rip-relative asm code inside them. A follow-up commit:
17bce3b2ae2d ("x86/callthunks: Handle %rip-relative relocations in call thunk template")
changed x86_call_depth_emit_accounting() to use apply_relocation(), but mistakenly assumed that the code is being patched in-place (where the destination of the relocation matches the address of the code), using *pprog as the destination ip. This is not true for the call depth accounting, emitted by the BPF JIT, so the calculated address was wrong, JIT-ed BPF progs on kernels with call depth tracking got broken and usually caused a page fault.
Pass the destination IP when the BPF JIT emits call depth accounting.
Fixes: 17bce3b2ae2d ("x86/callthunks: Handle %rip-relative relocations in call thunk template") Signed-off-by: Joan Bruguera Micó <[email protected]> Reviewed-by: Uros Bizjak <[email protected]> Acked-by: Ingo Molnar <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
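The underlying rule: a %rip-relative displacement is meaningful only relative to the address the bytes will eventually execute at, not the JIT staging buffer they are written into. A hedged sketch of the distinction (illustrative names):

    /*
     * 'buf' is where the JIT writes bytes now; 'ip' is where those bytes
     * will live once the BPF program runs.  Relocations must use 'ip'.
     */
    static int32_t emit_riprel_disp(uint8_t *buf, uintptr_t ip,
                                    uintptr_t sym, int insn_len)
    {
        (void)buf;        /* staging address: deliberately unused */
        return (int32_t)(sym - (ip + insn_len));
    }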
|
| #
cad860b5 |
| 04-Mar-2024 |
Thomas Gleixner <[email protected]> |
x86/callthunks: Use EXPORT_PER_CPU_SYMBOL_GPL() for per CPU variables
Sparse complains rightfully about the usage of EXPORT_SYMBOL_GPL() for per CPU variables:
callthunks.c:346:20: sparse: warning: incorrect type in initializer (different address spaces)
callthunks.c:346:20: sparse:    expected void const [noderef] __percpu *__vpp_verify
callthunks.c:346:20: sparse:    got unsigned long long *
Use EXPORT_PER_CPU_SYMBOL_GPL() instead.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
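The fix in miniature; DEFINE_PER_CPU() and EXPORT_PER_CPU_SYMBOL_GPL() are the real macros, and the variable matches the one sparse complained about:

    #include <linux/percpu.h>

    DEFINE_PER_CPU(u64, __x86_call_count);

    /* Wrong: treats the per-CPU symbol as a plain pointer, which loses
     * the __percpu address-space annotation and triggers the warning. */
    /* EXPORT_SYMBOL_GPL(__x86_call_count); */

    /* Right: */
    EXPORT_PER_CPU_SYMBOL_GPL(__x86_call_count);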
|
|
Revision tags: v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5 |
|
| #
60bc276b |
| 10-Dec-2023 |
Juergen Gross <[email protected]> |
x86/paravirt: Switch mixed paravirt/alternative calls to alternatives
Instead of stacking alternative and paravirt patching, use the new ALT_FLAG_CALL flag to switch those mixed calls to pure alternative handling.
Eliminate the need to be careful regarding the sequence of alternative and paravirt patching.
[ bp: Touch up commit message. ]
Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
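Conceptually, a site carrying ALT_FLAG_CALL asks the alternatives machinery to turn an indirect paravirt call into a direct CALL whose destination is read from the ops table at patch time. A rough sketch under that assumption; both helpers below are hypothetical stand-ins for logic that lives in alt_replace_call():

    /* sketch only: the real decoding and emission is in alternative.c */
    if (a->flags & ALT_FLAG_CALL) {
        void *target = read_call_target(a);     /* hypothetical helper */

        /* rewrite 'call *ops->fn' into a direct 5-byte E8 call */
        emit_direct_call(instr, target);        /* hypothetical helper */
    }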
|
|
Revision tags: v6.7-rc4 |
|
| #
fc500653 |
| 01-Dec-2023 |
Uros Bizjak <[email protected]> |
x86/callthunks: Correct calculation of dest address in is_callthunk()
GCC didn't warn on the invalid use of relocation destination pointer, so the calculated destination value was applied to the uninitialized pointer location in error.
Fixes: 17bce3b2ae2d ("x86/callthunks: Handle %rip-relative relocations in call thunk template") Reported-by: Nathan Chancellor <[email protected]> Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Closes: https://lore.kernel.org/lkml/[email protected]/ Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.7-rc3, v6.7-rc2, v6.7-rc1 |
|
| #
17bce3b2 |
| 05-Nov-2023 |
Uros Bizjak <[email protected]> |
x86/callthunks: Handle %rip-relative relocations in call thunk template
Contrary to alternatives, relocations are currently not supported in call thunk templates. Re-use the existing infrastructure from alternative.c to allow %rip-relative relocations when copying call thunk template from its storage location.
The patch allows unification of ASM_INCREMENT_CALL_DEPTH, which already uses the PER_CPU_VAR macro, with INCREMENT_CALL_DEPTH, used in the call thunk template, which is currently limited to using an absolute address.
Reuse the existing relocation infrastructure from alternative.c, as suggested by Peter Zijlstra.
Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
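Copying a template that contains %rip-relative instructions means rebasing every displacement, since each one was linked for the template's storage address. The arithmetic, as a hedged sketch:

    /*
     * An instruction moved from 'src' to 'dst' keeps addressing the same
     * absolute location iff its displacement grows by (src - dst):
     *   src + len + disp == dst + len + (disp + (src - dst))
     */
    static int32_t rebase_riprel(int32_t disp, uintptr_t src, uintptr_t dst)
    {
        return disp + (int32_t)(src - dst);
    }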
|
|
Revision tags: v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6 |
|
| #
5c22c472 |
| 09-Jun-2023 |
Hou Wenlong <[email protected]> |
x86/paravirt: Use relative reference for the original instruction offset
Similar to the alternative patching, use a relative reference for original instruction offset rather than absolute one, which saves 8 bytes for one PARA_SITE entry on x86_64. As a result, a R_X86_64_PC32 relocation is generated instead of an R_X86_64_64 one, which also reduces relocation metadata on relocatable builds. Hardcode the alignment to 4 now.
[ bp: Massage commit message. ]
Signed-off-by: Hou Wenlong <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Link: https://lore.kernel.org/r/9e6053107fbaabc0d33e5d2865c5af2c67ec9925.1686301237.git.houwenlong.hwl@antgroup.com
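The saving comes from storing a 32-bit self-relative offset instead of a 64-bit pointer, the same trick struct alt_instr uses. A hedged sketch with illustrative field names:

    /* before: 8-byte absolute pointer, emits an R_X86_64_64 relocation */
    struct para_site_abs {
        u64 instr;
    };

    /* after: 4-byte self-relative offset, emits only R_X86_64_PC32 */
    struct para_site_rel {
        s32 instr_offset;
    };

    static inline void *para_site_addr(struct para_site_rel *s)
    {
        return (void *)&s->instr_offset + s->instr_offset;
    }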
|
| #
321a1451 |
| 14-Oct-2023 |
Alexey Dobriyan <[email protected]> |
x86/callthunks: Delete unused "struct thunk_desc"
It looks like it was never used.
Signed-off-by: Alexey Dobriyan <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/843bf596-db67-4b33-a865-2bae4a4418e5@p183
|
| #
aee9d30b |
| 22-Sep-2023 |
Peter Zijlstra <[email protected]> |
x86,static_call: Fix static-call vs return-thunk
Commit
7825451fa4dc ("static_call: Add call depth tracking support")
failed to realize the problem fixed there is not specific to call depth tracking but applies to all return-thunk uses.
Move the fix to the appropriate place and condition.
Fixes: ee88d363d156 ("x86,static_call: Use alternative RET encoding") Reported-by: David Kaplan <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Tested-by: Borislav Petkov (AMD) <[email protected]> Cc: <[email protected]>
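The heart of the fix: whenever a return thunk is active, a bare RET written by text patching must become a JMP to the thunk. A hedged sketch using helpers that do exist in arch/x86 (the surrounding control flow is simplified):

    static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc };
    const void *code;

    if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
        code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
    else
        code = retinsn;    /* plain C3 RET, INT3-padded */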
|
| #
301cf77e |
| 09-Jun-2023 |
Ingo Molnar <[email protected]> |
x86/orc: Make the is_callthunk() definition depend on CONFIG_BPF_JIT=y
Recent commit:
020126239b8f Revert "x86/orc: Make it callthunk aware"
Made the only user of is_callthunk() depend on CONFIG_BPF_JIT=y, while the definition of the helper function is unconditional.
Move is_callthunk() inside the #ifdef block.
Addresses this build failure:
arch/x86/kernel/callthunks.c:296:13: error: ‘is_callthunk’ defined but not used [-Werror=unused-function]
Signed-off-by: Ingo Molnar <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: [email protected] Cc: Peter Zijlstra <[email protected]>
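The fix in outline: keep the helper's definition under the same conditional as its only remaining caller.

    #ifdef CONFIG_BPF_JIT
    static bool is_callthunk(void *addr)
    {
        /* body elided; compiled only when its sole user exists */
        return false;
    }
    #endif /* CONFIG_BPF_JIT */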
|
|
Revision tags: v6.4-rc5, v6.4-rc4, v6.4-rc3 |
|
| #
02012623 |
| 16-May-2023 |
Josh Poimboeuf <[email protected]> |
Revert "x86/orc: Make it callthunk aware"
Commit 396e0b8e09e8 ("x86/orc: Make it callthunk aware") attempted to deal with the fact that function prefix code didn't have ORC coverage. However, it did
Revert "x86/orc: Make it callthunk aware"
Commit 396e0b8e09e8 ("x86/orc: Make it callthunk aware") attempted to deal with the fact that function prefix code didn't have ORC coverage. However, it didn't work as advertised. Use of the "null" ORC entry just caused affected unwinds to end early.
The root cause has now been fixed with commit 5743654f5e2e ("objtool: Generate ORC data for __pfx code").
Revert most of commit 396e0b8e09e8 ("x86/orc: Make it callthunk aware"). The is_callthunk() function remains as it's now used by other code.
Link: https://lore.kernel.org/r/a05b916ef941da872cbece1ab3593eceabd05a79.1684245404.git.jpoimboe@kernel.org Signed-off-by: Josh Poimboeuf <[email protected]>
|
|
Revision tags: v6.4-rc2 |
|
| #
cded3679 |
| 12-May-2023 |
Thomas Gleixner <[email protected]> |
x86/smpboot: Restrict soft_restart_cpu() to SEV
Now that the CPU0 hotplug cruft is gone, the only user is AMD SEV.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Tested-by: Guilherme G. Piccoli <[email protected]> # Steam Deck Link: https://lore.kernel.org/r/[email protected]
|
| #
666e1156 |
| 12-May-2023 |
Thomas Gleixner <[email protected]> |
x86/smpboot: Rename start_cpu0() to soft_restart_cpu()
This is used in the SEV play_dead() implementation to re-online CPUs. But that has nothing to do with CPU0.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Tested-by: Guilherme G. Piccoli <[email protected]> # Steam Deck Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8 |
|
| #
ac3b4328 |
| 07-Feb-2023 |
Song Liu <[email protected]> |
module: replace module_layout with module_memory
module_layout manages different types of memory (text, data, rodata, etc.) in one allocation, which is problematic for several reasons:
1. It is hard to enable CONFIG_STRICT_MODULE_RWX.
2. It is hard to use huge pages in modules (without breaking strict rwx).
3. Many archs use module_layout for arch-specific data, but it is not obvious how these data are used (are they RO, RX, or RW?).
Improve the scenario by replacing 2 (or 3) module_layout per module with up to 7 module_memory per module:
MOD_TEXT, MOD_DATA, MOD_RODATA, MOD_RO_AFTER_INIT, MOD_INIT_TEXT, MOD_INIT_DATA, MOD_INIT_RODATA,
and allocating them separately. This adds slightly more entries to mod_tree (from up to 3 entries per module, to up to 7 entries per module). However, this at most adds a small constant overhead to __module_address(), which is expected to be fast.
Various archs use module_layout for different data. These data are put into different module_memory based on their location in module_layout. IOW, data that used to go with text is allocated with MOD_MEM_TYPE_TEXT; data that used to go with data is allocated with MOD_MEM_TYPE_DATA, etc.
module_memory simplifies quite some of the module code. For example, ARCH_WANTS_MODULES_DATA_IN_VMALLOC is a lot cleaner, as it just uses a different allocator for the data. kernel/module/strict_rwx.c is also much cleaner with module_memory.
Signed-off-by: Song Liu <[email protected]> Cc: Luis Chamberlain <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Guenter Roeck <[email protected]> Cc: Christophe Leroy <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
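The seven memory types introduced by this commit, with the per-type container trimmed to its essentials (tree bookkeeping elided):

    enum mod_mem_type {
        MOD_TEXT = 0,
        MOD_DATA,
        MOD_RODATA,
        MOD_RO_AFTER_INIT,
        MOD_INIT_TEXT,
        MOD_INIT_DATA,
        MOD_INIT_RODATA,
        MOD_MEM_NUM_TYPES,
    };

    struct module_memory {
        void *base;
        unsigned int size;
        /* mod_tree node elided */
    };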
|
|
Revision tags: v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1 |
|
| #
ade8c208 |
| 15-Dec-2022 |
Arnd Bergmann <[email protected]> |
x86/calldepth: Fix incorrect init section references
The addition of callthunks_translate_call_dest means that skip_addr() and patch_dest() can no longer be discarded as part of the __init section freeing:
WARNING: modpost: vmlinux.o: section mismatch in reference: callthunks_translate_call_dest.cold (section: .text.unlikely) -> skip_addr (section: .init.text)
WARNING: modpost: vmlinux.o: section mismatch in reference: callthunks_translate_call_dest.cold (section: .text.unlikely) -> patch_dest (section: .init.text)
WARNING: modpost: vmlinux.o: section mismatch in reference: is_callthunk.cold (section: .text.unlikely) -> skip_addr (section: .init.text)
ERROR: modpost: Section mismatches detected. Set CONFIG_SECTION_MISMATCH_WARN_ONLY=y to allow them.
Fixes: b2e9dfe54be4 ("x86/bpf: Emit call depth accounting if required") Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
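The general shape of such a fix: a function reachable from regular .text must not live in discarded .init.text, so the annotation comes off. A hedged sketch (signatures approximate):

    /* before: discarded after boot, yet still referenced from .text */
    static __init_or_module void skip_addr(void **addr);

    /* after: kept resident, because callthunks_translate_call_dest()
     * and is_callthunk() can also run after boot */
    static void skip_addr(void **addr);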
|
|
Revision tags: v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6 |
|
| #
ee3e2469 |
| 15-Sep-2022 |
Peter Zijlstra <[email protected]> |
x86/ftrace: Make it call depth tracking aware
Since ftrace has trampolines, don't use thunks for the __fentry__ site but instead require that every function called from there includes accounting. This very much includes all the direct-call functions.
Additionally, ftrace uses ROP tricks in two places:
- return_to_handler(), and
- ftrace_regs_caller() when pt_regs->orig_ax is set by a direct-call.
return_to_handler() already uses a retpoline to replace an indirect-jump to defeat IBT, since this is a jump-type retpoline, make sure there is no accounting done and ALTERNATIVE the RET into a ret.
ftrace_regs_caller() does much the same and gets the same treatment.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
b2e9dfe5 |
| 15-Sep-2022 |
Thomas Gleixner <[email protected]> |
x86/bpf: Emit call depth accounting if required
Ensure that calls in BPF jitted programs are emitting call depth accounting when enabled to keep the call/return balanced. The return thunk jump is already injected due to the earlier retbleed mitigations.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
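Roughly how the JIT keeps call/return accounting balanced; emit_patch() with opcode 0xE8 is the JIT's usual direct-CALL emitter, while the accounting helper's name follows the kernel's but its exact signature here is approximated:

    static int emit_call(u8 **pprog, void *func, void *ip)
    {
        if (IS_ENABLED(CONFIG_CALL_DEPTH_TRACKING)) {
            /* copy the call-depth accounting template ahead of the call */
            ip += x86_call_depth_emit_accounting(pprog, func);
        }
        return emit_patch(pprog, func, ip, 0xE8);
    }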
|
| #
396e0b8e |
| 15-Sep-2022 |
Peter Zijlstra <[email protected]> |
x86/orc: Make it callthunk aware
Callthunks addresses on the stack would confuse the ORC unwinder. Handle them correctly and tell ORC to proceed further down the stack.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
7825451f |
| 15-Sep-2022 |
Peter Zijlstra <[email protected]> |
static_call: Add call depth tracking support
When indirect calls are switched to direct calls then it has to be ensured that the call target is not the function, but the call thunk when call depth tracking is enabled. But static calls are available before call thunks have been set up.
Ensure a second run through the static call patching code after call thunks have been created. When call thunks are not enabled this has no side effects.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
f5c1bb2a |
| 15-Sep-2022 |
Thomas Gleixner <[email protected]> |
x86/calldepth: Add ret/call counting for debug
Add a debugfs mechanism to validate the accounting, e.g. vs. call/ret balance, and to gather statistics about the stuffing-to-call ratio.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
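A hedged sketch of the instrumentation; the per-CPU counter names match this series, but the debugfs read function below is illustrative:

    DEFINE_PER_CPU(u64, __x86_call_count);
    DEFINE_PER_CPU(u64, __x86_ret_count);
    DEFINE_PER_CPU(u64, __x86_stuffs_count);
    DEFINE_PER_CPU(u64, __x86_ctxsw_count);

    /* illustrative: fold the per-CPU counters into one debugfs report */
    static int callthunks_debug_show(struct seq_file *m, void *p)
    {
        int cpu;
        u64 calls = 0, rets = 0, stuffs = 0;

        for_each_possible_cpu(cpu) {
            calls  += per_cpu(__x86_call_count, cpu);
            rets   += per_cpu(__x86_ret_count, cpu);
            stuffs += per_cpu(__x86_stuffs_count, cpu);
        }
        seq_printf(m, "calls: %llu rets: %llu stuffs: %llu\n",
                   calls, rets, stuffs);
        return 0;
    }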
|
| #
bbaceb18 |
| 15-Sep-2022 |
Thomas Gleixner <[email protected]> |
x86/retbleed: Add SKL call thunk
Add the actual SKL call thunk for call depth accounting.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|