|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5 |
|
| #
0c3beacf |
| 23-Oct-2024 |
Mike Rapoport (Microsoft) <[email protected]> |
asm-generic: introduce text-patching.h
Several architectures support text patching, but they name the header files that declare patching functions differently.
Make all such headers consistently named text-patching.h and add an empty header in asm-generic for architectures that do not support text patching.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport (Microsoft) <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> # m68k Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Tested-by: kdevops <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Borislav Petkov (AMD) <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Dinh Nguyen <[email protected]> Cc: Guo Ren <[email protected]> Cc: Helge Deller <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Johannes Berg <[email protected]> Cc: John Paul Adrian Glaubitz <[email protected]> Cc: Kent Overstreet <[email protected]> Cc: Liam R. Howlett <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Masami Hiramatsu (Google) <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Russell King <[email protected]> Cc: Song Liu <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Steven Rostedt (Google) <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Uladzislau Rezki (Sony) <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
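As a rough illustration of what the asm-generic fallback amounts to, a minimal sketch following the usual include-guard convention (guard name assumed, not quoted from the patch):

    /* include/asm-generic/text-patching.h -- sketch; deliberately empty
     * so common code can #include <asm/text-patching.h> even on
     * architectures that do not support text patching. */
    #ifndef _ASM_GENERIC_TEXT_PATCHING_H
    #define _ASM_GENERIC_TEXT_PATCHING_H
    #endif /* _ASM_GENERIC_TEXT_PATCHING_H */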
|
|
Revision tags: v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3 |
|
| #
f796c758 |
| 30-Jan-2024 |
Borislav Petkov (AMD) <[email protected]> |
x86/alternatives: Use a temporary buffer when optimizing NOPs
Instead of optimizing NOPs in-place, use a temporary buffer like the usual alternatives patching flow does. This obviates the need to grab locks when patching, see
6778977590da ("x86/alternatives: Disable interrupts and sync when optimizing NOPs in place")
While at it, add nomenclature definitions clarifying and simplifying the naming of function-local variables in the alternatives code.
Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
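A minimal sketch of the buffered flow described above (variable names and the use of MAX_PATCH_LEN are illustrative, not the exact patch):

    u8 buf[MAX_PATCH_LEN];

    memcpy(buf, instr, len);          /* snapshot the live instructions   */
    optimize_nops(buf, len);          /* rewrite NOPs in the private copy */
    text_poke_early(instr, buf, len); /* publish in one step, instead of
                                         disabling interrupts and syncing
                                         while editing the live text      */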
|
|
Revision tags: v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5 |
|
| #
f7af6977 |
| 10-Dec-2023 |
Juergen Gross <[email protected]> |
x86/paravirt: Remove no longer needed paravirt patching code
Now that paravirt is using the alternatives patching infrastructure, remove the paravirt patching code.
Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.7-rc4 |
|
| #
6724ba89 |
| 30-Nov-2023 |
Ingo Molnar <[email protected]> |
x86/callthunks: Mark apply_relocation() as __init_or_module
Do it like the rest of the methods using it.
Signed-off-by: Ingo Molnar <[email protected]> Cc: Uros Bizjak <[email protected]> Link: https://lore.kernel.org/r/[email protected]
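For illustration, the annotation pattern looks roughly like this (patch_helper is a hypothetical stand-in, not the real apply_relocation signature):

    #include <linux/init.h>         /* __init_or_module */

    /* With CONFIG_MODULES=n this expands to __init and the function is
     * discarded after boot; with module support it stays resident,
     * since modules need their text patched at load time too. */
    static void __init_or_module noinline patch_helper(u8 *buf, size_t len)
    {
            /* ... boot- or module-load-time patching ... */
    }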
|
|
Revision tags: v6.7-rc3, v6.7-rc2, v6.7-rc1 |
|
| #
17bce3b2 |
| 05-Nov-2023 |
Uros Bizjak <[email protected]> |
x86/callthunks: Handle %rip-relative relocations in call thunk template
Contrary to alternatives, relocations are currently not supported in call thunk templates. Re-use the existing infrastructure from alternative.c to allow %rip-relative relocations when copying a call thunk template from its storage location.
The patch allows unification of ASM_INCREMENT_CALL_DEPTH, which already uses the PER_CPU_VAR macro, with INCREMENT_CALL_DEPTH, used in the call thunk template, which is currently limited to using an absolute address.
Reuse the existing relocation infrastructure from alternative.c, as suggested by Peter Zijlstra.
Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
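The core of such a fix-up is displacement re-biasing; a sketch under assumed names (tmpl, dest, disp_off are hypothetical):

    /* A %rip-relative operand encodes target - end_of_insn, so an
     * instruction copied from @tmpl to @dest must have its s32
     * displacement re-biased by the distance the code moved. */
    s32 disp = *(s32 *)(tmpl + disp_off);        /* as stored in template */
    *(s32 *)(dest + disp_off) = disp + (s32)(tmpl - dest);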
|
|
Revision tags: v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6 |
|
| #
db7adcfd |
| 23-Jan-2023 |
Peter Zijlstra <[email protected]> |
x86/alternatives: Introduce int3_emulate_jcc()
Move the kprobe Jcc emulation into int3_emulate_jcc() so it can be used by more code -- specifically static_call() will need this.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
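A simplified sketch of the emulation (eval_condition is a hypothetical helper; the real code tests EFLAGS bits directly):

    /* Jcc condition codes come in predicate pairs: the upper bits of
     * the cc select a flags predicate, the low bit inverts it.
     * "Taken" means the emulated ip advances by the displacement. */
    bool match = eval_condition(regs->flags, cc >> 1);

    if (match ^ (cc & 1))
            ip += disp;
    int3_emulate_jmp(regs, ip);     /* resume at the computed target */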
|
|
Revision tags: v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6 |
|
| #
fe54d079 |
| 15-Sep-2022 |
Thomas Gleixner <[email protected]> |
x86/alternatives: Provide text_poke_copy_locked()
The upcoming call thunk patching must hold text_mutex and needs access to text_poke_copy(), which takes text_mutex.
Provide a _locked postfixed variant to expose the inner workings.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
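The resulting split follows the common _locked pattern; a sketch (signatures approximate):

    void *text_poke_copy(void *addr, const void *opcode, size_t len)
    {
            mutex_lock(&text_mutex);
            addr = text_poke_copy_locked(addr, opcode, len);
            mutex_unlock(&text_mutex);
            return addr;
    }
    /* Callers that already hold text_mutex (the call thunk patching)
     * use text_poke_copy_locked() directly. */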
|
|
Revision tags: v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18 |
|
| #
aadd1b67 |
| 20-May-2022 |
Song Liu <[email protected]> |
x86/alternative: Introduce text_poke_set
Introduce a memset-like API for text_poke. This will be used to fill the unused RX memory with illegal instructions.
Suggested-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
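Illustrative use, mirroring memset(dst, c, len) (the range variable names are hypothetical):

    /* Fill a now-unused RX range with INT3 (0xcc) so a stray jump into
     * it traps instead of executing leftover bytes. */
    text_poke_set(unused_start, 0xcc, unused_len);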
|
|
Revision tags: v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8 |
|
| #
ba27d1a8 |
| 08-Mar-2022 |
Peter Zijlstra <[email protected]> |
x86/ibt,paravirt: Use text_gen_insn() for paravirt_patch()
Less duplication is more better.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
bbf92368 |
| 08-Mar-2022 |
Peter Zijlstra <[email protected]> |
x86/text-patching: Make text_gen_insn() play nice with ANNOTATE_NOENDBR
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3 |
|
| #
0e06b403 |
| 04-Feb-2022 |
Song Liu <[email protected]> |
x86/alternative: Introduce text_poke_copy
This will be used by the BPF JIT compiler to dump a JITed binary to an RX huge page, thus allowing multiple BPF programs to share a huge (2MB) page.
Signed-off-by: Song Liu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
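Sketch of the intended call from a JIT (rx_image, jit_buf, prog_len are hypothetical names):

    /* The destination huge page is mapped RX and never writable, so
     * the JIT cannot memcpy() into it; text_poke_copy() writes through
     * a temporary writable alias instead. */
    text_poke_copy(rx_image, jit_buf, prog_len);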
|
|
Revision tags: v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2 |
|
| #
c43a43e4 |
| 18-Aug-2020 |
Peter Zijlstra <[email protected]> |
x86/alternatives: Teach text_poke_bp() to emulate RET
Future patches will need to poke a RET instruction; provide the infrastructure required for this.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Reviewed-by: Steven Rostedt (VMware) <[email protected]> Cc: Masami Hiramatsu <[email protected]> Link: https://lore.kernel.org/r/[email protected]
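Emulating RET composes two helpers this header already has; roughly:

    static __always_inline void int3_emulate_ret(struct pt_regs *regs)
    {
            unsigned long ip = int3_emulate_pop(regs); /* return address */
            int3_emulate_jmp(regs, ip);                /* resume there   */
    }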
|
|
Revision tags: v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5 |
|
| #
4979fb53 |
| 21-Jan-2020 |
Thomas Gleixner <[email protected]> |
x86/int3: Ensure that poke_int3_handler() is not traced
In order to ensure poke_int3_handler() is completely self contained -- this is called while modifying other text, imagine the fun of hitting another INT3 -- ensure that everything it uses is not traced.
The primary means here is to force inlining; bsearch() is notrace because all of lib/ is.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Alexandre Chartre <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
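The shape of the rule, as a sketch (the helper name is hypothetical):

    #include <linux/compiler.h>     /* notrace, __always_inline */
    #include <asm/ptrace.h>         /* struct pt_regs */

    static __always_inline bool helper_used_by_handler(void *key)
    {
            return key != NULL;     /* forced inline: no traceable call */
    }

    static int notrace poke_int3_handler(struct pt_regs *regs)
    {
            /* ... may only use notrace or force-inlined helpers ... */
            return 0;
    }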
|
|
Revision tags: v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3 |
|
| #
5c02ece8 |
| 09-Oct-2019 |
Peter Zijlstra <[email protected]> |
x86/kprobes: Fix ordering while text-patching
Kprobes does something like:
register:
        arch_arm_kprobe()
          text_poke(INT3)
          /* guarantees nothing, INT3 will become visible at some point, maybe */

        kprobe_optimizer()
          /* guarantees the bytes after INT3 are unused */
          synchronize_rcu_tasks();
          text_poke_bp(JMP32);
          /* implies IPI-sync, kprobe really is enabled */

unregister:
        __disarm_kprobe()
          unoptimize_kprobe()
            text_poke_bp(INT3 + tail);
            /* implies IPI-sync, so tail is guaranteed visible */

          arch_disarm_kprobe()
            text_poke(old);
            /* guarantees nothing, old will maybe become visible */

        synchronize_rcu()

        free-stuff
Now the problem is that, on register, the synchronize_rcu_tasks() is not sufficient to guarantee that all CPUs have already observed the INT3 (although in practice this is exceedingly unlikely not to have happened), similar to how MEMBARRIER_CMD_PRIVATE_EXPEDITED does not imply MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE.
Worse, even if it did, we'd have to do 2 synchronize calls to provide the guarantee we're looking for, the first to ensure INT3 is visible, the second to guarantee nobody is then still using the instruction bytes after INT3.
Similar on unregister; the synchronize_rcu() between __unregister_kprobe_top() and __unregister_kprobe_bottom() does not guarantee all CPUs are free of the INT3 (and observe the old text).
Therefore, sprinkle some IPI-sync love around. This guarantees that all CPUs agree on the text and RCU once again provides the required guarantees.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Mathieu Desnoyers <[email protected]> Acked-by: Masami Hiramatsu <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
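The "IPI-sync love" boils down to a helper that forces a serializing event on every CPU; roughly:

    static void do_sync_core(void *info)
    {
            sync_core();            /* serializing instruction on this CPU */
    }

    void text_poke_sync(void)
    {
            on_each_cpu(do_sync_core, NULL, 1); /* IPI all CPUs and wait */
    }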
|
| #
ab09e95c |
| 09-Oct-2019 |
Peter Zijlstra <[email protected]> |
x86/kprobes: Convert to text-patching.h
Convert kprobes to the new text-poke naming.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Masami Hiramatsu <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
| #
67c1d4a2 |
| 09-Oct-2019 |
Peter Zijlstra <[email protected]> |
x86/ftrace: Use text_gen_insn()
Replace the ftrace_code_union with the generic text_gen_insn() helper, which does exactly this.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
| #
254d2c04 |
| 09-Oct-2019 |
Peter Zijlstra <[email protected]> |
x86/alternative: Add text_opcode_size()
Introduce a common helper to map *_INSN_OPCODE to *_INSN_SIZE.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
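A simplified sketch of the mapping (the opcode/size constants come from the x86 text-patching header; the real helper builds the cases with a small macro):

    static __always_inline int text_opcode_size(u8 opcode)
    {
            switch (opcode) {
            case INT3_INSN_OPCODE:  return INT3_INSN_SIZE;  /* 0xcc, 1 byte  */
            case CALL_INSN_OPCODE:  return CALL_INSN_SIZE;  /* 0xe8, 5 bytes */
            case JMP32_INSN_OPCODE: return JMP32_INSN_SIZE; /* 0xe9, 5 bytes */
            case JMP8_INSN_OPCODE:  return JMP8_INSN_SIZE;  /* 0xeb, 2 bytes */
            default:                return 0;
            }
    }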
|
|
Revision tags: v5.4-rc2 |
|
| #
63f62add |
| 03-Oct-2019 |
Peter Zijlstra <[email protected]> |
x86/alternatives: Add and use text_gen_insn() helper
Provide a simple helper function to create common instruction encodings.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel Bristot de Oliveira <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
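The heart of the helper is "opcode plus rel32" encoding; a sketch that is close to, but not verbatim, the kernel code:

    union text_poke_insn {
            u8 text[POKE_MAX_OPCODE_SIZE];
            struct {
                    u8  opcode;
                    s32 disp;
            } __attribute__((packed));
    };

    /* Encode e.g. a CALL/JMP32 at @addr targeting @dest: the s32
     * displacement is relative to the end of the instruction. */
    insn.opcode = opcode;
    if (text_opcode_size(opcode) > 1)
            insn.disp = (long)dest - (long)(addr + text_opcode_size(opcode));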
|
|
Revision tags: v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7 |
|
| #
18cbc8be |
| 26-Aug-2019 |
Peter Zijlstra <[email protected]> |
x86/alternatives, jump_label: Provide better text_poke() batching interface
Adding another text_poke_bp_batch() user made me realize the interface is all sorts of wrong. The text poke vector should be internal to the implementation.
This then results in a trivial interface:
  text_poke_queue()  - which has the 'normal' text_poke_bp() interface
  text_poke_finish() - which takes no arguments and flushes any pending text_poke()s.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Reviewed-by: Daniel Bristot de Oliveira <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
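Sketch of the resulting calling convention (the loop construct and site fields are hypothetical):

    for_each_patch_site(site) {
            /* same arguments as text_poke_bp(), but only queued */
            text_poke_queue(site->addr, site->insn, site->len, NULL);
    }
    text_poke_finish();   /* one INT3 batch flushes everything queued */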
|
| #
8f4a4160 |
| 03-Sep-2019 |
Peter Zijlstra <[email protected]> |
x86/alternatives: Update int3_emulate_push() comment
Update the comment now that we've merged x86_32 support.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
|
Revision tags: v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |
|
| #
c3d6324f |
| 05-Jun-2019 |
Peter Zijlstra <[email protected]> |
x86/alternatives: Teach text_poke_bp() to emulate instructions
In preparation for static_call and variable size jump_label support, teach text_poke_bp() to emulate instructions, namely:
JMP32, JMP8, CALL, NOP2, NOP_ATOMIC5, INT3
The current text_poke_bp() takes a @handler argument which is used as a jump target when the temporary INT3 is hit by a different CPU.
When patching CALL instructions, this doesn't work because we'd miss the PUSH of the return address. Instead, teach poke_int3_handler() to emulate an instruction, typically the instruction we're patching in.
This fits almost all text_poke_bp() users, except arch_unoptimize_kprobe() which restores random text, and for that site we have to build an explicit emulate instruction.
Tested-by: Alexei Starovoitov <[email protected]> Tested-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Reviewed-by: Daniel Bristot de Oliveira <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]> (cherry picked from commit 8c7eebc10687af45ac8e40ad1bac0cf7893dba9f) Signed-off-by: Alexei Starovoitov <[email protected]>
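Sketch of the new calling convention (addr and new_call are hypothetical names):

    /* Patch in a 5-byte CALL.  While the transient INT3 is in place,
     * poke_int3_handler() emulates the new instruction -- including
     * the PUSH of the return address, which the old jump-to-@handler
     * scheme could not provide. */
    text_poke_bp(addr, new_call, CALL_INSN_SIZE, NULL);
    /* NULL: emulate @opcode itself; arch_unoptimize_kprobe() instead
     * passes an explicit instruction to emulate. */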
|
| #
32b1cbe3 |
| 02-Sep-2019 |
Marco Ammon <[email protected]> |
x86: Correct misc typos
Correct spelling typos in comments in different files under arch/x86/.
[ bp: Merge into a single patch, massage. ]
Signed-off-by: Marco Ammon <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Daniel Bristot de Oliveira <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Nadav Amit <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Pu Wen <[email protected]> Cc: Rick Edgecombe <[email protected]> Cc: "Steven Rostedt (VMware)" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
|
|
Revision tags: v5.2-rc3, v5.2-rc2, v5.2-rc1 |
|
| #
faeedb06 |
| 08-May-2019 |
Peter Zijlstra <[email protected]> |
x86/stackframe/32: Allow int3_emulate_push()
Now that x86_32 has an unconditional gap on the kernel stack frame, the int3_emulate_push() thing will work without further changes.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
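With the gap present, the push emulation can simply grow the interrupted stack; roughly:

    static __always_inline
    void int3_emulate_push(struct pt_regs *regs, unsigned long val)
    {
            regs->sp -= sizeof(unsigned long);  /* extend into the gap  */
            *(unsigned long *)regs->sp = val;   /* store e.g. a return
                                                   address for CALL     */
    }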
|
| #
c0213b0a |
| 12-Jun-2019 |
Daniel Bristot de Oliveira <[email protected]> |
x86/alternative: Batch of patch operations
Currently, the patch of an address is done in three steps:
-- Pseudo-code #1 - Current implementation ---
1) add an int3 trap to the address that will be patched
   sync cores (send IPI to all other CPUs)
2) update all but the first byte of the patched range
   sync cores (send IPI to all other CPUs)
3) replace the first byte (int3) by the first byte of replacing opcode
   sync cores (send IPI to all other CPUs)
-- Pseudo-code #1 ---
When a static key has more than one entry, these steps are called once for each entry. The number of IPIs then is linear with regard to the number 'n' of entries of a key: O(n*3), which is O(n).
This algorithm works fine for the update of a single key. But we think it is possible to optimize the case in which a static key has more than one entry. For instance, the sched_schedstats jump label has 56 entries in my (updated) fedora kernel, resulting in 168 IPIs for each CPU in which the thread that is enabling the key is _not_ running.
With this patch, rather than receiving a single patch to be processed, a vector of patches is passed, enabling the rewrite of the pseudo-code #1 in this way:
-- Pseudo-code #2 - This patch ---
1) for each patch in the vector:
       add an int3 trap to the address that will be patched
   sync cores (send IPI to all other CPUs)
2) for each patch in the vector:
       update all but the first byte of the patched range
   sync cores (send IPI to all other CPUs)
3) for each patch in the vector:
       replace the first byte (int3) by the first byte of replacing opcode
   sync cores (send IPI to all other CPUs)
-- Pseudo-code #2 - This patch ---
Doing the update in this way, the number of IPIs becomes O(3) with regard to the number of keys, which is O(1).
The batch mode is done with the function text_poke_bp_batch(), which receives two arguments: a vector of "struct text_to_poke", and the number of entries in the vector.
The vector must be sorted by the addr field of the text_to_poke structure, enabling the binary search of a handler in the poke_int3_handler function (a fast path).
Signed-off-by: Daniel Bristot de Oliveira <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Chris von Recklinghausen <[email protected]> Cc: Clark Williams <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Jason Baron <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marcelo Tosatti <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Scott Wood <[email protected]> Cc: Steven Rostedt (VMware) <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/ca506ed52584c80f64de23f6f55ca288e5d079de.1560325897.git.bristot@redhat.com Signed-off-by: Ingo Molnar <[email protected]>
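Sketch of the batch usage (MAX_SITES, nr and patch_cmp are assumed names):

    struct text_to_poke tp[MAX_SITES];

    /* The vector must be sorted by tp[i].addr: poke_int3_handler()
     * bsearch()es it when another CPU hits a transient INT3. */
    sort(tp, nr, sizeof(tp[0]), patch_cmp, NULL);
    text_poke_bp_batch(tp, nr);          /* three sync rounds total */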
|
| #
693713cb |
| 11-May-2019 |
Steven Rostedt (VMware) <[email protected]> |
x86: Hide the int3_emulate_call/jmp functions from UML
User Mode Linux does not have access to the ip or sp fields of the pt_regs, and accessing them causes UML to fail to build. Hide the int3_emulate_jmp() and int3_emulate_call() functions from UML, as it doesn't need them anyway.
Reported-by: kbuild test robot <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
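The guard looks roughly like this (the emulation helpers dereference regs->ip and regs->sp, which UML's pt_regs does not provide):

    #ifndef CONFIG_UML_X86
    static __always_inline
    void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
    {
            regs->ip = ip;
    }
    /* int3_emulate_call() is hidden the same way */
    #endif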
|