|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4 |
|
| #
e52c1dc7 |
| 23-Apr-2025 |
Peter Zijlstra <[email protected]> |
x86/its: FineIBT-paranoid vs ITS
FineIBT-paranoid was using the retpoline bytes for the paranoid check, disabling retpolines, because all parts that have IBT also have eIBRS and thus don't need no stinking retpolines.
Except... ITS needs the retpolines, because indirect calls must not be in the first half of a cacheline :-/
So what was the paranoid call sequence:
  <fineibt_paranoid_start>:
   0:  41 ba 78 56 34 12       mov    $0x12345678, %r10d
   6:  45 3b 53 f7             cmp    -0x9(%r11), %r10d
   a:  4d 8d 5b <f0>           lea    -0x10(%r11), %r11
   e:  75 fd                   jne    d <fineibt_paranoid_start+0xd>
  10:  41 ff d3                call   *%r11
  13:  90                      nop
Now becomes:
  <fineibt_paranoid_start>:
   0:  41 ba 78 56 34 12       mov    $0x12345678, %r10d
   6:  45 3b 53 f7             cmp    -0x9(%r11), %r10d
   a:  4d 8d 5b f0             lea    -0x10(%r11), %r11
   e:  2e e8 XX XX XX XX       cs call __x86_indirect_paranoid_thunk_r11
Where the paranoid_thunk looks like:
  1d:  <ea>                    (bad)
  __x86_indirect_paranoid_thunk_r11:
  1e:  75 fd                   jne    1d
  __x86_indirect_its_thunk_r11:
  20:  41 ff eb                jmp    *%r11
  23:  cc                      int3
[ dhansen: remove initialization to false ]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Pawan Gupta <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Reviewed-by: Alexandre Chartre <[email protected]>
|
|
Revision tags: v6.15-rc3, v6.15-rc2 |
|
| #
fe1042b1 |
| 08-Apr-2025 |
Josh Poimboeuf <[email protected]> |
objtool: Split INSN_CONTEXT_SWITCH into INSN_SYSCALL and INSN_SYSRET
INSN_CONTEXT_SWITCH is ambiguous. It can represent both call semantics (SYSCALL, SYSENTER) and return semantics (SYSRET, IRET, ERETS, ERETU). Those differ significantly: calls preserve control flow whereas returns terminate it.
Objtool uses an arbitrary rule for INSN_CONTEXT_SWITCH that almost works by accident: if in a function, keep going; otherwise stop. It should instead be based on the semantics of the underlying instruction.
In preparation for improving that, split INSN_CONTEXT_SWITCH into INSN_SYSCALL and INSN_SYSRET.
No functional change.
Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: Linus Torvalds <[email protected]> Link: https://lore.kernel.org/r/19a76c74d2c051d3bc9a775823cafc65ad267a7a.1744095216.git.jpoimboe@kernel.org
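For illustration, the split boils down to mapping the call-style opcodes and the return-style opcodes to the two new types in the x86 decoder. A minimal sketch (opcode values per the SDM; the surrounding decoder switch is assumed, not quoted from the patch):

	/* two-byte opcodes, after the 0x0f escape byte */
	case 0x05:	/* syscall */
	case 0x34:	/* sysenter */
		insn->type = INSN_SYSCALL;
		break;

	case 0x07:	/* sysret */
	case 0x35:	/* sysexit */
		insn->type = INSN_SYSRET;
		break;

	/* one-byte opcode */
	case 0xcf:	/* iret */
		insn->type = INSN_SYSRET;
		break;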
|
|
Revision tags: v6.15-rc1 |
|
| #
3e7be635 |
| 01-Apr-2025 |
Josh Poimboeuf <[email protected]> |
objtool: Change "warning:" to "error: " for fatal errors
This is similar to GCC's behavior and makes it more obvious why the build failed.
Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: Linus Torvalds <[email protected]> Link: https://lore.kernel.org/r/0ea76f4b0e7a370711ed9f75fd0792bb5979c2bf.1743481539.git.jpoimboe@kernel.org
|
|
Revision tags: v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3 |
|
| #
091bf313 |
| 11-Feb-2025 |
Tiezhu Yang <[email protected]> |
objtool: Handle different entry size of rodata
In most cases, the entry size of rodata is 8 bytes because the relocation type is 64-bit. There are also 32-bit relocation types; in that case the entry size of rodata should be 4 bytes.
Add an arch-specific function arch_reloc_size() to assign the entry size of rodata for x86, powerpc and LoongArch.
Signed-off-by: Tiezhu Yang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Acked-by: Huacai Chen <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]>
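A minimal sketch of what such an x86 arch_reloc_size() amounts to, assuming the usual 32-bit x86-64 relocation types (the exact type list is an assumption, not quoted from the patch):

	unsigned int arch_reloc_size(struct reloc *reloc)
	{
		switch (reloc_type(reloc)) {
		case R_X86_64_32:
		case R_X86_64_32S:
		case R_X86_64_PC32:
		case R_X86_64_PLT32:
			return 4;	/* 32-bit relocation: 4-byte rodata entries */
		default:
			return 8;	/* 64-bit relocation: 8-byte rodata entries */
		}
	}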
|
|
Revision tags: v6.14-rc2 |
|
| #
ab9fea59 |
| 07-Feb-2025 |
Peter Zijlstra <[email protected]> |
x86/alternative: Simplify callthunk patching
Now that paravirt call patching is implemented using alternatives, it is possible to avoid having to patch the alternative sites by including the altinstr_replacement calls in the call_sites list.
This means we're now stacking relative adjustments like so:
  callthunks_patch_builtin_calls():
    patches all function calls to target: func() -> func()-10
    since the CALL accounting lives in the CALL_PADDING.

    This explicitly includes .altinstr_replacement

  alt_replace_call():
    patches: x86_BUG() -> target()

    this patching is done in a relative manner, and will preserve the
    above adjustment, meaning that with calldepth patching it will do:
    x86_BUG()-10 -> target()-10

  apply_relocation():
    does code relocation, and adjusts all RIP-relative instructions
    to the new location, also in a relative manner.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2 |
|
| #
ed1cb76e |
| 04-Oct-2024 |
Josh Poimboeuf <[email protected]> |
objtool: Detect non-relocated text references
When kernel IBT is enabled, objtool detects all text references in order to determine which functions can be indirectly branched to.
In text, such references look like one of the following:
  mov    $0x0,%rax          R_X86_64_32S   .init.text+0x7e0a0
  lea    0x0(%rip),%rax     R_X86_64_PC32  autoremove_wake_function-0x4
Either way the function pointer is denoted by a relocation, so objtool just reads that.
However there are some "lea xxx(%rip)" cases which don't use relocations because they're referencing code in the same translation unit. Objtool doesn't have visibility to those.
The only currently known instances of that are a few hand-coded asm text references which don't actually need ENDBR. So it's not actually a problem at the moment.
However if we enable -fpie, the compiler would start generating them and there would definitely be bugs in the IBT sealing.
Detect non-relocated text references and handle them appropriately.
[ Note: I removed the manual static_call_tramp check -- that should already be handled by the noendbr check. ]
Reported-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]>
|
|
Revision tags: v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5 |
|
| #
8e366d83 |
| 20-Jun-2024 |
Alexandre Chartre <[email protected]> |
objtool/x86: objtool can confuse memory and stack access
The encoding of an x86 instruction can include a ModR/M and a SIB (Scale-Index-Base) byte to describe the addressing mode of the instruction.
objtool processes all addressing modes with a SIB base of 5 as having %rbp as the base register. However, a SIB base of 5 means that the effective address has either no base (if ModR/M mod is zero) or %rbp as the base (if ModR/M mod is 1 or 2). This can cause objtool to confuse an absolute address access with a stack operation.
For example, objtool will see the following instruction:
  4c 8b 24 25 e0 ff ff ff    mov    0xffffffffffffffe0,%r12
as a stack operation (i.e. similar to: mov -0x20(%rbp), %r12).
[Note that this kind of weird absolute address access is added by the compiler when using KASAN.]
If this perceived stack operation happens to reference the location where %r12 was pushed on the stack then the objtool validation will think that %r12 is being restored and this can cause a stack state mismatch.
This kind of behavior was seen in xfs code, after a minor change (convert kmem_alloc() to kmalloc()):
>> fs/xfs/xfs.o: warning: objtool: xfs_da_grow_inode_int+0x6c1: stack state mismatch: reg1[12]=-2-48 reg2[12]=-1+0
Reported-by: kernel test robot <[email protected]> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Signed-off-by: Alexandre Chartre <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Josh Poimboeuf <[email protected]>
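The decode rule at issue can be summarised in a small sketch (generic field names, not objtool's actual code): a SIB base field of 5 only denotes %rbp when ModR/M.mod is non-zero; with mod == 0 there is no base register at all and the disp32 is an absolute address.

	static int effective_base(int modrm_mod, int sib_base)
	{
		/* base == 5, mod == 0: disp32 absolute address, no base register */
		if (sib_base == 5 && modrm_mod == 0)
			return -1;	/* not a stack access via %rbp */

		/* base == 5, mod == 1 or 2: the base really is %rbp */
		return sib_base;
	}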
|
|
Revision tags: v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5 |
|
| #
cd19bab8 |
| 05-Dec-2023 |
H. Peter Anvin (Intel) <[email protected]> |
x86/objtool: Teach objtool about ERET[US]
Update the objtool decoder to know about the ERET[US] instructions (type INSN_CONTEXT_SWITCH).
Signed-off-by: H. Peter Anvin (Intel) <[email protected]> Signed-off-by: Xin Li <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Tested-by: Shan Kang <[email protected]> Link: https://lore.kernel.org/r/[email protected]
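ERETS and ERETU are the FRED event-return instructions; a sketch of what the decoder addition looks like (encodings per the FRED spec, the surrounding switch and variable names are assumptions):

	case 0x01:	/* 0f 01 /r group */
		if (modrm == 0xca && (prefix == 0xf2 || prefix == 0xf3))
			/* f2 0f 01 ca = erets, f3 0f 01 ca = eretu */
			*type = INSN_CONTEXT_SWITCH;
		break;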
|
|
Revision tags: v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5 |
|
| #
758a7430 |
| 01-Aug-2023 |
Ruan Jinjie <[email protected]> |
objtool: Use the 'fallthrough' pseudo-keyword
Replace the existing /* fallthrough */ comments with the new 'fallthrough' pseudo-keyword macro:
https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
Signed-off-by: Ruan Jinjie <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected]
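An illustrative before/after of the conversion (not taken from the patch; handle_a()/handle_b() are hypothetical helpers). The 'fallthrough' pseudo-keyword expands to __attribute__((__fallthrough__)) on compilers that support it:

	/* before */
	switch (c) {
	case 'a':
		handle_a();
		/* fallthrough */	/* old style: a comment */
	case 'b':
		handle_b();
		break;
	}

	/* after */
	switch (c) {
	case 'a':
		handle_a();
		fallthrough;		/* new style: pseudo-keyword */
	case 'b':
		handle_b();
		break;
	}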
|
| #
d025b7ba |
| 14-Aug-2023 |
Peter Zijlstra <[email protected]> |
x86/cpu: Rename original retbleed methods
Rename the original retbleed return thunk and untrain_ret to retbleed_return_thunk() and retbleed_untrain_ret().
No functional changes.
Suggested-by: Josh Poimboeuf <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
d43490d0 |
| 14-Aug-2023 |
Peter Zijlstra <[email protected]> |
x86/cpu: Clean up SRSO return thunk mess
Use the existing configurable return thunk. There is absolutely no justification for having created this __x86_return_thunk alternative.
To clarify, the whole thing looks like:
Zen3/4 does:
  srso_alias_untrain_ret:
	  nop2
	  lfence
	  jmp srso_alias_return_thunk
	  int3

  srso_alias_safe_ret: // aliases srso_alias_untrain_ret just so
	  add $8, %rsp
	  ret
	  int3

  srso_alias_return_thunk:
	  call srso_alias_safe_ret
	  ud2
While Zen1/2 does:
  srso_untrain_ret:
	  movabs $foo, %rax
	  lfence
	  call srso_safe_ret             (jmp srso_return_thunk ?)
	  int3

  srso_safe_ret: // embedded in movabs instruction
	  add $8,%rsp
	  ret
	  int3

  srso_return_thunk:
	  call srso_safe_ret
	  ud2
While retbleed does:
  zen_untrain_ret:
	  test $0xcc, %bl
	  lfence
	  jmp zen_return_thunk
	  int3

  zen_return_thunk: // embedded in the test instruction
	  ret
	  int3
Where Zen1/2 flush the BTB entry using the instruction decoder trick (test,movabs) Zen3/4 use BTB aliasing. SRSO adds a return sequence (srso_safe_ret()) which forces the function return instruction to speculate into a trap (UD2). This RET will then mispredict and execution will continue at the return site read from the top of the stack.
Pick one of three options at boot (every function can only ever return once).
[ bp: Fixup commit message uarch details and add them in a comment in the code too. Add a comment about the srso_select_mitigation() dependency on retbleed_select_mitigation(). Add moar ifdeffery for 32-bit builds. Add a dummy srso_untrain_ret_alias() definition for 32-bit alternatives needing the symbol. ]
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
4ae68b26 |
| 14-Aug-2023 |
Peter Zijlstra <[email protected]> |
objtool/x86: Fix SRSO mess
Objtool --rethunk does two things:
- it collects all (tail) calls of __x86_return_thunk and places them into .return_sites. These are typically compiler generated, but the RET asm macro also emits the same thing.
- it fudges the validation of the __x86_return_thunk symbol; because this symbol is inside another instruction, it can't actually find the instruction pointed to by the symbol offset and gets upset.
Because these two things pertained to the same symbol, there was no pressing need to separate these two separate things.
However, alas, along comes SRSO and more crazy things to deal with appeared.
The SRSO patch itself added the following symbol names to identify as rethunk:
'srso_untrain_ret', 'srso_safe_ret' and '__ret'
Where '__ret' is the old retbleed return thunk, 'srso_safe_ret' is a new similarly embedded return thunk, and 'srso_untrain_ret' is completely unrelated to anything the above does (and was only included because of that INT3 vs UD2 issue fixed previously).
Clear things up by adding a second category for the embedded instruction thing.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1 |
|
| #
fb3bd914 |
| 28-Jun-2023 |
Borislav Petkov (AMD) <[email protected]> |
x86/srso: Add a Speculative RAS Overflow mitigation
Add a mitigation for the speculative return address stack overflow vulnerability found on AMD processors.
The mitigation works by ensuring all RET instructions speculate to a controlled location, similar to how speculation is controlled in the retpoline sequence. To accomplish this, the __x86_return_thunk forces the CPU to mispredict every function return using a 'safe return' sequence.
To ensure the safety of this mitigation, the kernel must ensure that the safe return sequence is itself free from attacker interference. In Zen3 and Zen4, this is accomplished by creating a BTB alias between the untraining function srso_untrain_ret_alias() and the safe return function srso_safe_ret_alias() which results in evicting a potentially poisoned BTB entry and using that safe one for all function returns.
In older Zen1 and Zen2, this is accomplished using a reinterpretation technique similar to the Retbleed one: srso_untrain_ret() and srso_safe_ret().
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
|
|
Revision tags: v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5 |
|
| #
0696b6e3 |
| 30-May-2023 |
Josh Poimboeuf <[email protected]> |
objtool: Get rid of reloc->addend
Get the addend from the embedded GElf_Rel[a] struct.
With allyesconfig + CONFIG_DEBUG_INFO:
  - Before: peak heap memory consumption: 42.10G
  - After:  peak heap memory consumption: 40.37G
Link: https://lore.kernel.org/r/ad2354f95d9ddd86094e3f7687acfa0750657784.1685464332.git.jpoimboe@kernel.org Signed-off-by: Josh Poimboeuf <[email protected]>
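For illustration, reading the addend straight from the ELF record with libelf, rather than keeping a cached copy, looks roughly like this (a sketch, not objtool's code; error handling elided):

	#include <gelf.h>

	static long rela_addend(Elf_Data *rela_data, int index)
	{
		GElf_Rela rela;

		if (!gelf_getrela(rela_data, index, &rela))
			return 0;	/* lookup failed */
		return rela.r_addend;
	}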
|
| #
fcee899d |
| 30-May-2023 |
Josh Poimboeuf <[email protected]> |
objtool: Get rid of reloc->type
Get the type from the embedded GElf_Rel[a] struct.
Link: https://lore.kernel.org/r/d1c1f8da31e4f052a2478aea585fcf355cacc53a.1685464332.git.jpoimboe@kernel.org Signed-off-by: Josh Poimboeuf <[email protected]>
|
|
Revision tags: v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8 |
|
| #
3ee88df1 |
| 08-Feb-2023 |
Peter Zijlstra <[email protected]> |
objtool: Make instruction::stack_ops a single-linked list
  struct instruction {
	  struct list_head           list;                 /*     0    16 */
	  struct hlist_node          hash;                 /*    16    16 */
	  struct list_head           call_node;            /*    32    16 */
	  struct section *           sec;                  /*    48     8 */
	  long unsigned int          offset;               /*    56     8 */
	  /* --- cacheline 1 boundary (64 bytes) --- */
	  unsigned int               len;                  /*    64     4 */
	  enum insn_type             type;                 /*    68     4 */
	  long unsigned int          immediate;            /*    72     8 */
	  u16                        dead_end:1;           /*    80: 0  2 */
	  u16                        ignore:1;             /*    80: 1  2 */
	  u16                        ignore_alts:1;        /*    80: 2  2 */
	  u16                        hint:1;               /*    80: 3  2 */
	  u16                        save:1;               /*    80: 4  2 */
	  u16                        restore:1;            /*    80: 5  2 */
	  u16                        retpoline_safe:1;     /*    80: 6  2 */
	  u16                        noendbr:1;            /*    80: 7  2 */
	  u16                        entry:1;              /*    80: 8  2 */

	  /* XXX 7 bits hole, try to pack */

	  s8                         instr;                /*    82     1 */
	  u8                         visited;              /*    83     1 */

	  /* XXX 4 bytes hole, try to pack */

	  struct alt_group *         alt_group;            /*    88     8 */
	  struct symbol *            call_dest;            /*    96     8 */
	  struct instruction *       jump_dest;            /*   104     8 */
	  struct instruction *       first_jump_src;       /*   112     8 */
	  struct reloc *             jump_table;           /*   120     8 */
	  /* --- cacheline 2 boundary (128 bytes) --- */
	  struct reloc *             reloc;                /*   128     8 */
	  struct list_head           alts;                 /*   136    16 */
	  struct symbol *            sym;                  /*   152     8 */
  -	  struct list_head           stack_ops;            /*   160    16 */
  -	  struct cfi_state *         cfi;                  /*   176     8 */
  +	  struct stack_op *          stack_ops;            /*   160     8 */
  +	  struct cfi_state *         cfi;                  /*   168     8 */

  -	  /* size: 184, cachelines: 3, members: 29 */
  -	  /* sum members: 178, holes: 1, sum holes: 4 */
  +	  /* size: 176, cachelines: 3, members: 29 */
  +	  /* sum members: 170, holes: 1, sum holes: 4 */
	  /* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
  -	  /* last cacheline: 56 bytes */
  +	  /* last cacheline: 48 bytes */
  };

  pre:  5:58.22 real, 226.69 user, 131.22 sys, 26221520 mem
  post: 5:58.50 real, 229.64 user, 128.65 sys, 26221520 mem
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Tested-by: Nathan Chancellor <[email protected]> # build only Tested-by: Thomas Weißschuh <[email protected]> # compile and run Link: https://lore.kernel.org/r/[email protected]
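The shape of the change, per the pahole output above, is replacing the 16-byte list_head with a single forward pointer and walking it directly; a sketch (member layout follows the output above, handle_stack_op() is a hypothetical callee):

	struct stack_op {
		struct op_dest dest;
		struct op_src src;
		struct stack_op *next;	/* was: struct list_head list */
	};

	/* walk all stack ops attached to an instruction */
	for (op = insn->stack_ops; op; op = op->next)
		handle_stack_op(insn, op);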
|
| #
20a55463 |
| 08-Feb-2023 |
Peter Zijlstra <[email protected]> |
objtool: Change arch_decode_instruction() signature
In preparation for changing struct instruction around a bit, avoid passing its members by pointer and instead pass the whole thing.
A cleanup in its own right too.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Tested-by: Nathan Chancellor <[email protected]> # build only Tested-by: Thomas Weißschuh <[email protected]> # compile and run Link: https://lore.kernel.org/r/[email protected]
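Roughly, the prototype goes from passing each decoded member out by pointer to passing the instruction itself (an approximation of the before/after, not quoted from the patch):

	/* before */
	int arch_decode_instruction(struct objtool_file *file, const struct section *sec,
				    unsigned long offset, unsigned int maxlen,
				    unsigned int *len, enum insn_type *type,
				    unsigned long *immediate, struct list_head *ops_list);

	/* after */
	int arch_decode_instruction(struct objtool_file *file, const struct section *sec,
				    unsigned long offset, unsigned int maxlen,
				    struct instruction *insn);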
|
|
Revision tags: v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6 |
|
| #
4ca993d4 |
| 14-Nov-2022 |
Sathvika Vasireddy <[email protected]> |
objtool: Add arch specific function arch_ftrace_match()
Add architecture specific function to look for relocation records pointing to architecture specific symbols.
Suggested-by: Christophe Leroy <[email protected]> Tested-by: Naveen N. Rao <[email protected]> Reviewed-by: Naveen N. Rao <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Signed-off-by: Sathvika Vasireddy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
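On x86 such a hook presumably just matches the compiler-generated mcount symbol; a minimal sketch (an assumption based on the commit description, not quoted from the patch):

	#include <string.h>
	#include <stdbool.h>

	bool arch_ftrace_match(char *name)
	{
		return !strcmp(name, "__fentry__");
	}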
|
|
Revision tags: v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6 |
|
| #
61c6065e |
| 15-Sep-2022 |
Peter Zijlstra <[email protected]> |
objtool: Allow !PC relative relocations
Objtool doesn't currently much like per-cpu usage in alternatives:
  arch/x86/entry/entry_64.o: warning: objtool: .altinstr_replacement+0xf: unsupported relocation in alternatives section
   f: 65 c7 04 25 00 00 00 00 00 00 00 80    movl $0x80000000,%gs:0x0    13: R_X86_64_32S __x86_call_depth
Since the R_X86_64_32S relocation is location invariant (its computation doesn't include P - the address of the location itself), it can be trivially allowed.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
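The distinction being drawn is between relocation types whose computation includes P (the place) and location-invariant ones like R_X86_64_32S; a sketch of such a classification (type list abbreviated, not the actual patch):

	#include <elf.h>
	#include <stdbool.h>

	static bool reloc_is_pc_relative(unsigned int type)
	{
		switch (type) {
		case R_X86_64_PC32:
		case R_X86_64_PLT32:
			return true;	/* S + A - P: depends on the location */
		default:
			return false;	/* e.g. R_X86_64_32S is S + A: location invariant */
		}
	}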
|
|
Revision tags: v6.0-rc5 |
|
| #
7a7621df |
| 07-Sep-2022 |
Peter Zijlstra <[email protected]> |
objtool,x86: Teach decode about LOOP* instructions
When 'discussing' control flow Masami mentioned the LOOP* instructions and I realized objtool doesn't decode them properly.
As it turns out, these instructions are somewhat inefficient and as such unlikely to be emitted by the compiler (a few vmlinux.o checks can't find a single one) so this isn't critical, but still, best to decode them properly.
Reported-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
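LOOP/LOOPE/LOOPNE decrement %rcx and take a rel8 branch when the condition holds, so they decode naturally as conditional jumps; a sketch of the decoder cases (opcode values per the SDM, the surrounding switch is assumed):

	case 0xe0:	/* loopne / loopnz */
	case 0xe1:	/* loope  / loopz  */
	case 0xe2:	/* loop            */
		*type = INSN_JUMP_CONDITIONAL;
		break;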
|
|
Revision tags: v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3 |
|
| #
d9e9d230 |
| 14-Jun-2022 |
Peter Zijlstra <[email protected]> |
x86,objtool: Create .return_sites
Find all the return-thunk sites and record them in a .return_sites section such that the kernel can undo this.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Signed-off-by: Borislav Petkov <[email protected]>
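Each entry in such a section is typically a self-relative 32-bit offset back to the site, which the kernel walks at boot to rewrite the thunk calls; a sketch of that consumption (patch_return_site() is hypothetical):

	void apply_returns(s32 *start, s32 *end)
	{
		s32 *s;

		for (s = start; s < end; s++) {
			void *site = (void *)s + *s;	/* resolve self-relative offset */
			patch_return_site(site);	/* e.g. turn "jmp __x86_return_thunk" into "ret; int3" */
		}
	}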
|
|
Revision tags: v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4 |
|
| #
2daf7fab |
| 18-Apr-2022 |
Josh Poimboeuf <[email protected]> |
objtool: Reorganize cmdline options
Split the existing options into two groups: actions, which actually do something; and options, which modify the actions in some way.
Also there's no need to have short flags for all the non-action options. Reserve short flags for the more important actions.
While at it:
- change a few of the short flags to be more intuitive
- make option descriptions more consistently descriptive
- sort options in the source like they are when printed
- move options to a global struct
Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Miroslav Benes <[email protected]> Link: https://lkml.kernel.org/r/9dcaa752f83aca24b1b21f0b0eeb28a0c181c0b0.1650300597.git.jpoimboe@redhat.com
|
|
Revision tags: v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8 |
|
| #
7d209d13 |
| 08-Mar-2022 |
Peter Zijlstra <[email protected]> |
objtool: Add IBT/ENDBR decoding
Intel IBT requires the target of any indirect CALL or JMP instruction to be the ENDBR instruction; optionally it allows those two instructions to have a NOTRACK prefix in order to avoid this requirement.
The kernel will not enable the use of NOTRACK, as such any occurrence of it in compiler generated code should be flagged.
Teach objtool to decode ENDBR instructions and WARN about NOTRACK prefixes.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
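For reference, endbr64/endbr32 are the F3-prefixed 0f 1e fa / 0f 1e fb byte patterns, and NOTRACK is the 3e prefix on an indirect call/jmp; a sketch of the decode addition (byte patterns per the SDM, the surrounding plumbing is an assumption):

	case 0x1e:
		if (prefix == 0xf3 && (modrm == 0xfa || modrm == 0xfb))
			/* f3 0f 1e fa = endbr64, f3 0f 1e fb = endbr32 */
			*type = INSN_ENDBR;
		break;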
|
|
Revision tags: v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4 |
|
| #
227a0655 |
| 07-Feb-2022 |
Fenghua Yu <[email protected]> |
tools/objtool: Check for use of the ENQCMD instruction in the kernel
The ENQCMD instruction implicitly accesses the PASID_MSR to fill in the pasid field of the descriptor being submitted to an accelerator. But there is no precise (and stable across kernel changes) point at which the PASID_MSR is updated from the value for one task to the next.
Kernel code that uses accelerators must always use the ENQCMDS instruction which does not access the PASID_MSR.
Check for use of the ENQCMD instruction in the kernel and warn on its usage.
Signed-off-by: Fenghua Yu <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Tony Luck <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Peter Zijlstra <[email protected]>
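ENQCMD is the F2-prefixed form of the 0f 38 f8 opcode (ENQCMDS is the F3-prefixed, PASID_MSR-free form the kernel should use); a sketch of what the check amounts to in the decoder (encodings per the SDM, the surrounding switch and WARN plumbing are assumptions):

	case 0xf8:
		if (prefix == 0xf2)	/* f2 0f 38 f8 = enqcmd; f3 0f 38 f8 = enqcmds */
			WARN("ENQCMD instruction at %s:%lx", sec->name, offset);
		break;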
|
| #
6e3133d9 |
| 07-Feb-2022 |
Fenghua Yu <[email protected]> |
tools/objtool: Check for use of the ENQCMD instruction in the kernel
The ENQCMD instruction implicitly accesses the PASID_MSR to fill in the pasid field of the descriptor being submitted to an accelerator. But there is no precise (and stable across kernel changes) point at which the PASID_MSR is updated from the value for one task to the next.
Kernel code that uses accelerators must always use the ENQCMDS instruction which does not access the PASID_MSR.
Check for use of the ENQCMD instruction in the kernel and warn on its usage.
Signed-off-by: Fenghua Yu <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Tony Luck <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|