|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5 |
|
| #
a75bf27f |
| 22-Jun-2024 |
Pawan Gupta <[email protected]> |
x86/its: Add support for ITS-safe return thunk
RETs in the lower half of a cacheline may be affected by the ITS bug, specifically when the RSB underflows. Use the ITS-safe return thunk for such RETs.
RETs that are not patched:
- RET in retpoline sequence does not need to be patched, because the sequence itself fills an RSB before RET.
- RET in Call Depth Tracking (CDT) thunks __x86_indirect_{call|jump}_thunk and call_depth_return_thunk are not patched because CDT by design prevents RSB-underflow.
- RETs in .init section are not reachable after init.
- RETs that are explicitly marked safe with ANNOTATE_UNRET_SAFE.
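For illustration only (not part of the patch): a minimal sketch of the "lower half of a cacheline" condition described above, assuming 64-byte cachelines; the helper name and exact cut-off are illustrative, not the kernel's.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: a return site would be a candidate for the
     * ITS-safe return thunk when the RET starts in the first (lower)
     * half of its 64-byte cacheline, i.e. at an offset below 32. */
    static inline bool ret_in_lower_half_of_cacheline(uint64_t ret_addr)
    {
        return (ret_addr & 0x3f) < 0x20;
    }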
Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
|
| #
8754e67a |
| 22-Jun-2024 |
Pawan Gupta <[email protected]> |
x86/its: Add support for ITS-safe indirect thunk
Due to ITS, indirect branches in the lower half of a cacheline may be vulnerable to a branch target injection attack.
Introduce ITS-safe thunks and patch indirect branches in the lower half of a cacheline to use them. Also thunk any eBPF-generated indirect branches in emit_indirect_jump().
The following categories of indirect branches are not mitigated:
- Indirect branches in the .init section are not mitigated because they are discarded after boot.
- Indirect branches that are explicitly marked retpoline-safe.
Note that retpoline also mitigates indirect branches against ITS. This is because the retpoline sequence fills an RSB entry before RET, so it does not suffer from the RSB-underflow part of ITS.
Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
|
| #
00a241f5 |
| 17-Apr-2025 |
Guenter Roeck <[email protected]> |
x86: disable image size check for test builds
64-bit allyesconfig builds fail with
x86_64-linux-ld: kernel image bigger than KERNEL_IMAGE_SIZE
Bisect points to commit 6f110a5e4f99 ("Disable SLUB_TINY for build testing") as the responsible commit. Reverting that patch does indeed fix the problem. Further analysis shows that disabling SLUB_TINY enables KASAN, and that KASAN is responsible for the image size increase.
Solve the build problem by disabling the image size check for test builds.
[[email protected]: add comment, fix nearby typo (sink->sync)]
[[email protected]: fix comment snafu]
Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 6f110a5e4f99 ("Disable SLUB_TINY for build testing")
Signed-off-by: Guenter Roeck <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vincenzo Frascino <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
| #
6d536cad |
| 04-Mar-2025 |
Uros Bizjak <[email protected]> |
x86/percpu: Fix __per_cpu_hot_end marker
Make __per_cpu_hot_end marker point to the end of the percpu cache hot data, not to the end of the percpu cache hot section.
This fixes CONFIG_MPENTIUM4 case where X86_L1_CACHE_SHIFT is set to 7 (128 bytes).
Also update assert message accordingly.
Reported-by: Ingo Molnar <[email protected]>
Signed-off-by: Uros Bizjak <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Brian Gerst <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Closes: https://lore.kernel.org/lkml/[email protected]/
|
| #
a1e4cc01 |
| 03-Mar-2025 |
Brian Gerst <[email protected]> |
x86/percpu: Move current_task to percpu hot section
No functional change.
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
385f72c8 |
| 03-Mar-2025 |
Brian Gerst <[email protected]> |
x86/percpu: Move top_of_stack to percpu hot section
No functional change.
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
972f9cdf |
| 03-Mar-2025 |
Brian Gerst <[email protected]> |
x86/percpu: Move pcpu_hot to percpu hot section
Also change the alignment of the percpu hot section:
-	PERCPU_SECTION(INTERNODE_CACHE_BYTES)
+	PERCPU_SECTION(L1_CACHE_BYTES)
As vSMP will muck with INTERNODE_CACHE_BYTES, which would invalidate the too-large-section assert we do:
ASSERT(__per_cpu_hot_end - __per_cpu_hot_start <= 64, "percpu cache hot section too large")
[ mingo: Added INTERNODE_CACHE_BYTES fix & explanation. ]
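For illustration only (not the kernel's macros or section names): a compilable stand-in for the idea above, grouping the cache-hot data into one dedicated, cacheline-aligned section and enforcing the same 64-byte budget that the linker ASSERT checks.

    /* Illustrative only: struct members, names and the section string
     * are made up for the sketch. */
    struct hot_data {
        void          *current_task;
        unsigned long  top_of_stack;
        unsigned long  softirq_pending;
    };

    __attribute__((section(".data.hot"), aligned(64), used))
    static struct hot_data per_cpu_hot;

    /* Compile-time analogue of the linker-script assert quoted above. */
    _Static_assert(sizeof(struct hot_data) <= 64,
                   "percpu cache hot data too large");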
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
38a4968b |
| 23-Jan-2025 |
Brian Gerst <[email protected]> |
x86/percpu/64: Remove INIT_PER_CPU macros
Now that the load and link addresses of percpu variables are the same, these macros are no longer necessary.
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
b5c4f953 |
| 23-Jan-2025 |
Brian Gerst <[email protected]> |
x86/percpu/64: Remove fixed_percpu_data
Now that the stack protector canary value is a normal percpu variable, fixed_percpu_data is unused and can be removed.
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
9d7de2aa |
| 23-Jan-2025 |
Brian Gerst <[email protected]> |
x86/percpu/64: Use relative percpu offsets
The percpu section is currently linked at absolute address 0, because older compilers hard-coded the stack protector canary value at a fixed offset from the start of the GS segment. Now that the canary is a normal percpu variable, the percpu section does not need to be linked at a specific address.
x86-64 will now calculate the percpu offsets as the delta between the initial percpu address and the dynamically allocated memory, like other architectures. Note that GSBASE is limited to the canonical address width (48 or 57 bits, sign-extended). As long as the kernel text, modules, and the dynamically allocated percpu memory are all in the negative address space, the delta will not overflow this limit.
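A hedged sketch of the offset calculation described above (names are illustrative, not the kernel's): the per-CPU offset is simply the signed distance between the linked initial percpu area and the chunk dynamically allocated for a given CPU.

    #include <stdint.h>

    /* Illustrative: per_cpu_offset = allocated address - linked address.
     * Both live in the negative (kernel) half of the address space, so
     * the signed delta stays within the canonical-address range noted
     * above. */
    static inline intptr_t calc_percpu_offset(uintptr_t linked_base,
                                              uintptr_t allocated_base)
    {
        return (intptr_t)(allocated_base - linked_base);
    }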
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
eeed9150 |
| 09-Jan-2025 |
Nathan Chancellor <[email protected]> |
x86/kexec: Fix location of relocate_kernel with -ffunction-sections
After commit
cb33ff9e063c ("x86/kexec: Move relocate_kernel to kernel .data section"),
kernels configured with an option that uses -ffunction-sections, such as CONFIG_LTO_CLANG, crash when kexecing because the value of relocate_kernel does not match the value of __relocate_kernel_start, so incorrect code gets copied via machine_kexec_prepare().
$ llvm-nm good-vmlinux &| rg relocate_kernel
ffffffff83280d41 T __relocate_kernel_end
ffffffff83280b00 T __relocate_kernel_start
ffffffff83280b00 T relocate_kernel
$ llvm-nm bad-vmlinux &| rg relocate_kernel
ffffffff83266100 D __relocate_kernel_end
ffffffff83266100 D __relocate_kernel_start
ffffffff8120b0d8 T relocate_kernel
When -ffunction-sections is enabled, TEXT_MAIN matches on '.text.[0-9a-zA-Z_]*' to coalesce the function-specific sections back into .text at link time, after they have been optimized. Due to the placement of TEXT_TEXT before KEXEC_RELOCATE_KERNEL in the x86 linker script, the .text.relocate_kernel section ends up in .text instead of .data.
Use a second dot in the relocate_kernel section name to avoid matching on TEXT_MAIN, mirroring a similar situation that was addressed in commit
79cd2a11224e ("x86/retpoline,kprobes: Fix position of thunk sections with CONFIG_LTO_CLANG"),
which allows kexec to function properly.
While .data.relocate_kernel still ends up in the .data section via DATA_MAIN -> DATA_DATA, ensure it is located with the .text.relocate_kernel section as intended by performing the same transformation.
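A small, compilable illustration of the naming trick (function and section names here are placeholders): a section with a second dot is not matched by the TEXT_MAIN glob '.text.[0-9a-zA-Z_]*', so -ffunction-sections/LTO output cannot pull it back into .text.

    /* '.text.foo' matches TEXT_MAIN and is folded into .text;
     * '.text..foo' does not match the glob, so it stays separate. */
    __attribute__((section(".text..kexec_example"), used, noinline))
    static void kexec_example_stub(void)
    {
        /* placeholder body */
    }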
Fixes: cb33ff9e063c ("x86/kexec: Move relocate_kernel to kernel .data section")
Fixes: 8dbec5c77bc3 ("x86/kexec: Add data section to relocate_kernel")
Signed-off-by: Nathan Chancellor <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
7fa0da53 |
| 17-Oct-2024 |
Juergen Gross <[email protected]> |
x86/xen: remove hypercall page
The hypercall page is no longer needed. It can be removed, as from the Xen perspective it is optional.
But, from Linux's perspective, removing it gets rid of naked RET instructions that escape the speculative protections that Call Depth Tracking and/or Untrain Ret are trying to achieve.
This is part of XSA-466 / CVE-2024-53241.
Reported-by: Andrew Cooper <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Andrew Cooper <[email protected]>
Reviewed-by: Jan Beulich <[email protected]>
|
| #
8dbec5c7 |
| 05-Dec-2024 |
David Woodhouse <[email protected]> |
x86/kexec: Add data section to relocate_kernel
Now that the relocate_kernel page is handled sanely by a linker script we can have actual data, and just use %rip-relative addressing to access it.
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
cb33ff9e |
| 05-Dec-2024 |
David Woodhouse <[email protected]> |
x86/kexec: Move relocate_kernel to kernel .data section
Now that the copy is executed instead of the original, the relocate_kernel page can live in the kernel's .data section. This will allow subsequent commits to actually add real data to it and clean up the code somewhat, as well as making the control page ROX.
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
a6a4ae9c |
| 05-Dec-2024 |
Ard Biesheuvel <[email protected]> |
x86/boot: Move .head.text into its own output section
In order to be able to double check that vmlinux is emitted without absolute symbol references in .head.text, it needs to be distinguishable from the rest of .text in the ELF metadata.
So move .head.text into its own ELF section.
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
35350eb6 |
| 05-Dec-2024 |
Ard Biesheuvel <[email protected]> |
x86/kernel: Move ENTRY_TEXT to the start of the image
Since commit:
7734a0f31e99 ("x86/boot: Robustify calling startup_{32,64}() from the decompressor code")
it is no longer necessary for .head.text to appear at the start of the image. Since ENTRY_TEXT needs to appear PMD-aligned, it is easier to just place it at the start of the image, rather than line it up with the end of the .text section. The amount of padding required should be the same, but this arrangement also permits .head.text to be split off and emitted separately, which is needed by a subsequent change.
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
d5dc9583 |
| 02-Nov-2024 |
Rong Xu <[email protected]> |
kbuild: Add Propeller configuration for kernel build
Add the build support for using Clang's Propeller optimizer. Like AutoFDO, Propeller uses hardware sampling to gather information about the frequency of execution of different code paths within a binary. This information is then used to guide the compiler's optimization decisions, resulting in a more efficient binary.
The support requires a Clang compiler LLVM 19 or later, and the create_llvm_prof tool (https://github.com/google/autofdo/releases/tag/v0.30.1). This commit is limited to x86 platforms that support PMU features like LBR on Intel machines and AMD Zen3 BRS.
Here is an example workflow for building an AutoFDO+Propeller optimized kernel:
1) Build the kernel on the host machine, with AutoFDO and Propeller build config:
      CONFIG_AUTOFDO_CLANG=y
      CONFIG_PROPELLER_CLANG=y
   then
      $ make LLVM=1 CLANG_AUTOFDO_PROFILE=<autofdo_profile>
“<autofdo_profile>” is the profile collected when doing a non-Propeller AutoFDO build. This step builds a kernel that has the same optimization level as AutoFDO, plus a metadata section that records basic block information. This kernel image runs as fast as an AutoFDO optimized kernel.
2) Install the kernel on test/production machines.
3) Run the load tests. The '-c' option in perf specifies the sample event period. We suggest using a suitable prime number, like 500009, for this purpose.
   For Intel platforms:
      $ perf record -e BR_INST_RETIRED.NEAR_TAKEN:k -a -N -b -c <count> \
        -o <perf_file> -- <loadtest>
   For AMD platforms:
      The supported systems are: Zen3 with BRS, or Zen4 with amd_lbr_v2.
      # To see if Zen3 supports LBR:
      $ cat /proc/cpuinfo | grep " brs"
      # To see if Zen4 supports LBR:
      $ cat /proc/cpuinfo | grep amd_lbr_v2
      # If the result is yes, then collect the profile using:
      $ perf record --pfm-events RETIRED_TAKEN_BRANCH_INSTRUCTIONS:k -a \
        -N -b -c <count> -o <perf_file> -- <loadtest>
4) (Optional) Download the raw perf file to the host machine.
5) Generate Propeller profile:
      $ create_llvm_prof --binary=<vmlinux> --profile=<perf_file> \
        --format=propeller --propeller_output_module_name \
        --out=<propeller_profile_prefix>_cc_profile.txt \
        --propeller_symorder=<propeller_profile_prefix>_ld_profile.txt
“create_llvm_prof” is the profile conversion tool, and a prebuilt binary for linux can be found on https://github.com/google/autofdo/releases/tag/v0.30.1 (can also build from source).
"<propeller_profile_prefix>" can be something like "/home/user/dir/any_string".
This command generates a pair of Propeller profiles: "<propeller_profile_prefix>_cc_profile.txt" and "<propeller_profile_prefix>_ld_profile.txt".
6) Rebuild the kernel using the AutoFDO and Propeller profile files:
      CONFIG_AUTOFDO_CLANG=y
      CONFIG_PROPELLER_CLANG=y
   and
      $ make LLVM=1 CLANG_AUTOFDO_PROFILE=<autofdo_profile> \
        CLANG_PROPELLER_PROFILE_PREFIX=<propeller_profile_prefix>
Co-developed-by: Han Shen <[email protected]>
Signed-off-by: Han Shen <[email protected]>
Signed-off-by: Rong Xu <[email protected]>
Suggested-by: Sriraman Tallam <[email protected]>
Suggested-by: Krzysztof Pszeniczny <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Suggested-by: Stephane Eranian <[email protected]>
Tested-by: Yonghong Song <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Masahiro Yamada <[email protected]>
|
| #
577c134d |
| 05-Nov-2024 |
Ard Biesheuvel <[email protected]> |
x86/stackprotector: Work around strict Clang TLS symbol requirements
GCC and Clang both implement stack protector support based on Thread Local Storage (TLS) variables, and this is used in the kernel to implement per-task stack cookies, by copying a task's stack cookie into a per-CPU variable every time it is scheduled in.
Both now also implement -mstack-protector-guard-symbol=, which permits the TLS variable to be specified directly. This is useful because it will allow moving away from using a fixed offset of 40 bytes into the per-CPU area on x86_64, which requires a lot of special handling in the per-CPU code and the runtime relocation code.
However, while GCC is rather lax in its implementation of this command line option, Clang actually requires that the provided symbol name refers to a TLS variable (i.e., one declared with __thread), although it also permits the variable to be undeclared entirely, in which case it will use an implicit declaration of the right type.
The upshot of this is that Clang will emit the correct references to the stack cookie variable in most cases, e.g.,
10d:   64 a1 00 00 00 00      mov    %fs:0x0,%eax
                       10f: R_386_32    __stack_chk_guard
However, if a non-TLS definition of the symbol in question is visible in the same compilation unit (which amounts to the whole of vmlinux if LTO is enabled), it will drop the per-CPU prefix and emit a load from a bogus address.
Work around this by using a symbol name that never occurs in C code, and emit it as an alias in the linker script.
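For illustration of the Clang constraint described above (the symbol name below is a placeholder, not the alias this patch adds via the linker script): with -mstack-protector-guard-symbol=<sym>, Clang is satisfied when <sym> is either left undeclared or declared as TLS, whereas a visible non-TLS definition makes it drop the per-CPU segment prefix.

    /* Accepted by Clang: a TLS declaration of the guard symbol. */
    extern __thread unsigned long example_stack_guard;

    /* Problematic, per the commit message: a non-TLS definition of the
     * same symbol visible in the compilation unit (or whole-program with
     * LTO) causes the %fs/%gs prefix to be dropped:
     *
     *     unsigned long example_stack_guard;
     */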
Fixes: 3fb0fdb3bbe7 ("x86/stackprotector/32: Make the canary into a regular percpu variable")
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Cc: [email protected]
Link: https://github.com/ClangBuiltLinux/linux/issues/1854
Link: https://lore.kernel.org/r/[email protected]
|
| #
7175126a |
| 10-Oct-2024 |
Thomas Weißschuh <[email protected]> |
x86/vdso: Allocate vvar page from C code
Allocate the vvar page through the standard union vdso_data_store and remove the custom linker script logic.
Signed-off-by: Thomas Weißschuh <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/all/[email protected]
|
| #
223abe96 |
| 09-Oct-2024 |
Ard Biesheuvel <[email protected]> |
x86/xen: Avoid relocatable quantities in Xen ELF notes
Xen puts virtual and physical addresses into ELF notes that are treated by the linker as relocatable by default. Doing so is not only pointless, given that the ELF notes are only intended for consumption by Xen before the kernel boots; it is also a KASLR leak, given that the kernel's ELF notes are exposed via the world-readable /sys/kernel/notes.
So emit these constants in a way that prevents the linker from marking them as relocatable. This involves place-relative relocations (which subtract their own virtual address from the symbol value) and linker-provided absolute symbols that add the address of the place to the desired value.
Tested-by: Jason Andryuk <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Jason Andryuk <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
|
| #
86e6b154 |
| 24-Oct-2024 |
Linus Torvalds <[email protected]> |
x86: fix user address masking non-canonical speculation issue
It turns out that AMD has a "Meltdown Lite(tm)" issue with non-canonical accesses in kernel space. And so using just the high bit to decide whether an access is in user space or kernel space ends up with the good old "leak speculative data" if you have the right gadget using the result:
CVE-2020-12965 "Transient Execution of Non-Canonical Accesses"
Now, the kernel surrounds the access with a STAC/CLAC pair, and those instructions end up serializing execution on older Zen architectures, which closes the speculation window.
But that was true only up until Zen 5, which renames the AC bit [1]. That improves performance of STAC/CLAC a lot, but also means that the speculation window is now open.
Note that this affects not just the new address masking, but also the regular valid_user_address() check used by access_ok(), and the asm version of the sign bit check in the get_user() helpers.
It does not affect put_user() or clear_user() variants, since there's no speculative result to be used in a gadget for those operations.
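A minimal sketch of the "high bit" discriminator the text refers to (the helper name is made up): this is the check that, on its own, can be speculated around as described above.

    #include <stdbool.h>
    #include <stdint.h>

    /* On x86-64, user addresses have bit 63 clear and kernel addresses
     * have it set, so a sign test distinguishes the two halves. */
    static inline bool address_is_userspace(uint64_t addr)
    {
        return (int64_t)addr >= 0;
    }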
Reported-by: Andrew Cooper <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
Link: https://lore.kernel.org/all/20241023094448.GAZxjFkEOOF_DM83TQ@fat_crate.local/ [1]
Link: https://www.amd.com/en/resources/product-security/bulletin/amd-sb-1010.html
Link: https://arxiv.org/pdf/2108.10771
Cc: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Tested-by: Maciej Wieczor-Retman <[email protected]> # LAM case
Fixes: 2865baf54077 ("x86: support user address masking instead of non-speculative conditional")
Fixes: 6014bc27561f ("x86-64: make access_ok() independent of LAM")
Fixes: b19b74bc99b1 ("x86/mm: Rework address range check in get_user() and put_user()")
Signed-off-by: Linus Torvalds <[email protected]>
|
| #
92a10d38 |
| 30-Jul-2024 |
Jann Horn <[email protected]> |
runtime constants: move list of constants to vmlinux.lds.h
Refactor the list of constant variables into a macro. This should make it easier to add more constants in the future.
Signed-off-by: Jann Horn <[email protected]>
Reviewed-by: Alexander Gordeev <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
|
|
Revision tags: v6.10-rc4 |
|
| #
e3c92e81 |
| 10-Jun-2024 |
Linus Torvalds <[email protected]> |
runtime constants: add x86 architecture support
This implements the runtime constant infrastructure for x86, allowing the dcache d_hash() function to be generated using the hash table address as a constant, followed by a shift by a constant for the hash index.
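A simplified stand-in for what the generated d_hash() boils down to once the constants are patched in (values and names below are placeholders, and plain C macros stand in for the runtime-constant machinery, which patches the values into the instruction stream at boot instead of loading them from memory):

    #include <stdint.h>

    /* Placeholder constants standing in for the patched-in values. */
    #define DENTRY_HASHTABLE_ADDR ((uintptr_t)0xffff888100000000ull)
    #define D_HASH_SHIFT          18u

    static inline uintptr_t d_hash_like(uint32_t hash)
    {
        /* bucket = table base + (hash >> shift), scaled to entry size */
        return DENTRY_HASHTABLE_ADDR +
               (uintptr_t)(hash >> D_HASH_SHIFT) * sizeof(void *);
    }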
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Revision tags: v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1 |
|
| #
8f69cba0 |
| 22-Mar-2024 |
Xin Li (Intel) <[email protected]> |
x86: Rename __{start,end}_init_task to __{start,end}_init_stack
The stack of a task has been separated from the memory of a task_struct structure for a long time on x86; as a result, __{start,end}_init_task no longer mark the start and end of the init_task structure, but only its stack.
Rename __{start,end}_init_task to __{start,end}_init_stack.
Note other architectures are not affected because __{start,end}_init_task are used on x86 only.
Signed-off-by: Xin Li (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
2cb16181 |
| 21-Mar-2024 |
Brian Gerst <[email protected]> |
x86/boot: Simplify boot stack setup
Define the symbol __top_init_kernel_stack instead of duplicating the offset from __end_init_task in multiple places.
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|