Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6

# 385f72c8 | 03-Mar-2025 | Brian Gerst <[email protected]>

x86/percpu: Move top_of_stack to percpu hot section
No functional change.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
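
As a rough illustration of what a "hot" section buys: frequently accessed per-CPU data is grouped into one dedicated linker section so it packs into a few shared cache lines. A minimal userspace sketch, assuming an ELF target; the section and variable names are illustrative, not the kernel's actual macros:

    #include <stdio.h>

    /* Group hot data into one section so the linker packs it together. */
    #define __hot_data __attribute__((section(".data.hot")))

    __hot_data unsigned long top_of_stack;     /* stand-in for the moved variable */
    __hot_data unsigned int  softirq_pending;  /* another frequently touched field */

    int main(void)
    {
        top_of_stack = 0xffffc90000004000UL;
        softirq_pending = 0;
        printf("top_of_stack=%#lx pending=%u\n", top_of_stack, softirq_pending);
        return 0;
    }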

# c6a09180 | 03-Mar-2025 | Brian Gerst <[email protected]>

x86/irq: Move irq stacks to percpu hot section
No functional change.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

# 535d9a82 | 04-Mar-2025 | Thomas Gleixner <[email protected]>

x86/cpu: Get rid of the smp_store_cpu_info() indirection
smp_store_cpu_info() is just a wrapper around identify_secondary_cpu() without further value.
Move the extra bits from smp_store_cpu_info() into identify_secondary_cpu() and remove the wrapper.
[ darwi: Make it compile and fix up the xen/smp_pv.c instance ]

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

# 8b7e54b5 | 04-Mar-2025 | Ahmed S. Darwish <[email protected]>

x86/cpu: Simplify TLB entry count storage
Commit:
e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU")
introduced u16 "info" arrays for each TLB type.
Since 2012, each array has stored just one type of information: the number of TLB entries for its respective TLB type.
Replace such arrays with simple variables.

Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
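
The shape of the change can be sketched as follows; the enum and variable names are modeled on the commit subject, not verified against the tree:

    /* Before: a u16 "info" array per TLB type, sized by an enum even
     * though only the ENTRIES slot was ever used. */
    enum tlb_infos { ENTRIES, NR_INFO };
    unsigned short tlb_lli_4k_info[NR_INFO];

    /* After: one plain variable holding the entry count directly. */
    unsigned short tlb_lli_4k;

    void store_tlb_count(unsigned short nentries)
    {
        tlb_lli_4k_info[ENTRIES] = nentries;   /* old style */
        tlb_lli_4k = nentries;                 /* simplified style */
    }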

# 399fd7a2 | 03-Mar-2025 | Brian Gerst <[email protected]>

x86/asm: Merge KSTK_ESP() implementations
Commit:
263042e4630a ("Save user RSP in pt_regs->sp on SYSCALL64 fastpath")
simplified the 64-bit implementation of KSTK_ESP() which is now identical to 32-bit. Merge them into a common definition.
No functional change.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
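
A plausible shape of the merged definition, shown as a self-contained mock (the real task_pt_regs() and pt_regs come from the kernel):

    #include <stdio.h>

    /* Mock types standing in for the kernel's. */
    struct pt_regs { unsigned long sp; };
    struct task_struct { struct pt_regs regs; };

    static struct pt_regs *task_pt_regs(struct task_struct *t)
    {
        return &t->regs;
    }

    /* Once the user SP is always saved in pt_regs->sp, one definition
     * serves both 32- and 64-bit. */
    #define KSTK_ESP(task)  (task_pt_regs(task)->sp)

    int main(void)
    {
        struct task_struct t = { .regs = { .sp = 0x7ffdeadbee00UL } };
        printf("KSTK_ESP = %#lx\n", KSTK_ESP(&t));
        return 0;
    }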

Revision tags: v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7

# b8ce25df | 08-Jan-2025 | David Kaplan <[email protected]>

x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
Add AUTO mitigations for mds/taa/mmio/rfds to create consistent vulnerability handling. These AUTO mitigations will be turned into the appropriate default mitigations in the <vuln>_select_mitigation() functions. Later, these will be used with the new attack vector controls to help select appropriate mitigations.

Signed-off-by: David Kaplan <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
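
A minimal sketch of the AUTO pattern described above; the enum values and the vulnerability check are modeled on the commit text, not copied from the tree:

    enum mds_mitigations {
        MDS_MITIGATION_OFF,
        MDS_MITIGATION_AUTO,    /* new: resolved to a concrete choice below */
        MDS_MITIGATION_FULL,
    };

    static enum mds_mitigations mds_mitigation = MDS_MITIGATION_AUTO;
    static int cpu_is_vulnerable = 1;   /* stand-in for boot_cpu_has_bug(...) */

    static void mds_select_mitigation(void)
    {
        /* AUTO becomes the appropriate default here; later, attack
         * vector controls can steer this decision instead. */
        if (mds_mitigation == MDS_MITIGATION_AUTO)
            mds_mitigation = cpu_is_vulnerable ? MDS_MITIGATION_FULL
                                               : MDS_MITIGATION_OFF;
    }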

# b5c4f953 | 23-Jan-2025 | Brian Gerst <[email protected]>

x86/percpu/64: Remove fixed_percpu_data
Now that the stack protector canary value is a normal percpu variable, fixed_percpu_data is unused and can be removed.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

# 9d7de2aa | 23-Jan-2025 | Brian Gerst <[email protected]>

x86/percpu/64: Use relative percpu offsets
The percpu section is currently linked at absolute address 0, because older compilers hard-coded the stack protector canary value at a fixed offset from the start of the GS segment. Now that the canary is a normal percpu variable, the percpu section does not need to be linked at a specific address.
x86-64 will now calculate the percpu offsets as the delta between the initial percpu address and the dynamically allocated memory, like other architectures. Note that GSBASE is limited to the canonical address width (48 or 57 bits, sign-extended). As long as the kernel text, modules, and the dynamically allocated percpu memory are all in the negative address space, the delta will not overflow this limit.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
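
The offset arithmetic can be sketched in userspace; the variable names are stand-ins for the kernel's symbols:

    #include <stdint.h>
    #include <stdio.h>

    static char linked_percpu_area[64];  /* stands in for the link-time section */
    static char cpu1_percpu_copy[64];    /* stands in for the dynamic allocation */

    int main(void)
    {
        /* The per-CPU offset is the delta between the two, not an
         * absolute address. As long as both live in the same
         * sign-extended half of the canonical space, the delta fits
         * GSBASE's 48/57-bit limit. */
        long offset = (long)((uintptr_t)cpu1_percpu_copy -
                             (uintptr_t)linked_percpu_area);
        printf("per-CPU offset for CPU1: %ld\n", offset);
        return 0;
    }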

# 80d47def | 23-Jan-2025 | Brian Gerst <[email protected]>

x86/stackprotector/64: Convert to normal per-CPU variable
Older versions of GCC fixed the location of the stack protector canary at %gs:40. This constraint forced the percpu section to be linked at absolute address 0 so that the canary could be the first data object in the percpu section. Supporting the zero-based percpu section requires additional code to handle relocations for RIP-relative references to percpu data, extra complexity to kallsyms, and workarounds for linker bugs due to the use of absolute symbols.
GCC 8.1 supports redefining where the canary is located, allowing it to become a normal percpu variable instead of at a fixed location. This removes the constraint that the percpu section must be zero-based.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
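
For reference, GCC exposes this through its -mstack-protector-guard options; a sketch of the idea, with an illustrative symbol name rather than the kernel's actual flag set:

    /* With GCC >= 8.1, the guard no longer has to sit at %gs:40:
     *
     *   -mstack-protector-guard=tls
     *   -mstack-protector-guard-reg=gs
     *   -mstack-protector-guard-symbol=stack_canary
     *
     * The canary is then an ordinary variable, so nothing forces the
     * percpu section to be linked at address 0. */
    unsigned long stack_canary;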

Revision tags: v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4

# efbcd61d | 17-Oct-2024 | Juergen Gross <[email protected]>

x86: make get_cpu_vendor() accessible from Xen code
In order to be able to differentiate between AMD and Intel based systems for very early hypercalls without having to rely on the Xen hypercall page, make get_cpu_vendor() non-static.
Refactor early_cpu_init() for the same reason by splitting out the loop initializing cpu_devs() into an externally callable function.
This is part of XSA-466 / CVE-2024-53241.

Reported-by: Andrew Cooper <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>

# e4b44434 | 15-Nov-2024 | K Prateek Nayak <[email protected]>

x86/topology: Introduce topology_logical_core_id()
On x86, topology_core_id() returns a unique core ID within the PKG domain. Looking at match_smt() suggests that a core ID just needs to be unique within an LLC domain. For use cases such as the core RAPL PMU, there is a need for a unique core ID across the entire system with multiple PKG domains. Introduce topology_logical_core_id() to derive a unique core ID across the system.

Signed-off-by: K Prateek Nayak <[email protected]>
Signed-off-by: Dhananjay Ugwekar <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Zhang Rui <[email protected]>
Reviewed-by: "Gautham R. Shenoy" <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Tested-by: Oleksandr Natalenko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
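
A hedged sketch of deriving a system-wide unique core ID; the formula and field names are illustrative, not the kernel's implementation:

    #include <stdio.h>

    struct cpu_topo {
        unsigned int pkg_id;    /* package the CPU sits in */
        unsigned int core_id;   /* unique only within the package */
    };

    static const unsigned int cores_per_pkg = 64;  /* assumed bound */

    /* Combine package ID and per-package core ID into one ID that is
     * unique across the whole system. */
    static unsigned int logical_core_id(const struct cpu_topo *t)
    {
        return t->pkg_id * cores_per_pkg + t->core_id;
    }

    int main(void)
    {
        struct cpu_topo cpu = { .pkg_id = 1, .core_id = 3 };
        printf("logical core id: %u\n", logical_core_id(&cpu));
        return 0;
    }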

# 45239ba3 | 25-Oct-2024 | Pawan Gupta <[email protected]>

x86/cpu: Add CPU type to struct cpuinfo_topology
Sometimes it is required to take action based on whether a CPU is a performance or an efficiency core. As an example, the intel_pstate driver uses the Intel core type to determine CPU scaling. Also, some CPU vulnerabilities only affect a specific CPU type, like RFDS, which only affects Intel Atom. On hybrid systems that come in P+E, P-only (Core) and E-only (Atom) variants, it is not straightforward to identify which variant is affected by a type-specific vulnerability.
Such processors do have a CPUID field that can uniquely identify them: P+E, P-only and E-only parts all enumerate CPUID.1A.CORE_TYPE identification, while P+E parts additionally enumerate CPUID.7.HYBRID. Based on this information, it is possible for the boot CPU to identify whether a system has mixed CPU types.
Add a new field hw_cpu_type to struct cpuinfo_topology that stores the hardware specific CPU type. This saves the overhead of IPIs to get the CPU type of a different CPU. CPU type is populated early in the boot process, before vulnerabilities are enumerated.

Signed-off-by: Pawan Gupta <[email protected]>
Co-developed-by: Mario Limonciello <[email protected]>
Signed-off-by: Mario Limonciello <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
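
The enumeration can be probed from userspace; on Intel hybrid parts, CPUID leaf 0x1A reports the core type of the executing CPU in EAX[31:24] (0x20 for Atom/E-core, 0x40 for Core/P-core). A small x86-only sketch:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        if (!__get_cpuid_count(0x1a, 0, &eax, &ebx, &ecx, &edx) || !eax) {
            puts("CPUID.1A not enumerated");
            return 0;
        }

        unsigned int core_type = eax >> 24;   /* EAX[31:24] */
        printf("core type: %#x (%s)\n", core_type,
               core_type == 0x20 ? "Atom/E-core" :
               core_type == 0x40 ? "Core/P-core" : "unknown");
        return 0;
    }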

Revision tags: v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6

# 6c09e3b4 | 26-Aug-2024 | Mario Limonciello <[email protected]>

x86/amd: Rename amd_get_highest_perf() to amd_get_boost_ratio_numerator()
The function name is ambiguous because it returns an intermediate value for calculating maximum frequency rather than the CPPC 'Highest Perf' register.
Rename the function to clarify its use and allow the function to return errors. Adjust the consumer in acpi-cpufreq to catch errors.

Reviewed-by: Gautham R. Shenoy <[email protected]>
Signed-off-by: Mario Limonciello <[email protected]>

Revision tags: v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10

# a97756cb | 09-Jul-2024 | Xin Li (Intel) <[email protected]>

x86/fred: Enable FRED right after init_mem_mapping()
On 64-bit init_mem_mapping() relies on the minimal page fault handler provided by the early IDT mechanism. The real page fault handler is installed right afterwards into the IDT.
This is problematic on CPUs which have X86_FEATURE_FRED set, because the real page fault handler retrieves the faulting address from the FRED exception stack frame and not from CR2, which obviously does not work when FRED is not yet enabled in the CPU.
To prevent this, enable FRED right after init_mem_mapping(), but without interrupt stacks. Those are enabled later in trap_init() after the CPU entry area is set up.
[ tglx: Encapsulate the FRED details ]

Fixes: 14619d912b65 ("x86/fred: FRED entry/exit and dispatch code")
Reported-by: Hou Wenlong <[email protected]>
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Xin Li (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/all/[email protected]
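
The distinction the commit describes, as a self-contained, kernel-flavored sketch with stubs in place of the real sources:

    #include <stdio.h>

    struct pt_regs { unsigned long event_data; };

    /* Legacy IDT path: the faulting address comes from CR2. */
    static unsigned long read_cr2(void) { return 0x1000; }

    /* FRED path: the faulting address comes from the exception stack
     * frame, which only works once FRED is enabled in the CPU. */
    static unsigned long fred_event_data(struct pt_regs *r)
    {
        return r->event_data;
    }

    static unsigned long fault_address(struct pt_regs *regs, int fred_enabled)
    {
        return fred_enabled ? fred_event_data(regs) : read_cr2();
    }

    int main(void)
    {
        struct pt_regs regs = { .event_data = 0x2000 };
        printf("IDT: %#lx, FRED: %#lx\n",
               fault_address(&regs, 0), fault_address(&regs, 1));
        return 0;
    }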

Revision tags: v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4

# 501bd734 | 13-Jun-2024 | Mateusz Guzik <[email protected]>

x86/CPU/AMD: Always inline amd_clear_divider()
The routine is used on syscall exit and on non-AMD CPUs is guaranteed to be empty.
It probably does not need to be a function call even on CPUs which do need the mitigation.
[ bp: Make sure it is always inlined so that noinstr marking works. ]

Signed-off-by: Mateusz Guzik <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
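
A hedged sketch of the inlining pattern (x86-only, names illustrative): in the kernel the affected-CPU check folds to a constant, so forcing the helper inline means unaffected CPUs pay nothing on the syscall exit path, and always-inlining keeps noinstr sections valid:

    #include <stdio.h>

    static int has_div0_bug;   /* stand-in for a constant-folded feature check */

    static inline __attribute__((always_inline)) void amd_clear_divider(void)
    {
        unsigned int lo = 0, hi = 0;

        if (!has_div0_bug)
            return;   /* compiles away entirely when the flag is constant 0 */

        /* Run a harmless division to scrub divider state (illustrative). */
        __asm__ volatile ("div %[den]"
                          : "+a" (lo), "+d" (hi)
                          : [den] "r" (1U) : "cc");
    }

    int main(void)
    {
        amd_clear_divider();
        return 0;
    }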

Revision tags: v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7

# 02b670c1 | 29-Apr-2024 | Linus Torvalds <[email protected]>

x86/mm: Remove broken vsyscall emulation code from the page fault code
The syzbot-reported stack trace from hell in this discussion thread actually has three nested page faults:
https://lore.kernel.org/r/[email protected]
... and I think that's actually the important thing here:
- the first page fault is from user space, and triggers the vsyscall emulation.
- the second page fault is from __do_sys_gettimeofday(), and that should just have caused the exception that then sets the return value to -EFAULT
- the third nested page fault is due to _raw_spin_unlock_irqrestore() -> preempt_schedule() -> trace_sched_switch(), which then causes a BPF trace program to run, which does that bpf_probe_read_compat(), which causes that page fault under pagefault_disable().
It's quite the nasty backtrace, and there's a lot going on.
The problem is literally the vsyscall emulation, which sets
    current->thread.sig_on_uaccess_err = 1;
and that causes the fixup_exception() code to send the signal *despite* the exception being caught.
And I think that is in fact completely bogus. It's completely bogus exactly because it sends that signal even when it *shouldn't* be sent - like for the BPF user mode trace gathering.
In other words, I think the whole "sig_on_uaccess_err" thing is entirely broken, because it makes any nested page-faults do all the wrong things.
Now, arguably, I don't think anybody should enable vsyscall emulation any more, but this test case clearly does.
I think we should just make the "send SIGSEGV" be something that the vsyscall emulation does on its own, not this broken per-thread state for something that isn't actually per thread.
The x86 page fault code actually tried to deal with the "incorrect nesting" by having that:
    if (in_interrupt())
        return;
which ignores the sig_on_uaccess_err case when it happens in interrupts, but as shown by this example, these nested page faults do not need to be about interrupts at all.
IOW, I think the only right thing is to remove that horrendously broken code.
The attached patch looks like the ObviouslyCorrect(tm) thing to do.
NOTE! This broken code goes back to this commit in 2011:
4fc3490114bb ("x86-64: Set siginfo and context on vsyscall emulation faults")
... and back then the reason was to get all the siginfo details right. Honestly, I do not for a moment believe that it's worth getting the siginfo details right here, but part of the commit says:
This fixes issues with UML when vsyscall=emulate.
... and so my patch to remove this garbage will probably break UML in this situation.
I do not believe that anybody should be running with vsyscall=emulate in 2024 in the first place, much less if you are doing things like UML. But let's see if somebody screams.

Reported-and-tested-by: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Jiri Olsa <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/CAHk-=wh9D6f7HUkDgZHKmDCHUQmp+Co89GP+b8+z+G56BKeyNg@mail.gmail.com

Revision tags: v6.9-rc6, v6.9-rc5

# a9d0adce | 16-Apr-2024 | Tony Luck <[email protected]>

x86/cpu/vfm: Add/initialize x86_vfm field to struct cpuinfo_x86
Refactor struct cpuinfo_x86 so that the vendor, family, and model fields are overlaid in a union with a 32-bit field that combines all three (together with a one byte reserved field in the upper byte).
This will make it easy, cheap, and reliable to check all three values at once.
See https://lore.kernel.org/r/Zgr6kT8oULbnmEXx@agluck-desk3 for why the ordering is (low-to-high bits): (vendor, family, model).
[ bp: Move comments over the line, add the backstory about the particular order of the fields. ]

Signed-off-by: Tony Luck <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
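
The layout is easy to model standalone (little-endian, matching the low-to-high ordering above); field names are modeled on the commit, not copied from the tree:

    #include <stdint.h>
    #include <stdio.h>

    struct cpu_ids {
        union {
            struct {
                uint8_t vendor;     /* low byte  */
                uint8_t family;
                uint8_t model;
                uint8_t reserved;   /* high byte */
            };
            uint32_t vfm;           /* all three, comparable at once */
        };
    };

    int main(void)
    {
        struct cpu_ids c = { .vendor = 0, .family = 6, .model = 0x8f };

        /* One cheap 32-bit compare instead of three field compares. */
        uint32_t target = 0u | (6u << 8) | (0x8fu << 16);
        printf("match: %d (vfm=%#x)\n", c.vfm == target, c.vfm);
        return 0;
    }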

Revision tags: v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1

# 2cb16181 | 21-Mar-2024 | Brian Gerst <[email protected]>

x86/boot: Simplify boot stack setup
Define the symbol __top_init_kernel_stack instead of duplicating the offset from __end_init_task in multiple places.

Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Uros Bizjak <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

# a3ff5316 | 19-Mar-2024 | Uros Bizjak <[email protected]>

x86/asm: Remove %P operand modifier from altinstr asm templates
The "P" asm operand modifier is a x86 target-specific modifier.
For x86_64, when used with a symbol reference, the "%P" modifier emits "sym" instead of "sym(%rip)". This property is currently used to prevent %RIP-relative addressing in .altinstr sections.
%RIP-relative addresses are nowadays handled correctly in .altinstr sections, so remove the %P operand modifier from altinstr asm templates.
Also note that, unlike GCC, Clang emits a %rip-relative symbol reference even with the "P" asm operand modifier, so the patch also unifies symbol handling across both compilers.
No functional changes intended.

Signed-off-by: Uros Bizjak <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
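
The modifier's effect can be inspected with a tiny compile-only example: build with -S and compare the emitted text (operands are substituted even inside assembler comments, and exact spellings vary between GCC and Clang, which is part of what the patch unifies):

    extern void sym(void);

    void inspect(void)
    {
        /* "%P[f]" emits the bare "sym"; the unmodified "%[f]" form
         * emits a syntax-prefixed or RIP-relative spelling instead. */
        __asm__ volatile ("# with P:    call %P[f]" : : [f] "i" (sym));
        __asm__ volatile ("# without P: call %[f]"  : : [f] "i" (sym));
    }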

Revision tags: v6.8

# c416b5ba | 04-Mar-2024 | Xin Li (Intel) <[email protected]>

x86/fred: Fix init_task thread stack pointer initialization
As TOP_OF_KERNEL_STACK_PADDING was defined as 0 on x86_64, it went unnoticed that the initialization of the .sp field in INIT_THREAD and some calculations in the low level startup code do not take the padding into account.
FRED-enabled kernels require 16 bytes of padding, which means that the init task initialization and the low-level startup code use the wrong stack offset.
Subtract TOP_OF_KERNEL_STACK_PADDING in all affected places to adjust for this.

Fixes: 65c9cc9e2c14 ("x86/fred: Reserve space for the FRED stack frame")
Fixes: 3adee777ad0d ("x86/smpboot: Remove initial_stack on 64-bit")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Xin Li (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Closes: https://lore.kernel.org/oe-lkp/[email protected]
Link: https://lore.kernel.org/r/[email protected]
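
The arithmetic of the fix, as a sketch with illustrative numbers:

    #include <stdio.h>

    /* 0 on classic x86-64; FRED needs 16 bytes for its stack frame. */
    #define TOP_OF_KERNEL_STACK_PADDING  16UL
    #define THREAD_SIZE                  16384UL

    static unsigned char init_stack[THREAD_SIZE];

    int main(void)
    {
        /* Every "top of kernel stack" computation must leave room for
         * the padding, including the init task's initial .sp. */
        unsigned long sp = (unsigned long)init_stack + THREAD_SIZE
                           - TOP_OF_KERNEL_STACK_PADDING;
        printf("initial .sp = %#lx\n", sp);
        return 0;
    }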

Revision tags: v6.8-rc7

# 35ce6492 | 28-Feb-2024 | Thomas Gleixner <[email protected]>

x86/idle: Select idle routine only once
The idle routine selection is done on every CPU bringup operation and has a guard in place which takes effect after the first invocation, making every subsequent invocation a pointless exercise.
Invoke it once on the boot CPU and mark the related functions __init. The guard check has to stay as xen_set_default_idle() runs early.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/87edcu6vaq.ffs@tglx

# 71eb4893 | 04-Mar-2024 | Thomas Gleixner <[email protected]>

x86/percpu: Cure per CPU madness on UP
On UP builds Sparse complains rightfully about accesses to cpu_info with per CPU accessors:
    cacheinfo.c:282:30: sparse: warning: incorrect type in initializer (different address spaces)
    cacheinfo.c:282:30: sparse:     expected void const [noderef] __percpu *__vpp_verify
    cacheinfo.c:282:30: sparse:     got unsigned int *
The reason is that on UP builds cpu_info, which is a per CPU variable on SMP, is mapped to boot_cpu_data, which is a regular variable. There is a hideous accessor cpu_data() which tries to hide this, but it is not sufficient: some places require raw accessors, and it generates worse code than the regular per CPU accessors.
Waste sizeof(struct cpuinfo_x86) memory on UP and provide the per CPU cpu_info unconditionally. This requires updating the CPU info on the boot CPU as SMP does. (Ab)use the weakly defined smp_prepare_boot_cpu() function and implement exactly that.
This allows regular per CPU accessors to be used unconditionally and paves the way to removing the cpu_data() hackery.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

# 5323922f | 04-Mar-2024 | Thomas Gleixner <[email protected]>

x86/msr: Add missing __percpu annotations
Sparse rightfully complains about using a plain pointer for per CPU accessors:
    msr-smp.c:15:23: sparse: warning: incorrect type in initializer (different address spaces)
    msr-smp.c:15:23: sparse:     expected void const [noderef] __percpu *__vpp_verify
    msr-smp.c:15:23: sparse:     got struct msr *
Add __percpu annotations to the related data structure and function arguments to cure this. This also cures the related sparse warnings at the callsites in drivers/edac/amd64_edac.c.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
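
What the annotation looks like, as a hedged sketch: under sparse, __percpu marks a pointer as living in a distinct address space and expands to nothing in a normal compile. Names are modeled on the commit, not copied from the tree:

    /* Sparse-only address-space attribute (the numeric space is
     * illustrative; the kernel's definition lives in compiler_types.h). */
    #ifdef __CHECKER__
    # define __percpu __attribute__((noderef, address_space(3)))
    #else
    # define __percpu
    #endif

    struct msr { unsigned long long q; };

    /* Annotated data structure and function argument: sparse now knows
     * these pointers must go through per-CPU accessors. */
    struct msr_info {
        struct msr __percpu *msrs;
    };

    void rdmsr_on_cpus(struct msr __percpu *msrs);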

# 154fcf3a | 04-Mar-2024 | Thomas Gleixner <[email protected]>

x86/msr: Prepare for including <linux/percpu.h> into <asm/msr.h>
To clean up the per CPU insanity of UP, which causes sparse to be rightfully unhappy and prevents the usage of the generic per CPU accessors on cpu_info, it is necessary to include <linux/percpu.h> into <asm/msr.h>.
Including <linux/percpu.h> into <asm/msr.h> is impossible because it ends up in header dependency hell. The problem is that <asm/processor.h> includes <asm/msr.h>. The inclusion of <linux/percpu.h> results in a compile failure where the compiler can no longer handle an include in <asm/cpufeature.h> which references boot_cpu_data, which is defined in <asm/processor.h>.
The only reason why <asm/msr.h> is included in <asm/processor.h> is the set/get_debugctlmsr() inlines. They are defined there because <asm/processor.h> is such a nice dumping ground for everything. In fact they obviously belong in <asm/debugreg.h>.
Move them to <asm/debugreg.h> and fix up the resulting damage which is just exposing the reliance on random include chains.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

Revision tags: v6.8-rc6, v6.8-rc5

# 89b0f15f | 13-Feb-2024 | Thomas Gleixner <[email protected]>

x86/cpu/topology: Get rid of cpuinfo::x86_max_cores
Now that __num_cores_per_package and __num_threads_per_package are available, cpuinfo::x86_max_cores and the related math all over the place can be replaced with the ready-to-consume data.

Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]