|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3 |
|
| #
5214a9f6 |
| 14-Apr-2025 |
Borislav Petkov (AMD) <[email protected]> |
x86/microcode: Consolidate the loader enablement checking
Consolidate the whole logic which determines whether the microcode loader should be enabled or not into a single function and call it everywhere.
Well, almost everywhere - not in mk_early_pgtbl_32() because there the kernel is running without paging enabled and checking dis_ucode_ldr et al would require physical addresses and uglification of the code.
But since this is 32-bit, the easier thing to do is to simply map the initrd unconditionally, especially since that mapping is removed later anyway by zap_early_initrd_mapping(), and avoid the uglification.
In doing so, address the issue of old 486-class machines without CPUID support not booting current kernels.
[ mingo: Fix no previous prototype for 'microcode_loader_disabled' [-Wmissing-prototypes] ]
Fixes: 4c585af7180c1 ("x86/boot/32: Temporarily map initrd for microcode loading") Signed-off-by: Borislav Petkov (AMD) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Cc: <[email protected]> Link: https://lore.kernel.org/r/CANpbe9Wm3z8fy9HbgS8cuhoj0TREYEEkBipDuhgkWFvqX0UoVQ@mail.gmail.com
|
|
Revision tags: v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4 |
|
| #
7e6b0a2e |
| 19-Feb-2025 |
Sohil Mehta <[email protected]> |
x86/microcode: Update the Intel processor flag scan check
The Family model check to read the processor flag MSR is misleading and potentially incorrect. It doesn't consider Family while comparing the model number. The original check did have a Family number but it got lost/moved during refactoring.
intel_collect_cpu_info() is called through multiple paths such as early initialization, CPU hotplug as well as IFS image load. Some of these flows would be error-prone due to the ambiguous check.
Correct the processor flag scan check to use a Family number and update it to a VFM based one to make it more readable.
Signed-off-by: Sohil Mehta <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Acked-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2 |
|
| #
9a819753 |
| 01-Oct-2024 |
Chang S. Bae <[email protected]> |
x86/microcode/intel: Remove unnecessary cache writeback and invalidation
Currently, an unconditional cache flush is performed during every microcode update. Although the original changelog did not mention a specific erratum, this measure was primarily intended to address a specific microcode bug, the load of which has already been blocked by is_blacklisted(). Therefore, this cache flush is no longer necessary.
Additionally, the side effects of doing this have been overlooked. It increases CPU rendezvous time during late loading, where the cache flush takes between 1x and 3.5x longer than the actual microcode update.
Remove native_wbinvd() and update the erratum name to align with the latest errata documentation, document ID 334163 Version 022US.
[ bp: Zap the flaky documentation URL. ]
Fixes: 91df9fdf5149 ("x86/microcode/intel: Writeback and invalidate caches before updating microcode") Reported-by: Yan Hua Wu <[email protected]> Reported-by: William Xie <[email protected]> Signed-off-by: Chang S. Bae <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Ashok Raj <[email protected]> Tested-by: Yan Hua Wu <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6 |
|
| #
375a7564 |
| 24-Apr-2024 |
Tony Luck <[email protected]> |
x86/microcode/intel: Switch to new Intel CPU model defines
New CPU #defines encode vendor and family as well as model.
Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/all/20240424181513.41829-1-tony.luck%40intel.com
|
|
Revision tags: v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5 |
|
| #
89b0f15f |
| 13-Feb-2024 |
Thomas Gleixner <[email protected]> |
x86/cpu/topology: Get rid of cpuinfo::x86_max_cores
Now that __num_cores_per_package and __num_threads_per_package are available, cpuinfo::x86_max_cores and the related math all over the place can be replaced with the ready to consume data.
Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4 |
|
| #
9c21ea53 |
| 01-Dec-2023 |
Borislav Petkov (AMD) <[email protected]> |
x86/microcode/intel: Set new revision only after a successful update
This was meant to be done only when early microcode got updated successfully. Move it into the if-branch.
Also, make sure the current revision is read unconditionally and only once.
Fixes: 080990aa3344 ("x86/microcode: Rework early revisions reporting") Reported-by: Ashok Raj <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Tested-by: Ashok Raj <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
1f693ef5 |
| 29-Nov-2023 |
Ashok Raj <[email protected]> |
x86/microcode/intel: Remove redundant microcode late updated message
After successful update, the late loading routine prints an update summary similar to:
microcode: load: updated on 128 primary CPUs with 128 siblings
microcode: revision: 0x21000170 -> 0x21000190
Remove the redundant message in the Intel side of the driver.
[ bp: Massage commit message. ]
Signed-off-by: Ashok Raj <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.7-rc3, v6.7-rc2 |
|
| #
080990aa |
| 15-Nov-2023 |
Borislav Petkov (AMD) <[email protected]> |
x86/microcode: Rework early revisions reporting
The AMD side of the loader issues the microcode revision for each logical thread on the system, which can become really noisy on huge machines. And doing that doesn't make a whole lot of sense - the microcode revision is already in /proc/cpuinfo.
So in case one is interested in the theoretical support of mixed silicon steppings on AMD, one can check there.
What is also missing on the AMD side - something which people have requested before - is showing the microcode revision the CPU had *before* the early update.
So abstract that up in the main code and have the BSP on each vendor provide those revision numbers.
Then, dump them only once on driver init.
On Intel, do not dump the patch date - it is not needed.
Reported-by: Linus Torvalds <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/CAHk-=wg=%[email protected]
|
|
Revision tags: v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5 |
|
| #
cf5ab01c |
| 02-Oct-2023 |
Ashok Raj <[email protected]> |
x86/microcode/intel: Add a minimum required revision for late loading
In general, users don't have the necessary information to determine whether late loading of a new microcode version is safe and does not modify anything which the currently running kernel already uses, e.g. removal of CPUID bits or behavioural changes of MSRs.
To address this issue, Intel has added a "minimum required version" field to a previously reserved field in the microcode header. Microcode updates should only be applied if the current microcode version is equal to, or greater than this minimum required version.
Thomas made some suggestions on how meta-data in the microcode file could provide Linux with information to decide if the new microcode is a suitable candidate for late loading. But even the "simpler" option requires a lot of metadata and corresponding kernel code to parse it, so the final suggestion was to add the 'minimum required version' field in the header.
When microcode changes visible features, microcode will set the minimum required version to its own revision which prevents late loading.
Old microcode blobs have the minimum revision field always set to 0, which indicates that there is no information and the kernel considers it unsafe.
This is a pure OS software mechanism. The hardware/firmware ignores this header field.
For early loading there is no restriction because OS visible features are enumerated after the early load and therefore a change has no effect.
The check is always enabled, but by default not enforced. It can be enforced via Kconfig or kernel command line.
If enforced, the kernel refuses to late load microcode with a minimum required version field which is zero or when the currently loaded microcode revision is smaller than the minimum required revision.
If not enforced the load happens independent of the revision check to stay compatible with the existing behaviour, but it influences the decision whether the kernel is tainted or not. If the check signals that the late load is safe, then the kernel is not tainted.
Early loading is not affected by this.
[ tglx: Massaged changelog and fixed up the implementation ]
Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: Ashok Raj <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
9407bda8 |
| 17-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode: Prepare for minimal revision check
Applying microcode late can be fatal for the running kernel when the update changes functionality which is in use already in a non-compatible way, e.g. by removing a CPUID bit.
There is no way for admins who do not have access to the vendor's deep technical support to decide whether late loading of such a microcode is safe or not.
Intel has added a new field to the microcode header which tells the minimal microcode revision which is required to be active in the CPU in order to be safe.
Provide infrastructure for handling this in the core code and a command line switch which allows enforcing it.
If the update is considered safe, the kernel is not tainted and the annoying warning message is not emitted. If it's enforced and the currently loaded microcode revision is not safe for late loading, then the load is aborted.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
7eb314a2 |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode: Rendezvous and load in NMI
stop_machine() does not prevent the spin-waiting sibling from handling an NMI, which is obviously violating the whole concept of rendezvous.
Implement a static branch right in the beginning of the NMI handler which is nopped out except when enabled by the late loading mechanism.
The late loader enables the static branch before stop_machine() is invoked. Each CPU has an nmi_enable in its control structure which indicates whether the CPU should go into the update routine.
This is required to bridge the gap between enabling the branch and actually being at the point where it is required to enter the loader wait loop.
Each CPU which arrives in the stopper thread function sets that flag and issues a self NMI right after that. If the NMI function sees the flag clear, it returns. If it's set it clears the flag and enters the rendezvous.
This is safe against a real NMI which hits in between setting the flag and sending the NMI to itself. The real NMI will be swallowed by the microcode update and the self NMI will then let stuff continue. Otherwise this would end up with a spurious NMI.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
b7fcd995 |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Rework intel_find_matching_signature()
Take a cpu_signature argument and work from there. Move the match() helper next to the callsite as there is no point for having it in a header.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
11f96ac4 |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Reuse intel_cpu_collect_info()
No point in keeping an almost duplicate function.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
164aa1ca |
| 17-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Rework intel_cpu_collect_info()
Nothing needs struct ucode_cpu_info. Make it take struct cpu_signature, let it return a boolean and simplify the implementation. Rename it now that the silly name clash with collect_cpu_info() is gone.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
3973718c |
| 17-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Unify microcode apply() functions
Deduplicate the early and late apply() functions.
[ bp: Rename the function which does the actual application to __apply_microcode() to differentiate it from microcode_ops.apply_microcode(). ]
Signed-off-by: Borislav Petkov (AMD) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
f24f2044 |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Switch to kvmalloc()
Microcode blobs are getting larger and might soon reach the kmalloc() limit. Switch over to kvmalloc().
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
2a1dada3 |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Save the microcode only after a successful late-load
There are situations where the late microcode is loaded into memory but is not applied:
1) The rendezvous fails
2) The microcode is rejected by the CPUs
If any of this happens then the pointer which was updated at firmware load time is stale and subsequent CPU hotplug operations either fail to update or create inconsistent microcode state.
Save the loaded microcode in a separate pointer before the late load is attempted and when successful, update the hotplug pointer accordingly via a new microcode_ops callback.
Remove the pointless fallback in the loader to a microcode pointer which is never populated.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
dd5e3e3c |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Simplify early loading
The early loading code is overly complicated:
- It scans the builtin/initrd for microcode not only on the BSP, but also on all APs during early boot and then later in the boot process it scans again to duplicate and save the microcode before initrd goes away.
That's a pointless exercise because this can be simply done before bringing up the APs when the memory allocator is up and running.
- Saving the microcode from within the scan loop is completely non-obvious and a left over of the microcode cache.
This can be done at the call site now which makes it obvious.
Rework the code so that only the BSP scans the builtin/initrd microcode once during early boot and save it away in an early initcall for later use.
[ bp: Test and fold in a fix from tglx ontop which handles the need to distinguish what save_microcode() does depending on when it is called:
- when on the BSP during early load, it needs to find a newer revision than the one currently loaded on the BSP
- later, before SMP init, it still runs on the BSP and gets the BSP revision just loaded and uses that revision to know which patch to save for the APs. For that it needs to find the exact one as on the BSP. ]
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
0177669e |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Cleanup code further
Sanitize the microcode scan loop, fixup printks and move the loading function for builtin microcode next to the place where it is used and mark it __init.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
6b072022 |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Simplify and rename generic_load_microcode()
Simplify it so it becomes less obfuscated, and rename it because there is nothing generic about it.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
b0f0bf5e |
| 02-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Simplify scan_microcode()
Make it readable and comprehensible.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
ae76d951 |
| 17-Oct-2023 |
Ashok Raj <[email protected]> |
x86/microcode/intel: Rip out mixed stepping support for Intel CPUs
Mixed steppings aren't supported on Intel CPUs. Only one microcode patch is required for the entire system. The caching of microcode blobs which match the family and model is therefore pointless and in fact is dysfunctional as CPU hotplug updates use only a single microcode blob, i.e. the one where *intel_ucode_patch points to.
Remove the microcode cache and make it an AMD local feature.
[ tglx: - save only at the end. Otherwise random microcode ends up in the pointer for early loading - free the ucode patch pointer in save_microcode_patch() only after kmemdup() has succeeded, as reported by Andrew Cooper ]
Originally-by: Thomas Gleixner <[email protected]> Signed-off-by: Ashok Raj <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
0b62f6cb |
| 17-Oct-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/32: Move early loading after paging enable
32-bit loads microcode before paging is enabled. The commit which introduced that has zero justification in the changelog. The cover letter has slightly more content, but it does not give any technical justification either:
"The problem in current microcode loading method is that we load a microcode way, way too late; ideally we should load it before turning paging on. This may only be practical on 32 bits since we can't get to 64-bit mode without paging on, but we should still do it as early as at all possible."
Handwaving word salad with zero technical content.
Someone claimed in an offlist conversation that this is required for curing the ATOM erratum AAE44/AAF40/AAG38/AAH41. That erratum requires a microcode update in order to make the usage of PSE safe. But during early boot, PSE is completely irrelevant and it is evaluated way later.
Neither is it relevant for the AP on single core HT enabled CPUs as the microcode loading on the AP is not doing anything.
On dual core CPUs there is a theoretical problem if a split of an executable large page between enabling paging including PSE and loading the microcode happens. But that's only theoretical, it's practically irrelevant because the affected dual core CPUs are 64bit enabled and therefore have paging and PSE enabled before loading the microcode on the second core. So why would it work on 64-bit but not on 32-bit?
The erratum:
"AAG38 Code Fetch May Occur to Incorrect Address After a Large Page is Split Into 4-Kbyte Pages
Problem: If software clears the PS (page size) bit in a present PDE (page directory entry), that will cause linear addresses mapped through this PDE to use 4-KByte pages instead of using a large page after old TLB entries are invalidated. Due to this erratum, if a code fetch uses this PDE before the TLB entry for the large page is invalidated then it may fetch from a different physical address than specified by either the old large page translation or the new 4-KByte page translation. This erratum may also cause speculative code fetches from incorrect addresses."
The practical relevance for this is exactly zero because there is no splitting of large text pages during early boot-time, i.e. between paging enable and microcode loading, and neither during CPU hotplug.
IOW, this load microcode before paging enable is yet another voodoo programming solution in search of a problem. What's worse is that it causes at least two serious problems:
1) When stackprotector is enabled, the microcode loader code has the stackprotector mechanics enabled. The read from the per CPU variable __stack_chk_guard is always accessing the virtual address either directly on UP or via %fs on SMP. In physical address mode this results in an access to memory above 3GB. So this works by chance as the hardware returns the same value when there is no RAM at this physical address. When there is RAM populated above 3G then the read is by chance the same as nothing changes that memory during the very early boot stage. That's not necessarily true during runtime CPU hotplug.
2) When function tracing is enabled, the relevant microcode loader functions and the functions invoked from there will call into the tracing code and evaluate global and per CPU variables in physical address mode. What could potentially go wrong?
Cure this and move the microcode loading after the early paging enable, use the new temporary initrd mapping and remove the gunk in the microcode loader which is required to handle physical address mode.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6 |
|
| #
d2700f40 |
| 12-Aug-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Remove pointless mutex
There is no concurrency.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
d44450c5 |
| 12-Aug-2023 |
Thomas Gleixner <[email protected]> |
x86/microcode/intel: Remove debug code
This is really of dubious value.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|