History log of /linux-6.15/arch/x86/kernel/cpu/bugs.c (Results 1 – 25 of 322)
Revision  Date  Author  Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2
# facd226f 02-Dec-2024 Pawan Gupta <[email protected]>

x86/its: Add support for RSB stuffing mitigation

When retpoline mitigation is enabled for spectre-v2, enabling
call-depth-tracking and RSB stuffing also mitigates ITS. Add cmdline option
indirect_target_selection=stuff to allow enabling RSB stuffing mitigation.

When retpoline mitigation is not enabled, =stuff option is ignored, and
default mitigation for ITS is deployed.

Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
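
As a rough illustration of the rule above, a minimal user-space sketch of how
the =stuff option could be resolved, assuming the decision reduces to two
inputs (retpoline selected, call-depth-tracking/RSB stuffing enabled); the
its_select() helper and the enum are hypothetical names, not the kernel's
implementation:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical model of the cmdline handling described above. */
enum its_mitigation { ITS_OFF, ITS_DEFAULT_THUNKS, ITS_STUFF };

static enum its_mitigation its_select(const char *opt, bool retpoline, bool rsb_stuff)
{
    /* =stuff only works on top of retpoline with RSB stuffing enabled. */
    if (!strcmp(opt, "stuff")) {
        if (retpoline && rsb_stuff)
            return ITS_STUFF;
        /* Otherwise the option is ignored and the default mitigation is used. */
        return ITS_DEFAULT_THUNKS;
    }
    if (!strcmp(opt, "off"))
        return ITS_OFF;
    return ITS_DEFAULT_THUNKS;
}

int main(void)
{
    printf("%d\n", its_select("stuff", true, true));   /* RSB stuffing mitigates ITS */
    printf("%d\n", its_select("stuff", false, false)); /* ignored, default deployed */
    return 0;
}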


Revision tags: v6.13-rc1
# 2665281a 18-Nov-2024 Pawan Gupta <[email protected]>

x86/its: Add "vmexit" option to skip mitigation on some CPUs

Ice Lake generation CPUs are not affected by guest/host isolation part of
ITS. If a user is only concerned about KVM guests, they can now

x86/its: Add "vmexit" option to skip mitigation on some CPUs

Ice Lake generation CPUs are not affected by guest/host isolation part of
ITS. If a user is only concerned about KVM guests, they can now choose a
new cmdline option "vmexit" that will not deploy the ITS mitigation when
CPU is not affected by guest/host isolation. This saves the performance
overhead of ITS mitigation on Ice Lake gen CPUs.

When "vmexit" option selected, if the CPU is affected by ITS guest/host
isolation, the default ITS mitigation is deployed.

Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
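
A minimal sketch of the "vmexit" option's effect as described above; the
helper name and the two boolean inputs are hypothetical simplifications of
the kernel's actual selection code:

#include <stdbool.h>
#include <stdio.h>

/* Skip the mitigation when the CPU is not affected by the guest/host
 * isolation part of ITS (e.g. Ice Lake generation), otherwise fall back
 * to the default ITS mitigation. */
static bool its_mitigation_needed(bool opt_vmexit_only, bool affected_guest_host)
{
    if (opt_vmexit_only && !affected_guest_host)
        return false;   /* nothing to protect at VM-exit, save the overhead */
    return true;        /* default ITS mitigation */
}

int main(void)
{
    printf("Ice Lake, vmexit option:     %d\n", its_mitigation_needed(true, false));
    printf("affected CPU, vmexit option: %d\n", its_mitigation_needed(true, true));
    return 0;
}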


Revision tags: v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5
# f4818881 22-Jun-2024 Pawan Gupta <[email protected]>

x86/its: Enable Indirect Target Selection mitigation

Indirect Target Selection (ITS) is a bug in some pre-ADL Intel CPUs with
eIBRS. It affects prediction of indirect branches and RETs in the
lower half of a cacheline. Due to ITS, such branches may get wrongly predicted
to the target of a (direct or indirect) branch that is located in the upper
half of the cacheline.

Scope of impact
===============

Guest/host isolation
--------------------
When eIBRS is used for guest/host isolation, the indirect branches in the
VMM may still be predicted with targets corresponding to branches in the
guest.

Intra-mode
----------
cBPF or other native gadgets can be used for intra-mode training and
disclosure using ITS.

User/kernel isolation
---------------------
When eIBRS is enabled user/kernel isolation is not impacted.

Indirect Branch Prediction Barrier (IBPB)
-----------------------------------------
After an IBPB, indirect branches may be predicted with targets
corresponding to direct branches which were executed prior to IBPB. This is
mitigated by a microcode update.

Add the cmdline parameter indirect_target_selection=off|on|force to control
the mitigation, which relocates the affected branches to an ITS-safe thunk,
i.e. one located in the upper half of the cacheline. Also add sysfs reporting.

When retpoline mitigation is deployed, ITS safe-thunks are not needed,
because retpoline sequence is already ITS-safe. Similarly, when call depth
tracking (CDT) mitigation is deployed (retbleed=stuff), ITS safe return
thunk is not used, as CDT prevents RSB-underflow.

To not overcomplicate things, ITS mitigation is not supported with
spectre-v2 lfence;jmp mitigation. Moreover, it is less practical to deploy
lfence;jmp mitigation on ITS affected parts anyways.

Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
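
The relocation criterion can be illustrated with a small sketch, assuming
64-byte cachelines and that only the branch's byte offset within its
cacheline matters (the kernel's real check may consider a different byte of
the instruction); the names are hypothetical:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHELINE_BYTES 64  /* assumption: 64-byte cachelines */

/* A branch sitting in the lower half of its cacheline may be mispredicted
 * under ITS; one placed in the upper half is considered ITS-safe. */
static bool its_unsafe_address(uint64_t addr)
{
    return (addr % CACHELINE_BYTES) < (CACHELINE_BYTES / 2);
}

int main(void)
{
    printf("offset 0x00: unsafe=%d\n", its_unsafe_address(0xffffffff81000000ULL));
    printf("offset 0x30: unsafe=%d\n", its_unsafe_address(0xffffffff81000030ULL));
    return 0;
}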


# 073fdbe0 05-May-2025 Pawan Gupta <[email protected]>

x86/bhi: Do not set BHI_DIS_S in 32-bit mode

With the possibility of intra-mode BHI via cBPF, complete mitigation for
BHI is to use IBHF (history fence) instruction with BHI_DIS_S set. Since
this new instruction is only available in 64-bit mode, setting BHI_DIS_S in
32-bit mode is only a partial mitigation.

Do not set BHI_DIS_S in 32-bit mode so as to avoid reporting misleading
mitigated status. With this change IBHF won't be used in 32-bit mode; also
remove the CONFIG_X86_64 check from emit_spectre_bhb_barrier().

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>


# 83f6665a 08-Apr-2025 Josh Poimboeuf <[email protected]>

x86/bugs: Add RSB mitigation document

Create a document to summarize hard-earned knowledge about RSB-related
mitigations, with references, and replace the overly verbose yet
incomplete comments with a reference to the document.

Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/ab73f4659ba697a974759f07befd41ae605e33dd.1744148254.git.jpoimboe@kernel.org


# 27ce8299 08-Apr-2025 Josh Poimboeuf <[email protected]>

x86/bugs: Don't fill RSB on context switch with eIBRS

User->user Spectre v2 attacks (including RSB) across context switches
are already mitigated by IBPB in cond_mitigation(), if enabled globally
or if either the prev or the next task has opted in to protection. RSB
filling without IBPB serves no purpose for protecting user space, as
indirect branches are still vulnerable.

User->kernel RSB attacks are mitigated by eIBRS, in which case the RSB
filling on context switch isn't needed, so remove it.

Suggested-by: Pawan Gupta <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Pawan Gupta <[email protected]>
Reviewed-by: Amit Shah <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Link: https://lore.kernel.org/r/98cdefe42180358efebf78e3b80752850c7a3e1b.1744148254.git.jpoimboe@kernel.org
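
A rough user-space model of the resulting policy, assuming it can be reduced
to a single eIBRS input; fill_rsb_on_context_switch() is a hypothetical
helper, not the kernel's code:

#include <stdbool.h>
#include <stdio.h>

static bool fill_rsb_on_context_switch(bool eibrs_enabled)
{
    /*
     * user->user: handled by IBPB in cond_mitigation() when opted in;
     * RSB filling without IBPB adds nothing there.
     * user->kernel: handled by eIBRS, so no RSB fill is needed either.
     */
    return !eibrs_enabled;
}

int main(void)
{
    printf("eIBRS on:  fill=%d\n", fill_rsb_on_context_switch(true));
    printf("eIBRS off: fill=%d\n", fill_rsb_on_context_switch(false));
    return 0;
}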


# 18bae0df 08-Apr-2025 Josh Poimboeuf <[email protected]>

x86/bugs: Don't fill RSB on VMEXIT with eIBRS+retpoline

eIBRS protects against guest->host RSB underflow/poisoning attacks.
Adding retpoline to the mix doesn't change that. Retpoline has a
balanced CALL/RET anyway.

So the current full RSB filling on VMEXIT with eIBRS+retpoline is
overkill. Disable it or do the VMEXIT_LITE mitigation if needed.

Suggested-by: Pawan Gupta <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Pawan Gupta <[email protected]>
Reviewed-by: Amit Shah <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Woodhouse <[email protected]>
Link: https://lore.kernel.org/r/84a1226e5c9e2698eae1b5ade861f1b8bf3677dc.1744148254.git.jpoimboe@kernel.org
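
A sketch of the VMEXIT decision described above, with hypothetical names and
the choice reduced to two booleans (eIBRS enabled, lite variant needed):

#include <stdbool.h>
#include <stdio.h>

enum rsb_vmexit { RSB_VMEXIT_NONE, RSB_VMEXIT_LITE, RSB_VMEXIT_FULL };

/* With eIBRS the guest cannot poison host RSB entries, so a full fill on
 * VMEXIT is overkill even when retpoline is also enabled; at most the
 * lighter VMEXIT_LITE variant is used when needed. */
static enum rsb_vmexit rsb_fill_on_vmexit(bool eibrs, bool need_lite)
{
    if (eibrs)
        return need_lite ? RSB_VMEXIT_LITE : RSB_VMEXIT_NONE;
    return RSB_VMEXIT_FULL;
}

int main(void)
{
    printf("eIBRS+retpoline: %d\n", rsb_fill_on_vmexit(true, false));
    printf("no eIBRS:        %d\n", rsb_fill_on_vmexit(false, false));
    return 0;
}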


# b1b19cfc 08-Apr-2025 Josh Poimboeuf <[email protected]>

x86/bugs: Fix RSB clearing in indirect_branch_prediction_barrier()

IBPB is expected to clear the RSB. However, if X86_BUG_IBPB_NO_RET is
set, that doesn't happen. Make indirect_branch_prediction_barrier()
take that into account by calling write_ibpb() which clears RSB on
X86_BUG_IBPB_NO_RET:

/* Make sure IBPB clears return stack predictions too. */
FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET

Note that, as of the previous patch, write_ibpb() also reads
'x86_pred_cmd' in order to use SBPB when applicable:

movl _ASM_RIP(x86_pred_cmd), %eax

Therefore that existing behavior in indirect_branch_prediction_barrier()
is not lost.

Fixes: 50e4b3b94090 ("x86/entry: Have entry_ibpb() invalidate return predictions")
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Link: https://lore.kernel.org/r/bba68888c511743d4cd65564d1fc41438907523f.1744148254.git.jpoimboe@kernel.org
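
A compact user-space model of the fix: the barrier routine goes through a
write_ibpb()-style helper that performs the MSR write (IBPB or SBPB per
x86_pred_cmd) and, when the IBPB_NO_RET erratum is present, also clears the
RSB. The helper bodies below are printf stand-ins for the real MSR write and
asm sequence:

#include <stdbool.h>
#include <stdio.h>

static bool cpu_bug_ibpb_no_ret = true; /* X86_BUG_IBPB_NO_RET, assumed set here */

static void msr_write_pred_cmd(void) { puts("WRMSR IA32_PRED_CMD (IBPB or SBPB)"); }
static void fill_return_buffer(void) { puts("FILL_RETURN_BUFFER: clear RSB entries"); }

/* Model of write_ibpb(): barrier plus RSB clearing on affected CPUs. */
static void write_ibpb(void)
{
    msr_write_pred_cmd();
    if (cpu_bug_ibpb_no_ret)
        fill_return_buffer();   /* IBPB alone doesn't clear return predictions */
}

/* After the fix, this calls write_ibpb() instead of issuing a bare MSR write. */
static void indirect_branch_prediction_barrier(void)
{
    write_ibpb();
}

int main(void)
{
    indirect_branch_prediction_barrier();
    return 0;
}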


# 13235d6d 08-Apr-2025 Josh Poimboeuf <[email protected]>

x86/bugs: Rename entry_ibpb() to write_ibpb()

There's nothing entry-specific about entry_ibpb(). In preparation for
calling it from elsewhere, rename it to write_ibpb().

Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/1e54ace131e79b760de3fe828264e26d0896e3ac.1744148254.git.jpoimboe@kernel.org


# 98fdaeb2 31-Oct-2024 Breno Leitao <[email protected]>

x86/bugs: Make spectre user default depend on MITIGATION_SPECTRE_V2

Change the default value of spectre v2 in user mode to respect the
CONFIG_MITIGATION_SPECTRE_V2 config option.

Currently, user mode spectre v2 is set to auto
(SPECTRE_V2_USER_CMD_AUTO) by default, even if
CONFIG_MITIGATION_SPECTRE_V2 is disabled.

Set the spectre_v2 value to auto (SPECTRE_V2_USER_CMD_AUTO) if the
Spectre v2 config (CONFIG_MITIGATION_SPECTRE_V2) is enabled, otherwise
set the value to none (SPECTRE_V2_USER_CMD_NONE).

Note that the command line argument "spectre_v2_user" overrides the
default value in both cases.

When CONFIG_MITIGATION_SPECTRE_V2 is not set, users have the flexibility
to opt-in for specific mitigations independently. In this scenario,
setting spectre_v2= will not enable spectre_v2_user=, and command line
options spectre_v2_user and spectre_v2 are independent when
CONFIG_MITIGATION_SPECTRE_V2=n.

Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Pawan Gupta <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: David Kaplan <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 2a08b832 31-Oct-2024 Breno Leitao <[email protected]>

x86/bugs: Use the cpu_smt_possible() helper instead of open-coded code

There is a helper function to check if SMT is available. Use this helper
instead of performing the check manually.

The helper function cpu_smt_possible() does exactly the same thing as
was being done manually inside spectre_v2_user_select_mitigation().
Specifically, it returns false if CONFIG_SMP is disabled, otherwise
it checks the cpu_smt_control global variable.

This change improves code consistency and reduces duplication.

No change in functionality intended.

Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Pawan Gupta <[email protected]>
Cc: David Kaplan <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# b8ce25df 08-Jan-2025 David Kaplan <[email protected]>

x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds

Add AUTO mitigations for mds/taa/mmio/rfds to create consistent vulnerability
handling. These AUTO mitigations will be turned into the appropriate default
mitigations in the <vuln>_select_mitigation() functions. Later, these will be
used with the new attack vector controls to help select appropriate
mitigations.

Signed-off-by: David Kaplan <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 2c93762e 08-Jan-2025 David Kaplan <[email protected]>

x86/bugs: Relocate mds/taa/mmio/rfds defines

Move the mds, taa, mmio, and rfds mitigation enums earlier in the file to
prepare for restructuring of these mitigations as they are all inter-related.

No functional change.

Signed-off-by: David Kaplan <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 8f64eee7 27-Feb-2025 Yosry Ahmed <[email protected]>

x86/bugs: Remove X86_FEATURE_USE_IBPB

X86_FEATURE_USE_IBPB was introduced in:

2961298efe1e ("x86/cpufeatures: Clean up Spectre v2 related CPUID flags")

to have separate flags for when the CPU supports IBPB (i.e. X86_FEATURE_IBPB)
and when an IBPB is actually used to mitigate Spectre v2.

Ever since then, the uses of IBPB expanded. The name became confusing
because it does not control all IBPB executions in the kernel.
Furthermore, because its name is generic and it's buried within
indirect_branch_prediction_barrier(), it's easy to use it not knowing
that it is specific to Spectre v2.

X86_FEATURE_USE_IBPB is no longer needed because all the IBPB executions
it used to control are now controlled through other means (e.g.
switch_mm_*_ibpb static branches).

Remove the unused feature bit.

Signed-off-by: Yosry Ahmed <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 80dacb08 27-Feb-2025 Yosry Ahmed <[email protected]>

x86/bugs: Use a static branch to guard IBPB on vCPU switch

Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in KVM
when a new vCPU is loaded, introduce a static branch, similar to
switch_mm_*_ibpb.

This makes it obvious in spectre_v2_user_select_mitigation() what
exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
(which will be shortly removed). It also provides more fine-grained
control, making it simpler to change/add paths that control the IBPB in
the vCPU switch path without affecting other IBPBs.

Signed-off-by: Yosry Ahmed <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# bd9a8542 27-Feb-2025 Yosry Ahmed <[email protected]>

x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set()

If X86_FEATURE_USE_IBPB is not set, then both spectre_v2_user_ibpb and
spectre_v2_user_stibp are set to SPECTRE_V2_USER_NONE in
spectre_v2_user_select_mitigation(). Since ib_prctl_set() already checks
for this before performing the IBPB, the X86_FEATURE_USE_IBPB check is
redundant. Remove it.

Signed-off-by: Yosry Ahmed <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 549435aa 27-Feb-2025 Yosry Ahmed <[email protected]>

x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers

indirect_branch_prediction_barrier() only performs the MSR write if
X86_FEATURE_USE_IBPB is set, using alternative_msr_write(). In
preparation for removing X86_FEATURE_USE_IBPB, move the feature check
into the callers so that they can be addressed one-by-one, and use
X86_FEATURE_IBPB instead to guard the MSR write.

Signed-off-by: Yosry Ahmed <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Acked-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 8442df2b 18-Feb-2025 Borislav Petkov <[email protected]>

x86/bugs: KVM: Add support for SRSO_MSR_FIX

Add support for

CPUID Fn8000_0021_EAX[31] (SRSO_MSR_FIX). If this bit is 1, it
indicates that software may use MSR BP_CFG[BpSpecReduce] to mitigate
SRSO.

Enable BpSpecReduce to mitigate SRSO across guest/host boundaries.

Switch back to enabling the bit when virtualization is enabled and
clearing it when virtualization is disabled, because using an MSR slot
would clear the bit when the guest exits, and any training the guest has
done could then influence the host kernel when execution enters the
kernel but has not yet VMRUN the guest.

More detail on the public thread in Link below.

Co-developed-by: Sean Christopherson <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 318e8c33 05-Feb-2025 Patrick Bellasi <[email protected]>

x86/cpu/kvm: SRSO: Fix possible missing IBPB on VM-Exit

In [1] the meaning of the synthetic IBPB flags has been redefined for a
better separation of concerns:
- ENTRY_IBPB -- issue IBPB on entry only
- IBPB_ON_VMEXIT -- issue IBPB on VM-Exit only
and the Retbleed mitigations have been updated to match these new
semantics.

Commit [2] was merged shortly before [1], and their interaction was not
handled properly. This resulted in IBPB not being triggered on VM-Exit
in all SRSO mitigation configs requesting an IBPB there.

Specifically, an IBPB on VM-Exit is triggered only when
X86_FEATURE_IBPB_ON_VMEXIT is set. However:

- X86_FEATURE_IBPB_ON_VMEXIT is not set for "spec_rstack_overflow=ibpb",
because before [1] having X86_FEATURE_ENTRY_IBPB was enough. Hence,
an IBPB is triggered on entry but the expected IBPB on VM-exit is
not.

- X86_FEATURE_IBPB_ON_VMEXIT is not set also when
"spec_rstack_overflow=ibpb-vmexit" if X86_FEATURE_ENTRY_IBPB is
already set.

That's because before [1] this was effectively redundant. Hence, e.g.
a "retbleed=ibpb spec_rstack_overflow=bpb-vmexit" config mistakenly
reports the machine still vulnerable to SRSO, despite an IBPB being
triggered both on entry and VM-Exit, because of the Retbleed selected
mitigation config.

- UNTRAIN_RET_VM still won't actually do anything unless
CONFIG_MITIGATION_IBPB_ENTRY is set.

For "spec_rstack_overflow=ibpb", enable IBPB on both entry and VM-Exit
and clear X86_FEATURE_RSB_VMEXIT which is made superfluous by
X86_FEATURE_IBPB_ON_VMEXIT. This effectively makes this mitigation
option similar to the one for 'retbleed=ibpb'; thus re-order the code
for the RETBLEED_MITIGATION_IBPB option to be less confusing by enabling
all features before disabling the ones that are not needed.

For "spec_rstack_overflow=ibpb-vmexit", guard this mitigation setting
with CONFIG_MITIGATION_IBPB_ENTRY to ensure UNTRAIN_RET_VM sequence is
effectively compiled in. Drop instead the CONFIG_MITIGATION_SRSO guard,
since none of the SRSO compile cruft is required in this configuration.
Also, check only that the required microcode is present to effectively
enable the IBPB on VM-Exit.

Finally, update the KConfig description for CONFIG_MITIGATION_IBPB_ENTRY
to list also all SRSO config settings enabled by this guard.

Fixes: 864bcaa38ee4 ("x86/cpu/kvm: Provide UNTRAIN_RET_VM") [1]
Fixes: d893832d0e1e ("x86/srso: Add IBPB on VMEXIT") [2]
Reported-by: Yosry Ahmed <[email protected]>
Signed-off-by: Patrick Bellasi <[email protected]>
Reviewed-by: Borislav Petkov (AMD) <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
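
A bitflag sketch of the corrected selection: =ibpb sets both the entry and
VM-Exit IBPB features and drops RSB_VMEXIT, while =ibpb-vmexit now depends on
CONFIG_MITIGATION_IBPB_ENTRY. The flag macros and the srso_select() helper
are simplified stand-ins for the kernel code:

#include <stdio.h>
#include <string.h>

#define F_ENTRY_IBPB      (1u << 0) /* X86_FEATURE_ENTRY_IBPB */
#define F_IBPB_ON_VMEXIT  (1u << 1) /* X86_FEATURE_IBPB_ON_VMEXIT */
#define F_RSB_VMEXIT      (1u << 2) /* X86_FEATURE_RSB_VMEXIT */

#define CONFIG_MITIGATION_IBPB_ENTRY 1  /* assumed =y for this sketch */

static unsigned int srso_select(const char *opt, unsigned int feat)
{
    if (!strcmp(opt, "ibpb")) {
        feat |= F_ENTRY_IBPB | F_IBPB_ON_VMEXIT;    /* IBPB on entry and VM-Exit */
        feat &= ~F_RSB_VMEXIT;                      /* made superfluous */
    } else if (!strcmp(opt, "ibpb-vmexit") && CONFIG_MITIGATION_IBPB_ENTRY) {
        feat |= F_IBPB_ON_VMEXIT;   /* no longer skipped when ENTRY_IBPB is set */
    }
    return feat;
}

int main(void)
{
    printf("ibpb:        %#x\n", srso_select("ibpb", F_RSB_VMEXIT));
    printf("ibpb-vmexit: %#x\n", srso_select("ibpb-vmexit", F_ENTRY_IBPB));
    return 0;
}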


# 87781880 11-Nov-2024 Borislav Petkov (AMD) <[email protected]>

x86/bugs: Add SRSO_USER_KERNEL_NO support

If the machine has:

CPUID Fn8000_0021_EAX[30] (SRSO_USER_KERNEL_NO) -- If this bit is 1,
it indicates the CPU is not subject to the SRSO vulnerability across
user/kernel boundaries.

have it fall back to IBPB on VMEXIT only, in the case it is going to run
VMs:

Speculative Return Stack Overflow: Mitigation: IBPB on VMEXIT only

Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# c62fa117 08-Oct-2024 Johannes Wikner <[email protected]>

x86/bugs: Do not use UNTRAIN_RET with IBPB on entry

Since X86_FEATURE_ENTRY_IBPB will invalidate all harmful predictions
with IBPB, no software-based untraining of returns is needed anymore.
Currently, this change affects retbleed and SRSO mitigations so if
either of the mitigations is doing IBPB and the other one does the
software sequence, the latter is not needed anymore.

[ bp: Massage commit message. ]

Suggested-by: Borislav Petkov <[email protected]>
Signed-off-by: Johannes Wikner <[email protected]>
Cc: <[email protected]>


# 0fad2878 08-Oct-2024 Johannes Wikner <[email protected]>

x86/bugs: Skip RSB fill at VMEXIT

entry_ibpb() is designed to follow Intel's IBPB specification regardless
of CPU. This includes invalidating RSB entries.

Hence, if IBPB on VMEXIT has been selected, entry_ibpb() as part of the
RET untraining in the VMEXIT path will take care of all BTB and RSB
clearing so there's no need to explicitly fill the RSB anymore.

[ bp: Massage commit message. ]

Suggested-by: Borislav Petkov <[email protected]>
Signed-off-by: Johannes Wikner <[email protected]>
Cc: <[email protected]>


# 1dbb6b14 04-Sep-2024 David Kaplan <[email protected]>

x86/bugs: Fix handling when SRSO mitigation is disabled

When the SRSO mitigation is disabled, either via mitigations=off or
spec_rstack_overflow=off, the warning about the lack of IBPB-enhancing
microcode is printed anyway.

This is unnecessary since the user has turned off the mitigation.

[ bp: Massage, drop SBPB rationale as it doesn't matter because when
mitigations are disabled x86_pred_cmd is not being used anyway. ]

Signed-off-by: David Kaplan <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 225f2bd0 29-Jul-2024 Breno Leitao <[email protected]>

x86/bugs: Add a separate config for GDS

Currently, CONFIG_SPECULATION_MITIGATIONS is only halfway populated: some
mitigations have Kconfig entries and can be modified, while other mitigations
have no Kconfig entries and cannot be controlled at build time.

Create a new kernel config that allows GDS to be completely disabled,
similarly to the "gather_data_sampling=off" or "mitigations=off" kernel
command-line.

Now, there are two options for GDS mitigation:

* CONFIG_MITIGATION_GDS=n -> Mitigation disabled (New)
* CONFIG_MITIGATION_GDS=y -> Mitigation enabled (GDS_MITIGATION_FULL)

Suggested-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 03267a53 29-Jul-2024 Breno Leitao <[email protected]>

x86/bugs: Remove GDS Force Kconfig option

Remove the MITIGATION_GDS_FORCE Kconfig option, which aggressively disables
AVX as a mitigation for Gather Data Sampling (GDS) vulnerabilities. This
option is not widely used by distros.

While removing the Kconfig option, retain the runtime configuration ability
through the `gather_data_sampling=force` kernel parameter. This allows users
to still enable this aggressive mitigation if needed, without baking it into
the kernel configuration.

Simplify the kernel configuration while maintaining flexibility for runtime
mitigation choices.

Suggested-by: Borislav Petkov <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Daniel Sneddon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
