Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12 |
# 00199ed6 | 16-Nov-2024 | Shrikanth Hegde <[email protected]>
powerpc: Add preempt lazy support
Define the preempt lazy bit for powerpc. Use bit 9, which is free and within the 16-bit range of NEED_RESCHED, so the compiler can issue a single andi.
Since powerpc doesn't use the generic entry/exit code, add the lazy check at exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it for return to kernel.
Ran a few benchmarks and a DB workload on Power10. Performance is close to preempt=none/voluntary.
Since powerpc systems can have large core counts and large memory, preempt lazy should help avoid soft lockup issues.
Reviewed-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Ankur Arora <[email protected]> Signed-off-by: Shrikanth Hegde <[email protected]> Signed-off-by: Madhavan Srinivasan <[email protected]> Link: https://patch.msgid.link/[email protected]
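To illustrate the idea only (a minimal sketch; the bit numbers and helper below are assumptions for illustration, not the actual powerpc defines):

    /* Sketch: a lazy resched TIF bit kept in the low 16 bits of the thread
     * flags, next to TIF_NEED_RESCHED, so one 16-bit immediate AND (a single
     * andi. on powerpc) can test both at once. Bit positions illustrative. */
    #define TIF_NEED_RESCHED        2
    #define TIF_NEED_RESCHED_LAZY   9

    #define _TIF_NEED_RESCHED       (1UL << TIF_NEED_RESCHED)
    #define _TIF_NEED_RESCHED_LAZY  (1UL << TIF_NEED_RESCHED_LAZY)

    /* On exit to user, either flag sends us through the scheduler. */
    static inline bool need_resched_user_exit(unsigned long ti_flags)
    {
            return ti_flags & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY);
    }

Per the commit text, return-to-kernel preemption still keys off _TIF_NEED_RESCHED (under CONFIG_PREEMPTION); the lazy bit only matters on the way out to user space.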
Revision tags: v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4 |
# d65d411c | 25-Jul-2023 | Valentin Schneider <[email protected]>
treewide: context_tracking: Rename CONTEXT_* into CT_STATE_*
Context tracking state related symbols currently use a mix of the CONTEXT_ (e.g. CONTEXT_KERNEL) and CT_STATE_ (e.g. CT_STATE_MASK) prefixes.
Clean up the naming and make the ctx_state enum use the CT_STATE_ prefix.
Suggested-by: Frederic Weisbecker <[email protected]> Signed-off-by: Valentin Schneider <[email protected]> Acked-by: Frederic Weisbecker <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Neeraj Upadhyay <[email protected]>
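For illustration, the shape of the result (a simplified sketch of the renamed enum, not the full definition in include/linux/context_tracking_state.h; values are illustrative):

    /* Simplified sketch: one CT_STATE_ prefix for all context-tracking
     * states, instead of a mix of CONTEXT_KERNEL-style and CT_STATE_MASK-
     * style names. */
    enum ctx_state {
            CT_STATE_DISABLED = -1,
            CT_STATE_KERNEL   = 0,
            CT_STATE_IDLE     = 1,
            CT_STATE_USER     = 2,
            CT_STATE_GUEST    = 3,
    };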
# 932562a6 | 15-Dec-2023 | Kent Overstreet <[email protected]>
rseq: Split out rseq.h from sched.h
We're trying to get sched.h down to more or less just types only, not code - rseq can live in its own header.
This helps us kill the dependency on preempt.h in sched.h.
Signed-off-by: Kent Overstreet <[email protected]>
Revision tags: v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2 |
# be286b86 | 10-May-2023 | Rohan McLure <[email protected]>
powerpc: Mark [h]srr_valid accesses in check_return_regs_valid
Checks to see if the [H]SRR registers have been clobbered by (soft) NMI interrupts imply the possibility for a data race on the [h]srr_valid entries in the PACA. Annotate accesses to these fields with READ_ONCE, removing the need for the barrier.
The diagnostic can use plain-access reads and writes, but annotate with data_race.
Signed-off-by: Rohan McLure <[email protected]> Reported-by: Michael Ellerman <[email protected]> Reviewed-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://msgid.link/[email protected]
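A minimal sketch of the two annotation patterns being applied (the struct and function here are illustrative stand-ins for the PACA fields and check_return_regs_valid()):

    #include <linux/compiler.h>   /* READ_ONCE(), data_race() */
    #include <linux/printk.h>
    #include <linux/types.h>

    /* Illustrative stand-in for the PACA fields. */
    struct srr_state {
            u8 srr_valid;
            u8 hsrr_valid;
    };

    static void check_example(struct srr_state *s)
    {
            /* The field can be clobbered by a (soft) NMI at any time, so mark
             * the racy read: READ_ONCE() documents it and silences KCSAN
             * without needing a barrier. */
            if (!READ_ONCE(s->srr_valid))
                    return;

            /* Purely diagnostic access: a stale value only affects the
             * printout, so annotate it with data_race() instead. */
            pr_warn("hsrr_valid=%u\n", data_race(s->hsrr_valid));
    }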
# 0eb089a7 | 05-Jun-2023 | Christophe Leroy <[email protected]>
powerpc/interrupt: Don't read MSR from interrupt_exit_kernel_prepare()
A disassembly of interrupt_exit_kernel_prepare() shows a useless read of MSR register. This is shown by r9 being re-used immediately without doing anything with the value read.
    c000e0e0:  60 00 00 00     nop
    c000e0e4:  7d 3a c2 a6     mfmd_ap r9
    c000e0e8:  7d 20 00 a6     mfmsr   r9
    c000e0ec:  7c 51 13 a6     mtspr   81,r2
    c000e0f0:  81 3f 00 84     lwz     r9,132(r31)
    c000e0f4:  71 29 80 00     andi.   r9,r9,32768
This is due to the use of local_irq_save(). The flags read by local_irq_save() are never used, so use local_irq_disable() instead.
Fixes: 13799748b957 ("powerpc/64: use interrupt restart table to speed up return from interrupt") Signed-off-by: Christophe Leroy <[email protected]> Reviewed-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://msgid.link/df36c6205ab64326fb1b991993c82057e92ace2f.1685955214.git.christophe.leroy@csgroup.eu
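The change boils down to this pattern (sketch only; assumes the saved flags really are dead, as the disassembly above shows):

    #include <linux/irqflags.h>

    static void before_example(void)
    {
            unsigned long flags;

            /* local_irq_save() has to read the MSR to fill 'flags', but the
             * value is never restored or inspected, so the mfmsr is wasted. */
            local_irq_save(flags);
            /* ... */
    }

    static void after_example(void)
    {
            /* local_irq_disable() just disables interrupts; no MSR read. */
            local_irq_disable();
            /* ... */
    }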
Revision tags: v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6 |
# e5b6634a | 06-Apr-2023 | Michael Ellerman <[email protected]>
powerpc/irq: Mark check_return_regs_valid() notrace
check_return_regs_valid() is called from the middle of the irq exit handling, which is all notrace, so mark it notrace also.
Reported-by: Sachin Sant <[email protected]> Link: https://lore.kernel.org/all/[email protected]/ Signed-off-by: Michael Ellerman <[email protected]> Link: https://msgid.link/[email protected]
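The fix is just the annotation; roughly (a sketch, body elided):

    struct pt_regs;

    /* Called from the middle of the irq exit path, which is all notrace, so
     * it must not be traced either ('notrace' comes from the kernel's
     * compiler headers). */
    static notrace void check_return_regs_valid(struct pt_regs *regs)
    {
            /* ... SRR/HSRR validity checks elided ... */
    }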
Revision tags: v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8 |
# 2ea31e2e | 06-Feb-2023 | Nicholas Piggin <[email protected]>
powerpc/64s/interrupt: Fix interrupt exit race with security mitigation switch
The RFI and STF security mitigation options can flip the interrupt_exit_not_reentrant static branch condition concurrently with the interrupt exit code which tests that branch.
Interrupt exit tests this condition to set MSR[EE|RI] for exit, then again in the case a soft-masked interrupt is found pending, to recover the MSR so the interrupt can be replayed before attempting to exit again. If the condition changes between these two tests, the MSR and irq soft-mask state will become corrupted, leading to warnings and possible crashes. For example, if the branch is initially true then false, MSR[EE] will be 0 but PACA_IRQ_HARD_DIS clear and EE may not get enabled, leading to warnings in irq_64.c.
Fixes: 13799748b957 ("powerpc/64: use interrupt restart table to speed up return from interrupt") Cc: [email protected] # v5.14+ Reported-by: Sachin Sant <[email protected]> Tested-by: Sachin Sant <[email protected]> Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
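One way to picture a fix for this class of race (a sketch, not necessarily the exact change the patch makes): sample the switchable condition once and use that cached value for both decisions.

    #include <linux/jump_label.h>
    #include <linux/types.h>

    DEFINE_STATIC_KEY_FALSE(example_exit_not_reentrant);   /* illustrative key */

    static void interrupt_exit_example(void)
    {
            /* Read the (concurrently flippable) condition once, so the MSR
             * setup and the later soft-masked-replay recovery agree. */
            bool not_reentrant = static_branch_unlikely(&example_exit_not_reentrant);

            if (not_reentrant) {
                    /* set up MSR[EE|RI] for a non-reentrant exit ... */
            }

            /* ... if a soft-masked interrupt is found pending ... */
            if (not_reentrant) {
                    /* recover the MSR consistently with the first test ... */
            }
    }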
Revision tags: v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1 |
# dc398a08 | 06-Oct-2022 | Nicholas Piggin <[email protected]>
powerpc/64s/interrupt: Perf NMI should not take normal exit path
NMI interrupts should exit with EXCEPTION_RESTORE_REGS, not with interrupt_return_srr, which is what the perf NMI handler currently does. This breaks if a PMI hits after interrupt_exit_user_prepare_main() has switched the context tracking to user mode: the CT_WARN_ON() in interrupt_exit_kernel_prepare() then fires because it returns to kernel with context set to user.
This could possibly be solved by soft-disabling PMIs in the exit path, but that reduces our ability to profile that code. The warning could be removed, but it's potentially useful.
All other NMIs and soft-NMIs return using EXCEPTION_RESTORE_REGS, so this makes perf interrupts consistent with that and seems like the best fix.
Signed-off-by: Nicholas Piggin <[email protected]> [mpe: Squash in fixups from Nick] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
# a073672e | 14-Oct-2022 | Nicholas Piggin <[email protected]>
powerpc/64/interrupt: Prevent NMI PMI causing a dangerous warning
NMI PMIs really should not return using the normal interrupt_return function. If such a PMI hits in code returning to user with the context switched to user mode, this warning can fire. This was enough to cause crashes when reproducing on 64s, because another perf interrupt would hit while reporting the bug, causing another bug, and so on until the stack was smashed.
Work around that particular crash for now by just disabling that context warning for PMIs. This is a hack and not a complete fix, there could be other such problems lurking in corners. But it does fix the known crash.
Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
Revision tags: v6.0 |
# e485f6c7 | 26-Sep-2022 | Nicholas Piggin <[email protected]>
powerpc/64/interrupt: Fix return to masked context after hard-mask irq becomes pending
If a synchronous interrupt (e.g., hash fault) is taken inside an irqs-disabled region which has MSR[EE]=1, and an asynchronous interrupt that is PACA_IRQ_MUST_HARD_MASK (e.g., PMI) is then taken inside the synchronous interrupt handler, the synchronous interrupt will return with MSR[EE]=1 and the asynchronous interrupt fires again.
If the asynchronous interrupt is a PMI and the original context does not have PMIs disabled (only Linux IRQs), the asynchronous interrupt will fire despite having the PMI marked soft pending. This can confuse the perf code and cause warnings.
This patch changes the interrupt return so that irqs-disabled MSR[EE]=1 contexts will be returned to with MSR[EE]=0 if a PACA_IRQ_MUST_HARD_MASK interrupt has become pending in the meantime.
The longer explanation for what happens:
1. local_irq_disable()
2. Hash fault interrupt fires, do_hash_fault handler runs
3. interrupt_enter_prepare() sets IRQS_ALL_DISABLED
4. interrupt_enter_prepare() sets MSR[EE]=1
5. PMU interrupt fires, masked handler runs
6. Masked handler marks PMI pending
7. Masked handler returns with PACA_IRQ_HARD_DIS set, MSR[EE]=0
8. do_hash_fault interrupt return handler runs
9. interrupt_exit_kernel_prepare() clears PACA_IRQ_HARD_DIS
10. interrupt returns with MSR[EE]=1
11. PMU interrupt fires, perf handler runs
Fixes: 4423eb5ae32e ("powerpc/64/interrupt: make normal synchronous interrupts enable MSR[EE] if possible") Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
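Sketched in C with illustrative glue (PACA_IRQ_MUST_HARD_MASK, PACA_IRQ_HARD_DIS and MSR_EE are real powerpc symbols; the function and the exact condition are simplified assumptions, not the actual exit-path change):

    /* Sketch: before returning to an irqs-disabled context that had
     * MSR[EE]=1, drop EE if a must-hard-mask interrupt (e.g. a PMI) became
     * pending while the synchronous interrupt was being handled. */
    static void fixup_return_msr_example(struct pt_regs *regs)
    {
            if (local_paca->irq_happened & PACA_IRQ_MUST_HARD_MASK) {
                    regs->msr &= ~MSR_EE;                           /* return with EE clear   */
                    local_paca->irq_happened |= PACA_IRQ_HARD_DIS;  /* remember hard-disabled */
            }
    }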
Revision tags: v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5 |
# 1547db7d | 01-Jul-2022 | Xiu Jianfeng <[email protected]>
powerpc: Move system_call_exception() to syscall.c
This is a lead-up patch to enable syscall stack randomization, which uses alloca() and makes the compiler add unconditional stack canaries on syscall entry. In order to avoid triggering needless checks and slowing down the entry path, the feature needs to disable stack protector at the compilation unit level as there is no general way to control stack protector coverage with a function attribute.
So move system_call_exception() to syscall.c to avoid affecting other functions in interrupt.c.
Suggested-by: Michael Ellerman <[email protected]> Signed-off-by: Xiu Jianfeng <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
Revision tags: v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7 |
# 76222808 | 04-Mar-2022 | Christophe Leroy <[email protected]>
powerpc: Move C prototypes out of asm-prototypes.h
We originally added asm-prototypes.h in commit 42f5b4cacd78 ("powerpc: Introduce asm-prototypes.h"). Its purpose was for prototypes of C functions that are only called from asm, in order to fix sparse warnings about missing prototypes.
A few months later Nick added a different use case in commit 4efca4ed05cb ("kbuild: modversions for EXPORT_SYMBOL() for asm") for C prototypes for exported asm functions. This is basically the inverse of our original usage.
Since then we've added various prototypes to asm-prototypes.h for both reasons, meaning we now need to unstitch it all.
Dispatch prototypes of C functions into relevant headers and keep only the prototypes for functions defined in assembly.
For the time being, leave prom_init() there because moving it into asm/prom.h or asm/setup.h conflicts with drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowrom.o. This will be fixed later by untangling asm/pci.h and asm/prom.h, or by renaming the function in shadowrom.c.
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/62d46904eca74042097acf4cb12c175e3067f3d1.1646413435.git.christophe.leroy@csgroup.eu
Revision tags: v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7 |
# 937fb700 | 19-Oct-2021 | Christophe Leroy <[email protected]>
powerpc/kuap: Add kuap_lock()
Add kuap_lock() and call it when entering interrupts from user.
It is called kuap_lock() as it is similar to kuap_save_and_lock() without the save.
However, book3s/32 already has a kuap_lock(). Rename it kuap_lock_addr().
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/4437e2deb9f6f549f7089d45e9c6f96a7e77905a.1634627931.git.christophe.leroy@csgroup.eu
# 526d4a4c | 19-Oct-2021 | Christophe Leroy <[email protected]>
powerpc/32s: Do kuep_lock() and kuep_unlock() in assembly
When interrupt and syscall entries were converted to C, KUEP locking and unlocking was also converted. It improved performance by unrolling the loop, and allowed easily implementing boot time deactivation of KUEP.
However, null_syscall selftest shows that KUEP is still heavy (361 cycles with KUEP, 212 cycles without).
A way to improve more is to group 'mtsr's together, instead of repeating 'addi' + 'mtsr' several times.
In order to do that, more registers need to be available. In C, GCC will always be able to provide the requested number of registers, but at the cost of saving some data on the stack, which is counter performant here.
So let's do it in assembly, where we have full control of which registers can be used. It also has the advantage of locking earlier and unlocking later, and it helps GCC generate less tricky code. The only drawback is that it makes boot time deactivation less straightforward and requires 'hand' instruction patching.
Group 'mtsr's by 4.
With this change, null_syscall selftest reports 336 cycles. Without the change it was 361 cycles, that's a 7% reduction.
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/115cb279e9b9948dfd93a065e047081c59e3a2a6.1634627931.git.christophe.leroy@csgroup.eu
# 985faa78 | 29-Nov-2021 | Mark Rutland <[email protected]>
powerpc: Snapshot thread flags
Some thread flags can be set remotely, and so even when IRQs are disabled, the flags can change under our feet. Generally this is unlikely to cause a problem in practice, but it is somewhat unsound, and KCSAN will legitimately warn that there is a data race.
To avoid such issues, a snapshot of the flags has to be taken prior to using them. Some places already use READ_ONCE() for that, others do not.
Convert them all to the new flag accessor helpers.
Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Link: https://lore.kernel.org/r/[email protected]
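The conversion is to the snapshot-style accessors; roughly (a sketch; read_thread_flags() is the generic helper this series converts powerpc to):

    #include <linux/thread_info.h>

    static bool user_exit_work_pending(void)
    {
            /* Snapshot the flags once (the accessor is a READ_ONCE() under
             * the hood), so every test below sees one consistent value even
             * if another CPU sets a flag remotely in the meantime. */
            unsigned long ti_flags = read_thread_flags();

            return ti_flags & (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_NOTIFY_RESUME);
    }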
# 08b0af5b | 29-Nov-2021 | Mark Rutland <[email protected]>
powerpc: Avoid discarding flags in system_call_exception()
Some thread flags can be set remotely, and so even when IRQs are disabled, the flags can change under our feet. Thus, when setting flags we must use an atomic operation rather than a plain read-modify-write sequence, as a plain read-modify-write may discard flags which are concurrently set by a remote thread, e.g.
    // task A                            // task B
    tmp = A->thread_info.flags;
                                         set_tsk_thread_flag(A, NEWFLAG_B);
    tmp |= NEWFLAG_A;
    A->thread_info.flags = tmp;
arch/powerpc/kernel/interrupt.c's system_call_exception() sets _TIF_RESTOREALL in the thread info flags with a read-modify-write, which may result in other flags being discarded.
Elsewhere in the file it uses clear_bits() to atomically remove flag bits, so use set_bits() here for consistency with those.
There may be reasons (e.g. instrumentation) that prevent the use of set_thread_flag() and clear_thread_flag() here, which would otherwise be preferable.
Fixes: ae7aaecc3f2f78b7 ("powerpc/64s: system call rfscv workaround for TM bugs") Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Eirik Fuller <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Link: https://lore.kernel.org/r/[email protected]
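A sketch of the before/after on powerpc (illustrative function; set_bits() is the arch helper referred to above, an atomic OR of a mask):

    #include <linux/thread_info.h>
    #include <linux/bitops.h>

    static void mark_restoreall_example(void)
    {
            unsigned long ti_flags;

            /* Racy read-modify-write: a flag set remotely between the load
             * and the store (e.g. set_tsk_thread_flag() from another CPU)
             * is silently discarded. */
            ti_flags = current_thread_info()->flags;
            ti_flags |= _TIF_RESTOREALL;
            current_thread_info()->flags = ti_flags;

            /* Atomic update: concurrently-set flags survive. */
            set_bits(_TIF_RESTOREALL, &current_thread_info()->flags);
    }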
# 4a5cb51f | 26-Oct-2021 | Nicholas Piggin <[email protected]>
powerpc/64s/interrupt: Fix check_return_regs_valid() false positive
The check_return_regs_valid() can cause a false positive if the return regs are marked as norestart and they are an HSRR type interrupt, because the low bit in the bottom of regs->trap causes interrupt type matching to fail.
This can occur for example on bare metal with an HV privileged doorbell interrupt that causes a signal, but do_signal returns early because get_signal() fails and takes the "No signal to deliver" path. In this case no signal was delivered, so the return location is not changed and the return SRRs are not invalidated, yet set_trap_norestart() is called, which messes up the match. Building go-1.16.6 is known to reproduce this.
Fix it by using the TRAP() accessor which masks out the low bit.
Fixes: 6eaaf9de3599 ("powerpc/64s/interrupt: Check and fix srr_valid without crashing") Cc: [email protected] # v5.14+ Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
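The essence of the fix, sketched (simplified from the real matching logic; INTERRUPT_H_DOORBELL is used as an example vector constant):

    /* regs->trap carries flag bits in its low bit(s); set_trap_norestart()
     * sets one of them, after which a raw comparison against the vector
     * number no longer matches. TRAP() masks the flag bits out. */
    static bool is_hv_doorbell_example(struct pt_regs *regs)
    {
            /* Fragile: fails once set_trap_norestart() has been called.
             *   return regs->trap == INTERRUPT_H_DOORBELL;
             */

            /* Robust: compare the masked trap value. */
            return TRAP(regs) == INTERRUPT_H_DOORBELL;
    }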
Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1 |
# ae7aaecc | 08-Sep-2021 | Nicholas Piggin <[email protected]>
powerpc/64s: system call rfscv workaround for TM bugs
The rfscv instruction does not work correctly with the fake-suspend mode in POWER9, which can end up with the hypervisor restoring an incorrect checkpoint.
Work around this by setting the _TIF_RESTOREALL flag if a system call returns to a transaction active state, causing rfid to be used instead of rfscv to return, which will do the right thing. The contents of the registers are irrelevant because they will be overwritten in this case anyway.
Fixes: 7fa95f9adaee7 ("powerpc/64s: system call support for scv/rfscv instructions") Reported-by: Eirik Fuller <[email protected]> Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
# b871895b | 03-Sep-2021 | Nicholas Piggin <[email protected]>
powerpc/64s: system call scv tabort fix for corrupt irq soft-mask state
If a system call is made with a transaction active, the kernel immediately aborts it and returns. scv system calls disable irqs even earlier in their interrupt handler, and tabort_syscall does not fix this up.
This can result in irq soft-mask state being messed up on the next kernel entry, and crashing at BUG_ON(arch_irq_disabled_regs(regs)) in the kernel exit handlers, or possibly worse.
This can't easily be fixed in asm because at this point an async irq may have hit, which is soft-masked and marked pending. The pending interrupt has to be replayed before returning to userspace. The fix is to move the tabort_syscall code to C in the main syscall handler, and just skip the system call but otherwise return as usual, which will take care of the pending irqs. This also does a bunch of other things including possible signal delivery to the process, but the doomed transaction should still be aborted when it is eventually returned to.
The sc system call path is changed to use the new C function as well to reduce code and path differences. This slows down how quickly system calls are aborted when called while a transaction is active, which could potentially impact TM performance. But making any system call is already bad for performance, and TM is on the way out, so go with simpler over faster.
Fixes: 7fa95f9adaee7 ("powerpc/64s: system call support for scv/rfscv instructions") Reported-by: Eirik Fuller <[email protected]> Signed-off-by: Nicholas Piggin <[email protected]> [mpe: Use #ifdef rather than IS_ENABLED() to fix build error on 32-bit] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
Revision tags: v5.14 |
# 806c0e6e | 23-Aug-2021 | Christophe Leroy <[email protected]>
powerpc: Refactor verification of MSR_RI
40x and BOOKE don't have MSR_RI, therefore all tests involving MSR_RI may be problematic on those platforms.
Create helpers to check or set MSR_RI in regs, and use them in common code.
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/c2fb93708196734f4176dda334aaa3055f213b89.1629707037.git.christophe.leroy@csgroup.eu
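The shape of such helpers (a sketch; the name and the config guard are illustrative, not necessarily the ones the patch adds):

    #ifndef CONFIG_BOOKE_OR_40x
    /* Platforms with MSR_RI: unrecoverable means RI is clear in the
     * interrupted context. */
    static inline bool regs_is_unrecoverable_example(struct pt_regs *regs)
    {
            return unlikely(!(regs->msr & MSR_RI));
    }
    #else
    /* 40x and BOOKE have no MSR_RI bit, so never report unrecoverable. */
    static inline bool regs_is_unrecoverable_example(struct pt_regs *regs)
    {
            return false;
    }
    #endif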
# 133c17a1 | 23-Aug-2021 | Christophe Leroy <[email protected]>
powerpc: Remove MSR_PR check in interrupt_exit_{user/kernel}_prepare()
In those hot functions that are called at every interrupt, any saved cycle is worth it.
interrupt_exit_user_prepare() and interrupt_exit_kernel_prepare() are called from three places: - From entry_32.S - From interrupt_64.S - From interrupt_exit_user_restart() and interrupt_exit_kernel_restart()
In entry_32.S, they are unambiguously called based on MSR_PR:

    interrupt_return:
            lwz     r4,_MSR(r1)
            addi    r3,r1,STACK_FRAME_OVERHEAD
            andi.   r0,r4,MSR_PR
            beq     .Lkernel_interrupt_return
            bl      interrupt_exit_user_prepare
            ...
    .Lkernel_interrupt_return:
            bl      interrupt_exit_kernel_prepare
In interrupt_64.S, that's similar:
    interrupt_return_\srr\():
            ld      r4,_MSR(r1)
            andi.   r0,r4,MSR_PR
            beq     interrupt_return_\srr\()_kernel
    interrupt_return_\srr\()_user: /* make backtraces match the _kernel variant */
            addi    r3,r1,STACK_FRAME_OVERHEAD
            bl      interrupt_exit_user_prepare
            ...
    interrupt_return_\srr\()_kernel:
            addi    r3,r1,STACK_FRAME_OVERHEAD
            bl      interrupt_exit_kernel_prepare
In interrupt_exit_user_restart() and interrupt_exit_kernel_restart(), MSR_PR is verified respectively by BUG_ON(!user_mode(regs)) and BUG_ON(user_mode(regs)) prior to calling interrupt_exit_user_prepare() and interrupt_exit_kernel_prepare().
The verifications in interrupt_exit_user_prepare() and interrupt_exit_kernel_prepare() are therefore useless and can be removed.
Signed-off-by: Christophe Leroy <[email protected]> Acked-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/385ead49ccb66a259b25fee3eebf0bd4094068f3.1629707037.git.christophe.leroy@csgroup.eu
Revision tags: v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5 |
# e225c4d6 | 23-Mar-2021 | Wan Jiabing <[email protected]>
powerpc: Remove duplicate includes
interrupt.c: asm/interrupt.h has been included at line 12, so remove the duplicate one at line 10.
time.c: linux/sched/clock.h has been included at line 33, so remove the duplicate one at line 56 and move sched/cputime.h into the block of sched includes.
Signed-off-by: Wan Jiabing <[email protected]> Reviewed-by: Daniel Axtens <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
# 9b69d48c | 30-Jun-2021 | Nicholas Piggin <[email protected]>
powerpc/64e: remove implicit soft-masking and interrupt exit restart logic
The implicit soft-masking to speed up interrupt return was going to be used by 64e as well, but it has not been extensively tested on that platform and is not considered ready. It was intended to be disabled before merge. Disable it for now.
Most of the restart code is common with 64s, so with more correctness and performance testing this could be re-enabled again by adding the extra soft-mask checks to interrupt handlers and flipping exit_must_hard_disable().
Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked") Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
# b064037e | 25-Jun-2021 | Christophe Leroy <[email protected]>
powerpc/interrupt: Use names in check_return_regs_valid()
regs->trap == 0x3000 is trap_is_scv()
trap 0x500 is INTERRUPT_EXTERNAL
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/d48bf0184a1de185eb0ed3282247f8a294710674.1624632537.git.christophe.leroy@csgroup.eu
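In code terms, the substitution looks like this (a sketch; trap_is_scv() and INTERRUPT_EXTERNAL come from asm/ptrace.h and asm/interrupt.h, the function itself is illustrative):

    static void match_trap_example(struct pt_regs *regs, unsigned long trap)
    {
            /* Before: magic numbers. */
            if (regs->trap == 0x3000) { /* scv system call frame */ }
            if (trap == 0x500)        { /* external interrupt */ }

            /* After: self-describing names. */
            if (trap_is_scv(regs))          { /* scv system call frame */ }
            if (trap == INTERRUPT_EXTERNAL) { /* external interrupt */ }
    }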
# 767e6e71 | 25-Jun-2021 | Christophe Leroy <[email protected]>
powerpc/interrupt: Also use exit_must_hard_disable() on PPC32
Reduce #ifdefs a bit by making exit_must_hard_disable() return true on PPC32.
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/52531029563c1fc823b790058e799d0ca71b028c.1624631463.git.christophe.leroy@csgroup.eu