|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2 |
|
| #
9cac324d |
| 05-Feb-2025 |
Geert Uytterhoeven <[email protected]> |
ARM: 9442/1: smp: Fix IPI alignment in /proc/interrupts
On a system with less than 1000 interrupts, prec = 3, causing a misalignment for the IPI interrupts. E.g. on Koelsch (R-Car M2-W):
200:       0       0  gpio-rcar  6 Edge  SW36
IPI0:      0       0  CPU wakeup interrupts
IPI1:      0       0  Timer broadcast interrupts
IPI2:   1701    2844  Rescheduling interrupts
IPI3:  10338   21181  Function call interrupts
IPI4:      0       0  CPU stop interrupts
IPI5:    651     825  IRQ work interrupts
IPI6:      0       0  completion interrupts
Err:       0
Fix this by adopting the same solution as used on arm64.
Signed-off-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
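Editor's note: the arm64-style fix referred to above pads the "IPI" label so it occupies the same width that ordinary IRQ lines get from prec. A minimal sketch of that idea, loosely following arch/arm64/kernel/smp.c (an illustration, not the exact ARM patch):

    static void show_ipi_list(struct seq_file *p, int prec)
    {
            unsigned int cpu, i;

            for (i = 0; i < NR_IPI; i++) {
                    if (!ipi_desc[i])
                            continue;
                    /* Pad "IPI" to prec - 1 columns so "IPIn:" lines up with
                     * the "nnn:" prefix of regular interrupt lines even when
                     * prec == 3 (fewer than 1000 interrupts). */
                    seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
                               prec >= 4 ? " " : "");
                    for_each_online_cpu(cpu)
                            seq_printf(p, "%10u ",
                                       irq_desc_kstat_cpu(ipi_desc[i], cpu));
                    seq_printf(p, "      %s\n", ipi_types[i]);
            }
    }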
|
|
Revision tags: v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5 |
|
| #
8d539b84 |
| 04-Aug-2023 |
Douglas Anderson <[email protected]> |
nmi_backtrace: allow excluding an arbitrary CPU
The APIs that allow backtracing across CPUs have always had a way to exclude the current CPU. This convenience means callers didn't need to find a place to allocate a CPU mask just to handle the common case.
Let's extend the API to take a CPU ID to exclude instead of just a boolean. This isn't any more complex for the API to handle and allows the hardlockup detector to exclude a different CPU (the one it already did a trace for) without needing to find space for a CPU mask.
Arguably, this new API also encourages safer behavior. Specifically if the caller wants to avoid tracing the current CPU (maybe because they already traced the current CPU) this makes it more obvious to the caller that they need to make sure that the current CPU ID can't change.
[[email protected]: fix trigger_allbutcpu_cpu_backtrace() stub] Link: https://lkml.kernel.org/r/20230804065935.v4.1.Ia35521b91fc781368945161d7b28538f9996c182@changeid Signed-off-by: Douglas Anderson <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: kernel test robot <[email protected]> Cc: Lecopzer Chen <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Pingfan Liu <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
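Editor's note: the interface change described above, in sketch form (prototypes paraphrased from the commit message; the exact signatures and the "no exclusion" sentinel are assumptions, and already_traced_cpu is a placeholder):

    /* Old: a boolean could only exclude the calling CPU. */
    void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
                                       bool exclude_self,
                                       void (*raise)(cpumask_t *mask));

    /* New: pass the CPU id to exclude, or a negative value for "none". */
    void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
                                       int exclude_cpu,
                                       void (*raise)(cpumask_t *mask));

    /* The hardlockup detector can then skip the CPU it already dumped: */
    trigger_allbutcpu_cpu_backtrace(already_traced_cpu);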
|
|
Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2 |
|
| #
5490e769 |
| 12-May-2023 |
Thomas Gleixner <[email protected]> |
ARM: smp: Switch to hotplug core state synchronization
Switch to the CPU hotplug core state tracking and synchronization mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Tested-by: Guilherme G. Piccoli <[email protected]> # Steam Deck Link: https://lore.kernel.org/r/[email protected]
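Editor's note: in outline, the conversion replaces the arch-private "wait for the CPU to die" handshake with hooks provided by the hotplug core. A rough sketch of the shape of the change, assuming the hook names from the generic rework (cpuhp_ap_report_dead(), arch_cpuhp_cleanup_dead_cpu()); low_level_cpu_die() is a placeholder and this is not the literal diff:

    /* Dying CPU: report to the hotplug core before powering itself off. */
    void __noreturn arch_cpu_idle_dead(void)
    {
            idle_task_exit();
            cpuhp_ap_report_dead();
            low_level_cpu_die();    /* placeholder for the arch/SoC power-off path */
    }

    /* Controlling CPU: cleanup hook invoked by the core once the CPU is gone,
     * replacing the old arch-private completion/wait logic. */
    void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
    {
            pr_debug("CPU%u: shutdown\n", cpu);
            if (!platform_cpu_kill(cpu))
                    pr_err("CPU%u: unable to kill\n", cpu);
    }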
|
|
Revision tags: v6.4-rc1, v6.3, v6.3-rc7 |
|
| #
7412a60d |
| 12-Apr-2023 |
Josh Poimboeuf <[email protected]> |
cpu: Mark panic_smp_self_stop() __noreturn
In preparation for improving objtool's handling of weak noreturn functions, mark panic_smp_self_stop() __noreturn.
Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/92d76ab5c8bf660f04fdcd3da1084519212de248.1681342859.git.jpoimboe@kernel.org
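Editor's note: __noreturn is a contract with the compiler (and objtool): control never comes back, so the implementation must end in something that genuinely does not return. A sketch of what that looks like for the weak default (illustrative; see kernel/panic.c for the real definition):

    /* Declaration (linux/smp.h): */
    void __noreturn panic_smp_self_stop(void);

    /* Weak default: a stopped CPU just spins forever. */
    void __weak __noreturn panic_smp_self_stop(void)
    {
            while (1)
                    cpu_relax();
    }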
|
|
Revision tags: v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2 |
|
| #
4c8c3c7f |
| 07-Mar-2023 |
Valentin Schneider <[email protected]> |
treewide: Trace IPIs sent via smp_send_reschedule()
To be able to trace invocations of smp_send_reschedule(), rename the arch-specific definitions of it to arch_smp_send_reschedule() and wrap it into an smp_send_reschedule() that contains a tracepoint.
Changes to include the declaration of the tracepoint were driven by the following coccinelle script:
@func_use@
@@
smp_send_reschedule(...);

@include@
@@
#include <trace/events/ipi.h>

@no_include depends on func_use && !include@
@@
  #include <...>
+
+ #include <trace/events/ipi.h>
[csky bits] [riscv bits] Signed-off-by: Valentin Schneider <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Guo Ren <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Link: https://lore.kernel.org/r/[email protected]
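Editor's note: the resulting split between arch and generic code looks roughly like this. The arch body matches what 32-bit ARM does; the generic wrapper is shown as a function for clarity, and the tracepoint name and arguments are assumptions rather than taken from the diff:

    /* arch/arm/kernel/smp.c: the arch hook keeps the old behaviour. */
    void arch_smp_send_reschedule(int cpu)
    {
            smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
    }

    /* Generic wrapper: every caller of smp_send_reschedule() is now traced. */
    void smp_send_reschedule(int cpu)
    {
            trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);
            arch_smp_send_reschedule(cpu);
    }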
|
| #
cc9cb0a7 |
| 07-Mar-2023 |
Valentin Schneider <[email protected]> |
sched, smp: Trace IPIs sent via send_call_function_single_ipi()
send_call_function_single_ipi() is the thing that sends IPIs at the bottom of smp_call_function*() via either generic_exec_single() or smp_call_function_many_cond(). Give it an IPI-related tracepoint.
Note that this ends up tracing any IPI sent via __smp_call_single_queue(), which covers __ttwu_queue_wakelist() and irq_work_queue_on() "for free".
Signed-off-by: Valentin Schneider <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Steven Rostedt (Google) <[email protected]> Acked-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v6.3-rc1, v6.2 |
|
| #
071c44e4 |
| 14-Feb-2023 |
Josh Poimboeuf <[email protected]> |
sched/idle: Mark arch_cpu_idle_dead() __noreturn
Before commit 076cbf5d2163 ("x86/xen: don't let xen_pv_play_dead() return"), in Xen, when a previously offlined CPU was brought back online, it unexpectedly resumed execution where it left off in the middle of the idle loop.
There were some hacks to make that work, but the behavior was surprising as do_idle() doesn't expect an offlined CPU to return from the dead (in arch_cpu_idle_dead()).
Now that Xen has been fixed, and the arch-specific implementations of arch_cpu_idle_dead() also don't return, give it a __noreturn attribute.
This will cause the compiler to complain if an arch-specific implementation might return. It also improves code generation for both caller and callee.
Also fixes the following warning:
vmlinux.o: warning: objtool: do_idle+0x25f: unreachable instruction
Reported-by: Paul E. McKenney <[email protected]> Tested-by: Paul E. McKenney <[email protected]> Link: https://lore.kernel.org/r/60d527353da8c99d4cf13b6473131d46719ed16d.1676358308.git.jpoimboe@kernel.org Signed-off-by: Josh Poimboeuf <[email protected]>
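Editor's note: the caller-side benefit shows up in do_idle(): once arch_cpu_idle_dead() is __noreturn, both the compiler and objtool know that nothing after the call can execute. A condensed sketch of the offline branch in kernel/sched/idle.c:

    /* Inside do_idle(), on a CPU that has been marked offline: */
    if (cpu_is_offline(cpu)) {
            tick_nohz_idle_stop_tick();
            cpuhp_report_idle_dead();
            arch_cpu_idle_dead();   /* __noreturn: no code after this is reachable */
    }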
|
| #
b40c7d6d |
| 16-Feb-2023 |
Josh Poimboeuf <[email protected]> |
arm/cpu: Add unreachable() to arch_cpu_idle_dead()
arch_cpu_idle_dead() doesn't return. Make that visible to the compiler with an unreachable() code annotation.
Link: https://lkml.kernel.org/r/20230216183851.s5bnvniomq44rytu@treble Signed-off-by: Josh Poimboeuf <[email protected]>
|
|
Revision tags: v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4 |
|
| #
08a56e07 |
| 12-Jan-2023 |
Peter Zijlstra <[email protected]> |
arm, smp: Remove trace_.*_rcuidle() usage
None of these functions should ever be run with RCU disabled anymore.
Specifically, do_handle_IPI() is only called from handle_IPI() which explicitly does irq_enter()/irq_exit() which ensures RCU is watching.
The problem with smp_cross_call() was, per commit description:
7c64cc0531fa ("arm: Use _rcuidle for smp_cross_call() tracepoints")
... that cpuidle_enter_state_coupled() already had RCU disabled, but that's long been fixed by commit:
1098582a0f6c ("sched,idle,rcu: Push rcu_idle deeper into the idle path")
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Tested-by: Tony Lindgren <[email protected]> Tested-by: Ulf Hansson <[email protected]> Reviewed-by: Ulf Hansson <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Acked-by: Frederic Weisbecker <[email protected]> Link: https://lore.kernel.org/r/[email protected]
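Editor's note: concretely, this turns the _rcuidle tracepoint variants back into the plain ones; for example, at the smp_cross_call() call site (a sketch, not the literal hunk):

    /* Before: RCU might not have been watching at this point. */
    trace_ipi_raise_rcuidle(target, ipi_types[ipinr]);

    /* After: handle_IPI() does irq_enter()/irq_exit(), and cpuidle no longer
     * calls in with RCU disabled, so the ordinary tracepoint is safe. */
    trace_ipi_raise(target, ipi_types[ipinr]);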
|
|
Revision tags: v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2 |
|
| #
8fc0b333 |
| 17-Oct-2022 |
Guilherme G. Piccoli <[email protected]> |
ARM: 9257/1: Disable FIQs (but not IRQs) on CPUs shutdown paths
Currently the regular CPU shutdown path for ARM disables IRQs/FIQs in the secondary CPUs - smp_send_stop() calls ipi_cpu_stop(), which is responsible for that. IRQs are architecturally masked when we take an interrupt, but FIQs are higher priority than IRQs, hence they aren't masked. With that said, it makes sense to disable FIQs here, but there's no need for (re-)disabling IRQs.
More than that: there is an alternative path for disabling CPUs, in the form of the function crash_smp_send_stop(), which is used for the kexec/panic path. This function relies on an SMP call that also triggers a busy-wait loop [at machine_crash_nonpanic_core()], but without disabling FIQs. This might lead to odd scenarios, like early interrupts in the boot of the kexec'd kernel or even interrupts in secondary "disabled" CPUs while the main one still works in the panic path and assumes all secondary CPUs are (really!) off.
So, let's disable FIQs in both paths and *not* disable IRQs a second time, since they are already masked in both paths by the architecture. This way, we keep both CPU quiesce paths consistent and safe.
Cc: Marc Zyngier <[email protected]> Cc: Michael Kelley <[email protected]> Signed-off-by: Guilherme G. Piccoli <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
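Editor's note: put together, the stop handler ends up disabling FIQs only, since IRQs are already masked by virtue of running in interrupt context. A condensed sketch of ipi_cpu_stop() along those lines (not the exact code; the locked pr_crit/dump_stack block is omitted):

    static void ipi_cpu_stop(unsigned int cpu)
    {
            set_cpu_online(cpu, false);

            /* IRQs are architecturally masked here; FIQs are not, so mask
             * them explicitly before parking the CPU. */
            local_fiq_disable();

            while (1) {
                    cpu_relax();
                    wfe();
            }
    }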
|
|
Revision tags: v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8 |
|
| #
787dbea1 |
| 21-Jul-2022 |
Ben Dooks <[email protected]> |
profile: setup_profiling_timer() is mostly not implemented
The setup_profiling_timer() is mostly unimplemented by many architectures. In many places it isn't guarded by CONFIG_PROFILE, which is needed for it to be used. Make it a weak symbol in kernel/profile.c and remove the 'return -EINVAL' implementations from the kernel.
There are a couple of architectures which do return 0 from the setup_profiling_timer() function but they don't seem to do anything else with it. To keep the /proc compatibility for now, leave these for a future update or removal.
On ARM, this fixes the following sparse warning:

arch/arm/kernel/smp.c:793:5: warning: symbol 'setup_profiling_timer' was not declared. Should it be static?
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ben Dooks <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
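Editor's note: the change itself is small: one weak default in kernel/profile.c takes the place of the per-architecture stubs (sketch of the resulting definition):

    /* Architectures that really support it override this weak symbol. */
    int __weak setup_profiling_timer(unsigned int multiplier)
    {
            return -EINVAL;
    }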
|
|
Revision tags: v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2 |
|
| #
57a42043 |
| 24-Jan-2022 |
Ard Biesheuvel <[email protected]> |
ARM: drop pointless SMP check on secondary startup path
Only SMP systems use the secondary startup path by definition, so there is no need for SMP conditionals there.
Signed-off-by: Ard Biesheuvel <[email protected]>
|
|
Revision tags: v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2 |
|
| #
4a2f57ac |
| 15-Nov-2021 |
Ard Biesheuvel <[email protected]> |
ARM: 9158/1: leave it to core code to manage thread_info::cpu
Since commit bcf9033e5449 ("sched: move CPU field back into thread_info if THREAD_INFO_IN_TASK=y"), the CPU field in thread_info went back to being managed by the core code, so we no longer have to keep it in sync in arch code.
While at it, mark THREAD_INFO_IN_TASK as done for ARM in the documentation.
Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
| #
9c46929e |
| 24-Nov-2021 |
Ard Biesheuvel <[email protected]> |
ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems
On UP systems, only a single task can be 'current' at the same time, which means we can use a global variable to track it. This means we can also enable THREAD_INFO_IN_TASK for those systems, as in that case, thread_info is accessed via current rather than the other way around, removing the need to store thread_info at the base of the task stack. This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP systems as well.
To partially mitigate the performance overhead of this arrangement, use an ADD/ADD/LDR sequence with the appropriate PC-relative group relocations to load the value of current when needed. This means that accessing current will still only require a single load as before, avoiding the need for a literal to carry the address of the global variable in each function. However, accessing thread_info will now require this load as well.
Acked-by: Linus Walleij <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Tested-by: Marc Zyngier <[email protected]> Tested-by: Vladimir Murzin <[email protected]> # ARMv7M
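Editor's note: on a UP kernel the 'current' pointer can indeed be a single global, roughly like this (a sketch; the variable name and the exact configuration conditions are assumptions, see arch/arm/include/asm/current.h for the real logic):

    /* One CPU, one task executing at a time: track it in a global. */
    extern struct task_struct *__current;

    static __always_inline struct task_struct *get_current(void)
    {
            return __current;
    }

    #define current get_current()

    /* The context-switch code then updates __current instead of relying on
     * thread_info at the base of the task stack. */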
|
|
Revision tags: v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5 |
|
| #
08572cd4 |
| 05-Oct-2021 |
Ard Biesheuvel <[email protected]> |
ARM: remove some dead code
This code appears to be no longer used so let's get rid of it.
Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Acked-by: Linus Walleij <[email protected]> Tested-by: Keith Packard <[email protected]> Tested-by: Marc Zyngier <[email protected]> Tested-by: Vladimir Murzin <[email protected]> # ARMv7M
|
|
Revision tags: v5.15-rc4, v5.15-rc3, v5.15-rc2 |
|
| #
18ed1c01 |
| 18-Sep-2021 |
Ard Biesheuvel <[email protected]> |
ARM: smp: Enable THREAD_INFO_IN_TASK
Now that we no longer rely on thread_info living at the base of the task stack to be able to access the 'current' pointer, we can wire up the generic support for moving thread_info into the task struct itself.
Note that this requires us to update the cpu field in thread_info explicitly, now that the core code no longer does so. Ideally, we would switch the percpu code to access the cpu field in task_struct instead, but this unleashes #include circular dependency hell.
Co-developed-by: Keith Packard <[email protected]> Signed-off-by: Keith Packard <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]>
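Editor's note: with THREAD_INFO_IN_TASK, thread_info stops living at the base of the kernel stack and becomes the first member of task_struct, so thread_info is reached from 'current' rather than the other way around. A sketch of the generic arrangement:

    struct task_struct {
            /* Must stay first so the two pointers are interchangeable. */
            struct thread_info      thread_info;
            /* remaining task_struct fields follow */
    };

    /* Generic definition used when THREAD_INFO_IN_TASK is selected: */
    #define current_thread_info() ((struct thread_info *)current)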
|
| #
50596b75 |
| 18-Sep-2021 |
Ard Biesheuvel <[email protected]> |
ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user space, we can use it to keep the 'current' pointer while running in the kernel. This removes the need to access it via thread_info, which is located at the base of the stack, but will be moved out of there in a subsequent patch.
Use the __builtin_thread_pointer() helper when available - this will help GCC understand that reloading the value within the same function is not necessary, even when using the per-task stack protector (which also generates accesses via the TLS register). For example, the generated code below loads TPIDRURO only once, and uses it to access both the stack canary and the preempt_count fields.
<do_one_initcall>:
       e92d 41f0       stmdb   sp!, {r4, r5, r6, r7, r8, lr}
       ee1d 4f70       mrc     15, 0, r4, cr13, cr0, {3}
       4606            mov     r6, r0
       b094            sub     sp, #80              ; 0x50
       f8d4 34e8       ldr.w   r3, [r4, #1256]      ; 0x4e8   <- stack canary
       9313            str     r3, [sp, #76]        ; 0x4c
       f8d4 8004       ldr.w   r8, [r4, #4]                   <- preempt count
Co-developed-by: Keith Packard <[email protected]> Signed-off-by: Keith Packard <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]>
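Editor's note: a sketch of the mechanism described above: 'current' is written into TPIDRURO on context switch and read back through the coprocessor register, or via __builtin_thread_pointer() when the compiler provides it. Illustrative only; the real helpers live in arch/arm/include/asm/current.h:

    static inline void set_current(struct task_struct *cur)
    {
            /* TPIDRURO is CP15 c13, c0, 3: the user read-only thread ID register. */
            asm volatile("mcr p15, 0, %0, c13, c0, 3" : : "r" (cur));
    }

    static __always_inline struct task_struct *get_current(void)
    {
            struct task_struct *cur;

    #if __has_builtin(__builtin_thread_pointer)
            /* Lets the compiler merge repeated loads within one function. */
            cur = __builtin_thread_pointer();
    #else
            asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));
    #endif
            return cur;
    }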
|
| #
19f29aeb |
| 18-Sep-2021 |
Keith Packard <[email protected]> |
ARM: smp: Pass task to secondary_start_kernel
This avoids needing to compute the task pointer in this function, which will no longer be possible once we move thread_info off the stack.
Signed-off-by: Keith Packard <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]>
|
|
Revision tags: v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2 |
|
| #
85e3e7fb |
| 15-Jul-2021 |
John Ogness <[email protected]> |
printk: remove NMI tracking
All NMI contexts are handled the same as the safe context: store the message and defer printing. There is no need to have special NMI context tracking for this. Using in_nmi() is enough.
There are several parts of the kernel that are manually calling into the printk NMI context tracking in order to cause general printk deferred printing:
arch/arm/kernel/smp.c
arch/powerpc/kexec/crash.c
kernel/trace/trace.c
For arm/kernel/smp.c and powerpc/kexec/crash.c, provide a new function pair printk_deferred_enter/exit that explicitly achieves the same objective.
For ftrace, remove the printk context manipulation completely. It was added in commit 03fc7f9c99c1 ("printk/nmi: Prevent deadlock when accessing the main log buffer in NMI"). The purpose was to enforce storing messages directly into the ring buffer even in NMI context. It really should have only modified the behavior in NMI context. There is no need for a special behavior any longer. All messages are always stored directly now. The console deferring is handled transparently in vprintk().
Signed-off-by: John Ogness <[email protected]> [[email protected]: Remove special handling in ftrace.c completely. Signed-off-by: Petr Mladek <[email protected]> Link: https://lore.kernel.org/r/[email protected]
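Editor's note: for the arm/smp.c case, the replacement reads roughly as follows: the backtrace IPI handler wraps its printing in the new explicit deferred-printing section instead of the old NMI context tracking (a sketch; the surrounding handler code is omitted):

    /* Old: printk_nmi_enter()/printk_nmi_exit() around the dump. */

    /* New: */
    printk_deferred_enter();
    nmi_cpu_backtrace(get_irq_regs());
    printk_deferred_exit();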
|
|
Revision tags: v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2 |
|
| #
f1a0a376 |
| 12-May-2021 |
Valentin Schneider <[email protected]> |
sched/core: Initialize the idle task with preemption disabled
As pointed out by commit
de9b8f5dcbd9 ("sched: Fix crash trying to dequeue/enqueue the idle thread")
init_idle() can and will be invoked more than once on the same idle task. At boot time, it is invoked for the boot CPU thread by sched_init(). Then smp_init() creates the threads for all the secondary CPUs and invokes init_idle() on them.
As the hotplug machinery brings the secondaries to life, it will issue calls to idle_thread_get(), which itself invokes init_idle() yet again. In this case it's invoked twice more per secondary: at _cpu_up(), and at bringup_cpu().
Given smp_init() already initializes the idle tasks for all *possible* CPUs, no further initialization should be required. Now, removing init_idle() from idle_thread_get() exposes some interesting expectations with regards to the idle task's preempt_count: the secondary startup always issues a preempt_disable(), requiring some reset of the preempt count to 0 between hot-unplug and hotplug, which is currently served by idle_thread_get() -> idle_init().
Given the idle task is supposed to have preemption disabled once and never see it re-enabled, it seems that what we actually want is to initialize its preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove init_idle() from idle_thread_get().
Secondary startups were patched via coccinelle:
@begone@
@@

-preempt_disable();
...
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
Signed-off-by: Valentin Schneider <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
|
Revision tags: v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3 |
|
| #
a4b1b548 |
| 10-Jan-2021 |
Wolfram Sang (Renesas) <[email protected]> |
ARM: 9047/1: smp: remove unused variable
Not used anymore after refactoring:
arch/arm/kernel/smp.c: In function ‘show_ipi_list’:
arch/arm/kernel/smp.c:543:16: warning: variable ‘irq’ set but not used [-Wunused-but-set-variable]
  543 |         unsigned int irq;
Fixes: 88c637748e31 ("ARM: smp: Use irq_desc_kstat_cpu() in show_ipi_list()") Signed-off-by: Wolfram Sang <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Marc Zyngier <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7 |
|
| #
27bde183 |
| 30-Nov-2020 |
Anshuman Khandual <[email protected]> |
ARM: 9033/1: arm/smp: Drop the macro S(x,s)
Mapping between IPI type index and its string is direct without requiring an additional offset. Hence the existing macro S(x, s) is now redundant and can just be dropped. This also makes the code clean and simple.
Cc: Marc Zyngier <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Russell King <[email protected]>
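Editor's note: before and after, abridged to two entries; the strings match those visible in the /proc/interrupts excerpt further up:

    /* Before: an indirection macro that only hid a designated initializer. */
    #define S(x, s)        [x] = s

    static const char *ipi_types[NR_IPI] = {
            S(IPI_WAKEUP, "CPU wakeup interrupts"),
            S(IPI_RESCHEDULE, "Rescheduling interrupts"),
    };

    /* After: write the initializers directly. */
    static const char *ipi_types[NR_IPI] = {
            [IPI_WAKEUP]            = "CPU wakeup interrupts",
            [IPI_RESCHEDULE]        = "Rescheduling interrupts",
    };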
|
| #
88c63774 |
| 10-Dec-2020 |
Thomas Gleixner <[email protected]> |
ARM: smp: Use irq_desc_kstat_cpu() in show_ipi_list()
The irq descriptor is already there, no need to look it up again.
Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
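Editor's note: the per-CPU counter read in show_ipi_list() goes from an irq-number lookup back into the descriptor to a direct descriptor access, roughly:

    /* Before: resolve the irq number, then look the descriptor up again. */
    unsigned int irq = irq_desc_get_irq(ipi_desc[i]);
    seq_printf(p, "%10u ", kstat_irqs_cpu(irq, cpu));

    /* After: use the descriptor that is already at hand. */
    seq_printf(p, "%10u ", irq_desc_kstat_cpu(ipi_desc[i], cpu));

This is also what left the local irq variable unused, the warning addressed by "ARM: 9047/1: smp: remove unused variable" above.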
|
|
Revision tags: v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7 |
|
| #
22038704 |
| 25-Sep-2020 |
Marc Zyngier <[email protected]> |
ARM: Handle no IPI being registered in show_ipi_list()
As SMP-on-UP is a valid configuration on 32bit ARM, do not assume that IPIs are populated in show_ipi_list().
Reported-by: Guillaume Tucker <[email protected]> Reported-by: kernelci.org bot <[email protected]> Tested-by: Guillaume Tucker <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
|
|
Revision tags: v5.9-rc6 |
|
| #
ac15a54e |
| 18-Sep-2020 |
Marc Zyngier <[email protected]> |
arm: Move ipi_teardown() to a CONFIG_HOTPLUG_CPU section
ipi_teardown() is only used when CONFIG_HOTPLUG_CPU is enabled. Move the function to a location guarded by this config option.
Signed-off-by: Marc Zyngier <[email protected]>
|