Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13 |
d6104733 | 14-Jan-2025 | Christian Marangi <[email protected]>
spinlock: extend guard with spinlock_bh variants
Extend guard APIs with missing raw/spinlock_bh variants.
Signed-off-by: Christian Marangi <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
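For illustration, a minimal usage sketch of the new BH-disabling guard; the lock and counter names (stats_lock, stats) are invented and not part of the patch:

    #include <linux/cleanup.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);     /* illustrative lock */
    static unsigned long stats;             /* illustrative data */

    static void stats_inc(void)
    {
            /* Takes stats_lock with spin_lock_bh() and releases it with
             * spin_unlock_bh() automatically when the scope ends. */
            guard(spinlock_bh)(&stats_lock);
            stats++;
    }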
Revision tags: v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2 |
c793a628 | 28-May-2024 | Sean Christopherson <[email protected]>
sched/core: Drop spinlocks on contention iff kernel is preemptible
Use preempt_model_preemptible() to detect a preemptible kernel when deciding whether or not to reschedule in order to drop a contended spinlock or rwlock. Because PREEMPT_DYNAMIC selects PREEMPTION, kernels built with PREEMPT_DYNAMIC=y will yield contended locks even if the live preemption model is "none" or "voluntary". In short, make kernels with dynamically selected models behave the same as kernels with statically selected models.
Somewhat counter-intuitively, NOT yielding a lock can provide better latency for the relevant tasks/processes. E.g. KVM x86's mmu_lock, a rwlock, is often contended between an invalidation event (takes mmu_lock for write) and a vCPU servicing a guest page fault (takes mmu_lock for read). For _some_ setups, letting the invalidation task complete even if there is mmu_lock contention provides lower latency for *all* tasks, i.e. the invalidation completes sooner *and* the vCPU services the guest page fault sooner.
But even KVM's mmu_lock behavior isn't uniform, e.g. the "best" behavior can vary depending on the host VMM, the guest workload, the number of vCPUs, the number of pCPUs in the host, why there is lock contention, etc.
In other words, simply deleting the CONFIG_PREEMPTION guard (or doing the opposite and removing contention yielding entirely) needs to come with a big pile of data proving that changing the status quo is a net positive.
Opportunistically document this side effect of preempt=full, as yielding contended spinlocks can have significant, user-visible impact.
Fixes: c597bfddc9e9 ("sched: Provide Kconfig support for default dynamic preempt mode") Signed-off-by: Sean Christopherson <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Ankur Arora <[email protected]> Reviewed-by: Chen Yu <[email protected]> Link: https://lore.kernel.org/kvm/[email protected]
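As a rough sketch (not the patch verbatim), the described behaviour amounts to gating the contention check on the live preemption model instead of on the build-time configuration alone:

    /* Sketch only: yield a contended spinlock only when the currently
     * selected preemption model is preemptible, which also covers
     * PREEMPT_DYNAMIC kernels booted with preempt=none or voluntary. */
    static inline int spin_needbreak(spinlock_t *lock)
    {
            if (!preempt_model_preemptible())
                    return 0;

            return spin_is_contended(lock);
    }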
Revision tags: v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1 |
5f4c01f1 | 15-Jan-2024 | Leonardo Bras <[email protected]>
spinlock: Fix failing build for PREEMPT_RT
Since d1d71b30e1f85 ("sched.h: Move (spin|rwlock)_needbreak() to spinlock.h") the build fails for PREEMPT_RT, because there is no definition available of either spin_needbreak() or rwlock_needbreak().
When they were moved by the mentioned commit, they were placed inside a !PREEMPT_RT part of the code, putting them out of reach for an RT kernel.
Fix this by moving the code a few lines down so it can be reached by an RT build, where it can also make use of the *_is_contended() definitions added by spinlock_rt.h.
Fixes: d1d71b30e1f85 ("sched.h: Move (spin|rwlock)_needbreak() to spinlock.h") Signed-off-by: Leonardo Bras <[email protected]> Signed-off-by: Kent Overstreet <[email protected]> Acked-by: Waiman Long <[email protected]>
Revision tags: v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6 |
d1d71b30 | 11-Dec-2023 | Kent Overstreet <[email protected]>
sched.h: Move (spin|rwlock)_needbreak() to spinlock.h
This lets us kill the dependency on spinlock.h.
Signed-off-by: Kent Overstreet <[email protected]>
Revision tags: v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2 |
5431fdd2 | 17-Sep-2023 | Peter Zijlstra <[email protected]>
ptrace: Convert ptrace_attach() to use lock guards
Created as testing for the conditional guard infrastructure. Specifically this makes use of the following form:
    scoped_cond_guard (mutex_intr, return -ERESTARTNOINTR,
                       &task->signal->cred_guard_mutex) {
            ...
    }
    ...
    return 0;
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Oleg Nesterov <[email protected]> Link: https://lkml.kernel.org/r/20231102110706.568467727%40infradead.org
e4ab322f | 17-Sep-2023 | Peter Zijlstra <[email protected]>
cleanup: Add conditional guard support
Adds:
- DEFINE_GUARD_COND() / DEFINE_LOCK_GUARD_1_COND() to extend existing guards with conditional lock primitives, eg. mutex_trylock(), mutex_lock_interruptible().
nb. both primitives allow NULL 'locks', which cause the lock to fail (obviously).
- extends scoped_guard() to not take the body when the conditional guard 'fails'. eg.
scoped_guard (mutex_intr, &task->signal->cred_guard_mutex) { ... }
will only execute the body when the mutex is held.
- provides scoped_cond_guard(name, fail, args...); which extends scoped_guard() to run the 'fail' statement when the lock acquire fails.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/20231102110706.460851167%40infradead.org
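For illustration, a small sketch of the two forms added here; the mutex name (demo_mutex) is hypothetical:

    #include <linux/cleanup.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(demo_mutex);        /* hypothetical lock */

    static int demo(void)
    {
            /* Body is skipped entirely if mutex_lock_interruptible() fails. */
            scoped_guard (mutex_intr, &demo_mutex) {
                    /* ... */
            }

            /* Same acquire, but the 'fail' statement runs on failure. */
            scoped_cond_guard (mutex_intr, return -ERESTARTNOINTR, &demo_mutex) {
                    /* ... */
            }

            return 0;
    }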
Revision tags: v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4 |
54da6a09 | 26-May-2023 | Peter Zijlstra <[email protected]>
locking: Introduce __cleanup() based infrastructure
Use __attribute__((__cleanup__(func))) to build:
- simple auto-release pointers using __free()
- 'classes' with constructor and destructor semantics for scope-based resource management.
- lock guards based on the above classes.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/20230612093537.614161713%40infradead.org
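For illustration, a minimal sketch of an auto-release pointer and a lock guard built on this infrastructure; the names (demo_lock, demo_alloc) are invented:

    #include <linux/cleanup.h>
    #include <linux/mutex.h>
    #include <linux/slab.h>

    static DEFINE_MUTEX(demo_lock);         /* illustrative lock */

    static int demo_alloc(void)
    {
            /* __free(kfree): the buffer is freed automatically when it
             * goes out of scope, on every return path. */
            void *buf __free(kfree) = kzalloc(64, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;

            /* guard(mutex): unlocked automatically at end of scope. */
            guard(mutex)(&demo_lock);
            /* ... use buf under demo_lock ... */
            return 0;
    }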
Revision tags: v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6 |
4f64a6c9 | 27-Jan-2023 | James Clark <[email protected]>
perf: Fix perf_event_pmu_context serialization
Syzkaller triggered a WARN in put_pmu_ctx().
WARNING: CPU: 1 PID: 2245 at kernel/events/core.c:4925 put_pmu_ctx+0x1f0/0x278
This is because there is no locking around the access of "if (!epc->ctx)" in find_get_pmu_context() and when it is set to NULL in put_pmu_ctx().
The decrement of the reference count in put_pmu_ctx() also happens outside of the spinlock, leading to the possibility of this order of events, and the context being cleared in put_pmu_ctx(), after its refcount is non zero:
    CPU0                                    CPU1
    find_get_pmu_context()
      if (!epc->ctx) == false
                                            put_pmu_ctx()
                                              atomic_dec_and_test(&epc->refcount) == true
                                              epc->refcount == 0
      atomic_inc(&epc->refcount);
      epc->refcount == 1
                                            list_del_init(&epc->pmu_ctx_entry);
                                            epc->ctx = NULL;
Another issue is that WARN_ON for no active PMU events in put_pmu_ctx() is outside of the lock. If the perf_event_pmu_context is an embedded one, even after clearing it, it won't be deleted and can be re-used. So the warning can trigger. For this reason it also needs to be moved inside the lock.
The above warning is very quick to trigger on Arm by running these two commands at the same time:
    while true; do perf record -- ls; done
    while true; do perf record -- ls; done
[peterz: atomic_dec_and_raw_lock*()] Fixes: bd2756811766 ("perf: Rewrite core context handling") Reported-by: [email protected] Signed-off-by: James Clark <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Ravi Bangoria <[email protected]> Link: https://lore.kernel.org/r/[email protected]
Revision tags: v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3 |
2747b93e | 25-Aug-2022 | Sebastian Andrzej Siewior <[email protected]>
locking: Detect includes rwlock.h outside of spinlock.h
From: Michael S. Tsirkin <[email protected]>
The check for __LINUX_SPINLOCK_H within rwlock.h (and other files) detects the direct include of the header file if it is at the very beginning of the include section. If it is listed later then chances are high that spinlock.h was already included (including rwlock.h) and the additional listing of rwlock.h will not cause any failure.
On PREEMPT_RT this additional rwlock.h will lead to compile failures since it uses a different rwlock implementation.
Add __LINUX_INSIDE_SPINLOCK_H to spinlock.h and check for this instead of __LINUX_SPINLOCK_H to detect wrong includes. This will help detect direct includes of rwlock.h even without PREEMPT_RT enabled.
[ bigeasy: add remaining __LINUX_SPINLOCK_H user and rewrite commit description. ]
Signed-off-by: Michael S. Tsirkin <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
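A simplified sketch of the include-guard pattern described above:

    /* linux/spinlock.h: mark that nested lock headers are being pulled in
     * through the sanctioned path. */
    #define __LINUX_INSIDE_SPINLOCK_H
    #include <linux/rwlock.h>
    #undef __LINUX_INSIDE_SPINLOCK_H

    /* linux/rwlock.h: refuse direct inclusion. */
    #ifndef __LINUX_INSIDE_SPINLOCK_H
    # error "please don't include this file directly"
    #endif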
Revision tags: v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4 |
f948666d | 30-Nov-2021 | Marco Elver <[email protected]>
locking/barriers, kcsan: Add instrumentation for barriers
Adds the required KCSAN instrumentation for barriers if CONFIG_SMP. KCSAN supports modeling the effects of:
    smp_mb()
    smp_rmb()
    smp_wmb()
    smp_store_release()
Signed-off-by: Marco Elver <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
Revision tags: v5.16-rc3, v5.16-rc2, v5.16-rc1 |
f5d80614 | 09-Nov-2021 | Andy Shevchenko <[email protected]>
kernel.h: drop unneeded <linux/kernel.h> inclusion from other headers
Patch series "kernel.h further split", v5.
kernel.h is a collection of pieces that are unrelated to each other and are often used in compilation units that otherwise have little in common, especially when drivers need only one or two macro definitions from it.
This patch (of 7):
There is no evidence we need kernel.h inclusion in certain headers. Drop unneeded <linux/kernel.h> inclusion from other headers.
[[email protected]: bottom_half.h needs kernel] Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Brendan Higgins <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Will Deacon <[email protected]> Cc: Waiman Long <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Sakari Ailus <[email protected]> Cc: Laurent Pinchart <[email protected]> Cc: Mauro Carvalho Chehab <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Thorsten Leemhuis <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
Revision tags: v5.15, v5.15-rc7 |
f98a3dcc | 22-Oct-2021 | Arnd Bergmann <[email protected]>
locking: Remove spin_lock_flags() etc
parisc, ia64 and powerpc32 are the only remaining architectures that provide custom arch_{spin,read,write}_lock_flags() functions, which are meant to re-enable interrupts while waiting for a spinlock.
However, none of these can actually run into this codepath, because it is only called on architectures without CONFIG_GENERIC_LOCKBREAK, or when CONFIG_DEBUG_LOCK_ALLOC is set without CONFIG_LOCKDEP, and none of those combinations are possible on the three architectures.
Going back in the git history, it appears that arch/mn10300 may have been able to run into this code path, but there is a good chance that it never worked. On the architectures that still exist, it was already impossible to hit back in 2008 after the introduction of CONFIG_GENERIC_LOCKBREAK, and possibly earlier.
As this is all dead code, just remove it and the helper functions built around it. For arch/ia64, the inline asm could be cleaned up, but it seems safer to leave it untouched.
Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Helge Deller <[email protected]> # parisc Link: https://lore.kernel.org/r/[email protected]
Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6 |
342a9324 | 15-Aug-2021 | Thomas Gleixner <[email protected]>
locking/spinlock: Provide RT variant header: <linux/spinlock_rt.h>
Provide the necessary wrappers around the actual rtmutex based spinlock implementation.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
4f084ca7 | 15-Aug-2021 | Thomas Gleixner <[email protected]>
locking/spinlock: Split the lock types header, and move the raw types into <linux/spinlock_types_raw.h>
Move raw_spinlock into its own file. Prepare for RT 'sleeping spinlocks', to avoid header recursion, as RT locks require rtmutex.h, which in turn requires the raw spinlock types.
No functional change.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
Revision tags: v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1 |
33def849 | 22-Oct-2020 | Joe Perches <[email protected]>
treewide: Convert macro and uses of __section(foo) to __section("foo")
Use a more generic form for __section that requires quotes to avoid complications with clang and gcc differences.
Remove the quote operator # from compiler_attributes.h __section macro.
Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo") even if the __attribute__ has multiple list entry forms.
Conversion done using the script at:
https://lore.kernel.org/lkml/[email protected]/2-convert_section.pl
Signed-off-by: Joe Perches <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Miguel Ojeda <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
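For illustration, the shape of the conversion; the section name used here is arbitrary:

    /* Before: unquoted section name, stringified inside the old macro. */
    static int demo_flag __section(.data.demo);

    /* After: the caller passes a string literal directly. */
    static int demo_flag __section(".data.demo");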
Revision tags: v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2 |
c935cd62 | 17-Jun-2020 | Herbert Xu <[email protected]>
lockdep: Split header file into lockdep and lockdep_types
There is a header file inclusion loop between asm-generic/bug.h and linux/kernel.h. This causes potential compile failures depending on which file is included first. One way of breaking this loop is to stop spinlock_types.h from including lockdep.h. This patch splits lockdep.h into two files for this purpose.
Signed-off-by: Herbert Xu <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Acked-by: Petr Mladek <[email protected]> Acked-by: Steven Rostedt (VMware) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7 |
de8f5e4f | 21-Mar-2020 | Peter Zijlstra <[email protected]>
lockdep: Introduce wait-type checks
Extend lockdep to validate lock wait-type context.
The current wait-types are:
    LD_WAIT_FREE,   /* wait free, rcu etc.. */
    LD_WAIT_SPIN,   /* spin loops, raw_spinlock_t etc.. */
    LD_WAIT_CONFIG, /* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */
    LD_WAIT_SLEEP,  /* sleeping locks, mutex_t etc.. */
Where lockdep validates that the current lock (the one being acquired) fits in the current wait-context (as generated by the held stack).
This ensures that there is no attempt to acquire mutexes while holding spinlocks, to acquire spinlocks while holding raw_spinlocks and so on. In other words, it's a fancier might_sleep().
Obviously RCU made the entire ordeal more complex than a simple single value test because RCU can be acquired in (pretty much) any context and while it presents a context to nested locks it is not the same as it got acquired in.
Therefore it's necessary to split the wait_type into two values, one representing the acquire (outer) and one representing the nested context (inner). For most 'normal' locks these two are the same.
[ To make static initialization easier we have the rule that: .outer == INV means .outer == .inner; because INV == 0. ]
It further means that it's required to find the minimal .inner of the held stack to compare against the outer of the new lock; because while 'normal' RCU presents a CONFIG type to nested locks, if it is taken while already holding a SPIN type it obviously doesn't relax the rules.
Below is an example output generated by the trivial test code:
    raw_spin_lock(&foo);
    spin_lock(&bar);
    spin_unlock(&bar);
    raw_spin_unlock(&foo);
    [ BUG: Invalid wait context ]
    -----------------------------
    swapper/0/1 is trying to lock:
    ffffc90000013f20 (&bar){....}-{3:3}, at: kernel_init+0xdb/0x187
    other info that might help us debug this:
    1 lock held by swapper/0/1:
     #0: ffffc90000013ee0 (&foo){+.+.}-{2:2}, at: kernel_init+0xd1/0x187
The way to read it is to look at the new -{n:m} part in the lock description; -{3:3} for the attempted lock, and try and match that up to the held locks, which in this case is the one: -{2:2}.
This tells that the acquiring lock requires a more relaxed environment than presented by the lock stack.
Currently only the normal locks and RCU are converted, the rest of the lockdep users defaults to .inner = INV which is ignored. More conversions can be done when desired.
The check for spinlock_t nesting is not enabled by default. It's a separate config option for now as there are known problems which are currently being addressed. The config option makes it possible to identify these problems and to verify that the solutions found are indeed solving them.
The config switch will be removed and the checks will be permanently enabled once the vast majority of issues have been addressed.
[ bigeasy: Move LD_WAIT_FREE,… out of CONFIG_LOCKDEP to avoid compile failure with CONFIG_DEBUG_SPINLOCK + !CONFIG_LOCKDEP] [ tglx: Add the config option ]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2 |
27972765 | 26-Jul-2019 | Thomas Gleixner <[email protected]>
locking/spinlocks: Use CONFIG_PREEMPTION
CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same functionality which today depends on CONFIG_PREEMPT.
Adjust the comments in the locking code.
Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Steven Rostedt <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8 |
60ca1e5a | 22-Feb-2019 | Will Deacon <[email protected]>
mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors
Removing explicit calls to mmiowb() from driver code means that we must now call into the generic mmiowb_spin_{lock,unlock}() functions from the core spinlock code. In order to elide barriers following critical sections without any I/O writes, we also hook into the asm-generic I/O routines.
Acked-by: Linus Torvalds <[email protected]> Signed-off-by: Will Deacon <[email protected]>
Revision tags: v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1 |
ff93bca7 | 14-Aug-2018 | Cong Wang <[email protected]>
ila: make lockdep happy again
Previously, alloc_ila_locks() and bucket_table_alloc() call spin_lock_init() separately, therefore they have two different lock names and lock class keys. However, after commit b893281715ab ("ila: Call library function alloc_bucket_locks") they both call helper alloc_bucket_spinlocks() which now only has one lock name and lock class key. This causes a few bogus lockdep warnings as reported by syzbot.
Fix this by making alloc_bucket_locks() a macro and passing the declaration name as the lock name, along with a static lock class key, inside the macro.
Fixes: b893281715ab ("ila: Call library function alloc_bucket_locks") Reported-by: <[email protected]> Cc: Tom Herbert <[email protected]> Signed-off-by: Cong Wang <[email protected]> Signed-off-by: David S. Miller <[email protected]>
Revision tags: v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6 |
3d85b270 | 16-Jul-2018 | Andrea Parri <[email protected]>
locking/spinlock, sched/core: Clarify requirements for smp_mb__after_spinlock()
There are 11 interpretations of the requirements described in the header comment for smp_mb__after_spinlock(): one for each LKMM maintainer, and one currently encoded in the Cat file. Stick to the latter (until a more satisfactory solution is available).
This also reworks some snippets related to the barrier to illustrate the requirements and to link them to the idioms which are relied upon at its call sites.
Suggested-by: Boqun Feng <[email protected]> Signed-off-by: Andrea Parri <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
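For illustration, a sketch of the call-site shape this barrier exists for; the lock and flag names are invented:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);      /* illustrative */
    static bool demo_flag;

    static bool demo_check(void)
    {
            bool ret;

            spin_lock(&demo_lock);
            /* spin_lock() is only an ACQUIRE; this promotes the acquire to
             * a full barrier so that stores performed before taking the
             * lock are ordered against the loads below. */
            smp_mb__after_spinlock();
            ret = READ_ONCE(demo_flag);
            spin_unlock(&demo_lock);

            return ret;
    }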
Revision tags: v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1 |
ccfbb5be | 12-Jun-2018 | Anna-Maria Gleixner <[email protected]>
atomic: Add irqsave variant of atomic_dec_and_lock()
There are in-tree users of atomic_dec_and_lock() which must acquire the spin lock with interrupts disabled. To work around the lack of an irqsave variant of atomic_dec_and_lock() they use local_irq_save() at the call site. This causes extra code and, in some places, unnecessarily long interrupt-disabled sections. These places also need extra treatment for PREEMPT_RT due to the disconnect between the irq disabling and the lock function.
Implement the missing irqsave variant of the function.
Signed-off-by: Anna-Maria Gleixner <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/[email protected]
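For illustration, a usage sketch of the new variant; the object type, list and names below are invented, assuming the call takes (atomic, lock, flags) as at its existing callers:

    #include <linux/atomic.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct demo_obj {                       /* invented example object */
            atomic_t refcount;
            struct list_head node;
    };

    static DEFINE_SPINLOCK(demo_list_lock);

    static void demo_obj_put(struct demo_obj *obj)
    {
            unsigned long flags;

            /* Drops the reference and, only when it hits zero, takes the
             * lock with interrupts disabled, replacing an open-coded
             * local_irq_save() + atomic_dec_and_lock() pair. */
            if (!atomic_dec_and_lock_irqsave(&obj->refcount,
                                             &demo_list_lock, flags))
                    return;

            list_del(&obj->node);
            spin_unlock_irqrestore(&demo_list_lock, flags);
            kfree(obj);
    }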
Revision tags: v4.17, v4.17-rc7, v4.17-rc6 |
b7e4aade | 14-May-2018 | Andrea Parri <[email protected]>
locking/spinlocks: Document the semantics of spin_is_locked()
There appeared to be a certain, recurrent uncertainty concerning the semantics of spin_is_locked(), likely a consequence of the fact that this semantics remains undocumented or that it has been historically linked to the (likewise unclear) semantics of spin_unlock_wait().
A recent auditing [1] of the callers of the primitive confirmed that none of them are relying on particular ordering guarantees; document this semantics by adding a docbook header to spin_is_locked(). Also, describe behaviors specific to certain CONFIG_SMP=n builds.
[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2 https://marc.info/?l=linux-kernel&m=152042843808540&w=2 https://marc.info/?l=linux-kernel&m=152043346110262&w=2
Co-Developed-by: Andrea Parri <[email protected]> Co-Developed-by: Alan Stern <[email protected]> Co-Developed-by: David Howells <[email protected]> Signed-off-by: Andrea Parri <[email protected]> Signed-off-by: Alan Stern <[email protected]> Signed-off-by: David Howells <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Acked-by: Randy Dunlap <[email protected]> Cc: Akira Yokosawa <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Jade Alglave <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Luc Maranget <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2 |
d89c7035 | 28-Nov-2017 | Will Deacon <[email protected]>
locking/core: Remove break_lock field when CONFIG_GENERIC_LOCKBREAK=y
When CONFIG_GENERIC_LOCKBREAK=y, locking structures grow an extra int ->break_lock field which is used to implement raw_spin_is_contended() by setting the field to 1 when waiting on a lock and clearing it to zero when holding a lock. However, there are a few problems with this approach:
- There is a write-write race between a CPU successfully taking the lock (and subsequently writing break_lock = 0) and a waiter waiting on the lock (and subsequently writing break_lock = 1). This could result in a contended lock being reported as uncontended and vice-versa.
- On machines with store buffers, nothing guarantees that the writes to break_lock are visible to other CPUs at any particular time.
- READ_ONCE/WRITE_ONCE are not used, so the field is potentially susceptible to harmful compiler optimisations.
Consequently, the usefulness of this field is unclear and we'd be better off removing it and allowing architectures to implement raw_spin_is_contended() by providing a definition of arch_spin_is_contended(), as they can when CONFIG_GENERIC_LOCKBREAK=n.
Signed-off-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Sebastian Ott <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
92f36cca | 04-Dec-2017 | Tom Herbert <[email protected]>
spinlock: Add library function to allocate spinlock buckets array
Add two new library functions: alloc_bucket_spinlocks and free_bucket_spinlocks. These are used to allocate and free an array of spinlocks that are useful as locks for hash buckets. The interface specifies the maximum number of spinlocks in the array as well as a CPU multiplier to derive the number of spinlocks to allocate. The number allocated is rounded up to a power of two to make the array amenable to hash lookup.
Signed-off-by: Tom Herbert <[email protected]> Signed-off-by: David S. Miller <[email protected]>
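For illustration, a usage sketch assuming the interface takes the lock array, a mask, the maximum lock count, a per-CPU multiplier and GFP flags; the table names and sizes below are invented:

    #include <linux/gfp.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    static spinlock_t *demo_bucket_locks;
    static unsigned int demo_bucket_mask;

    static int demo_table_init(void)
    {
            /* At most 1024 locks, scaled by 4 per possible CPU, rounded up
             * to a power of two so a hash value can simply be masked. */
            return alloc_bucket_spinlocks(&demo_bucket_locks, &demo_bucket_mask,
                                          1024, 4, GFP_KERNEL);
    }

    static spinlock_t *demo_bucket_lock(u32 hash)
    {
            return &demo_bucket_locks[hash & demo_bucket_mask];
    }

    static void demo_table_exit(void)
    {
            free_bucket_spinlocks(demo_bucket_locks);
    }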