Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7

# 1a616c2f | 22-Dec-2023 | Kent Overstreet <[email protected]>
lockdep: lockdep_set_notrack_class()

Add a new helper to disable lockdep tracking entirely for a given class.

This is needed for bcachefs, which takes too many btree node locks for lockdep to track. Instead, we have a single lockdep_map for "btree_trans has any btree nodes locked", which makes more sense given that we have centralized lock management and a cycle detector.

Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
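[ed: a minimal usage sketch of the new helper, modeled on the bcachefs case described above; the struct and init function are illustrative, not the merged code]

#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Illustrative stand-in for a btree node whose lock lockdep must not track. */
struct btree_node_ex {
	struct mutex lock;
};

static void btree_node_init_ex(struct btree_node_ex *b)
{
	mutex_init(&b->lock);
	/* The helper added here: acquisitions of this lock's class
	 * become invisible to lockdep. */
	lockdep_set_notrack_class(&b->lock);
}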
Revision tags: v6.7-rc6

# 99bac366 | 11-Dec-2023 | Kent Overstreet <[email protected]>
lockdep: move held_lock to lockdep_types.h

held_lock is embedded in task_struct, and we don't want sched.h pulling in all of lockdep.h.

Signed-off-by: Kent Overstreet <[email protected]>
Acked-by: Waiman Long <[email protected]>
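[ed: an illustrative sketch (not kernel code) of why the move is needed: a by-value member requires the complete type at the point of use, so embedding held_lock in task_struct forces its definition into whatever header defines task_struct]

/* Stand-in types; the real ones live in lockdep_types.h and sched.h. */
struct held_lock_ex {
	unsigned long acquire_ip;
};

struct task_ex {
	int lockdep_depth;
	struct held_lock_ex held_locks[48];	/* embedded, not a pointer:
						 * full definition required */
};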
Revision tags: v6.7-rc5, v6.7-rc4, v6.7-rc3

# 18caaeda | 26-Nov-2023 | Christophe JAILLET <[email protected]>
locking/lockdep: Slightly reorder 'struct lock_class' to save some memory

Based on pahole, 2 holes can be combined in 'struct lock_class'. This saves 8 bytes in the structure on my x86_64.

On an x86_64 configured with allmodconfig, this saves ~64kb of memory in 'kernel/locking/lockdep.o':

            text       data         bss         dec  filename
Before:  102,501  1,912,490  11,531,636  13,546,627  kernel/locking/lockdep.o
After:   102,181  1,912,490  11,466,100  13,480,771  kernel/locking/lockdep.o

because of:

	struct lock_class lock_classes[MAX_LOCKDEP_KEYS];

After the reorder, pahole gives:

	struct lock_class {
		struct hlist_node          hash_entry;            /*     0    16 */
		struct list_head           lock_entry;            /*    16    16 */
		struct list_head           locks_after;           /*    32    16 */
		struct list_head           locks_before;          /*    48    16 */
		/* --- cacheline 1 boundary (64 bytes) --- */
		const struct lockdep_subclass_key  * key;         /*    64     8 */
		lock_cmp_fn                cmp_fn;                /*    72     8 */
		lock_print_fn              print_fn;              /*    80     8 */
		unsigned int               subclass;              /*    88     4 */
		unsigned int               dep_gen_id;            /*    92     4 */
		long unsigned int          usage_mask;            /*    96     8 */
		const struct lock_trace  * usage_traces[10];      /*   104    80 */
		/* --- cacheline 2 boundary (128 bytes) was 56 bytes ago --- */
		const char               * name;                  /*   184     8 */
		/* --- cacheline 3 boundary (192 bytes) --- */
		int                        name_version;          /*   192     4 */
		u8                         wait_type_inner;       /*   196     1 */
		u8                         wait_type_outer;       /*   197     1 */
		u8                         lock_type;             /*   198     1 */

		/* XXX 1 byte hole, try to pack */

		long unsigned int          contention_point[4];   /*   200    32 */
		long unsigned int          contending_point[4];   /*   232    32 */

		/* size: 264, cachelines: 5, members: 18 */
		/* sum members: 263, holes: 1, sum holes: 1 */
		/* last cacheline: 8 bytes */
	};

Signed-off-by: Christophe JAILLET <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Waiman Long <[email protected]>
Link: https://lore.kernel.org/r/801258371fc4101f96495a5aaecef638d6cbd8d3.1700988869.git.christophe.jaillet@wanadoo.fr
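[ed: the mechanism generalizes; a small compilable sketch (not the kernel struct) of how grouping small members merges padding holes]

#include <stdio.h>

struct before_ex {	/* two holes: 7 bytes after 'b', 7 after 'd' */
	void *a;
	char  b;
	void *c;
	char  d;
};			/* typically 32 bytes on x86_64 */

struct after_ex {	/* small members adjacent: one 6-byte tail hole */
	void *a;
	void *c;
	char  b;
	char  d;
};			/* typically 24 bytes on x86_64 */

int main(void)
{
	printf("before=%zu after=%zu\n",
	       sizeof(struct before_ex), sizeof(struct after_ex));
	return 0;
}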
Revision tags: v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2

# eb1cfd09 | 09-May-2023 | Kent Overstreet <[email protected]>
lockdep: Add lock_set_cmp_fn() annotation

This implements a new interface to lockdep, lock_set_cmp_fn(), for defining a custom ordering when taking multiple locks of the same class.

This is an alternative to subclasses, but cannot fully replace them, since subclasses allow lock hierarchies with other classes intertwined, while this relies on pure class nesting.

Specifically, if A is our nesting class then:

	A/0 <- B <- A/1

Would be a valid lock order with subclasses (each subclass really is a full class from the validation PoV) but not with this annotation, which requires all nesting to be consecutive.

Example output:

| ============================================
| WARNING: possible recursive locking detected
| 6.2.0-rc8-00003-g7d81e591ca6a-dirty #15 Not tainted
| --------------------------------------------
| kworker/14:3/938 is trying to acquire lock:
| ffff8880143218c8 (&b->lock l=0 0:2803368){++++}-{3:3}, at: bch_btree_node_get.part.0+0x81/0x2b0
|
| but task is already holding lock:
| ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0
| and the lock comparison function returns 1:
|
| other info that might help us debug this:
| Possible unsafe locking scenario:
|
| CPU0
| ----
| lock(&b->lock l=1 1048575:9223372036854775807);
| lock(&b->lock l=0 0:2803368);
|
| *** DEADLOCK ***
|
| May be due to missing lock nesting notation
|
| 3 locks held by kworker/14:3/938:
| #0: ffff888005ea9d38 ((wq_completion)bcache){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
| #1: ffff8880098c3e70 ((work_completion)(&cl->work)#3){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
| #2: ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0

[peterz: extended changelog]
Signed-off-by: Kent Overstreet <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
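[ed: a sketch of applying the annotation, loosely modeled on the bcache btree-lock conversion from this series; the struct, fields, and helper names are illustrative]

#include <linux/lockdep.h>
#include <linux/rwsem.h>
#include <linux/printk.h>

struct btree_ex {
	struct rw_semaphore	lock;
	unsigned		level;	/* higher levels locked first */
	u64			key;
};

#define cmp_int_ex(l, r)	(((l) > (r)) - ((l) < (r)))

/* <0: a before b, 0: no defined order, >0: b before a. lockdep warns
 * when same-class nesting disagrees with this function. */
static int btree_lock_cmp_fn_ex(const struct lockdep_map *_a,
				const struct lockdep_map *_b)
{
	const struct btree_ex *a = container_of(_a, struct btree_ex, lock.dep_map);
	const struct btree_ex *b = container_of(_b, struct btree_ex, lock.dep_map);

	return -cmp_int_ex(a->level, b->level) ?:
		cmp_int_ex(a->key, b->key);
}

/* Printed after the lock name in splats, cf. "l=0 0:2803368" above. */
static void btree_lock_print_fn_ex(const struct lockdep_map *map)
{
	const struct btree_ex *b = container_of(map, struct btree_ex, lock.dep_map);

	printk(KERN_CONT " l=%u %llu", b->level, (unsigned long long)b->key);
}

static void btree_init_ex(struct btree_ex *b)
{
	init_rwsem(&b->lock);
	lock_set_cmp_fn(&b->lock, btree_lock_cmp_fn_ex, btree_lock_print_fn_ex);
}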
Revision tags: v6.4-rc1

# 0cce06ba | 25-Apr-2023 | Peter Zijlstra <[email protected]>
debugobjects,locking: Annotate debug_object_fill_pool() wait type violation

There is an explicit wait-type violation in debug_object_fill_pool() for PREEMPT_RT=n kernels which allows them to more easily fill the object pool and reduce the chance of allocation failures.

Lockdep's wait-type checks are designed to check the PREEMPT_RT locking rules even for PREEMPT_RT=n kernels and object to this, so create a lockdep annotation to allow this to stand.

Specifically, create a 'lock' type that overrides the inner wait-type while it is held -- allowing one to temporarily raise it, such that the violation is hidden.

Reported-by: Vlastimil Babka <[email protected]>
Reported-by: Qi Zheng <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Qi Zheng <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
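[ed: a sketch of the resulting annotation as used by debugobjects; the macro and function names follow this patch, but check them against your kernel version]

#include <linux/lockdep.h>

/* A 'lock' whose only job is to override the inner wait type while held. */
static DEFINE_WAIT_OVERRIDE_MAP(fill_pool_map_ex, LD_WAIT_SLEEP);

static void fill_pool_ex(void)
{
	/* Raise the wait type to LD_WAIT_SLEEP for this region so the
	 * known, intentional violation below goes unreported. */
	lock_map_acquire_try(&fill_pool_map_ex);

	/* ... GFP_ATOMIC refill of the object pool, which would
	 * otherwise trip the wait-type checks on PREEMPT_RT=n ... */

	lock_map_release(&fill_pool_map_ex);
}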
Revision tags: v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6

# a2e05ddd | 11-Aug-2021 | Zhouyi Zhou <[email protected]>
lockdep: Improve comments in wait-type checks

Comments in the wait-type checks are improved by mentioning the PREEMPT_RT kernel configuration option.

Signed-off-by: Zhouyi Zhou <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2

# 93d0955e | 12-May-2021 | Ingo Molnar <[email protected]>
locking: Fix comment typos
A few snuck through.
Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10

# dfd5e3f5 | 09-Dec-2020 | Peter Zijlstra <[email protected]>
locking/lockdep: Mark local_lock_t

The local_lock_t's are special because they cannot form IRQ inversions; make sure we can tell them apart from the rest of the locks.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
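[ed: the marking happens at map-init time via a dedicated lock type; a sketch modeled on the local_lock init path -- treat the exact signature as an assumption]

#include <linux/lockdep.h>

struct local_lock_ex {
	struct lockdep_map dep_map;
};

static void local_lock_init_ex(struct local_lock_ex *l)
{
	static struct lock_class_key key;

	/* LD_LOCK_PERCPU (vs. LD_LOCK_NORMAL) tells lockdep this lock
	 * cannot participate in IRQ inversions. */
	lockdep_init_map_type(&l->dep_map, "local_lock_ex", &key, 0,
			      LD_WAIT_CONFIG, LD_WAIT_INV, LD_LOCK_PERCPU);
}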
Revision tags: v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8

# 2bb8945b | 30-Sep-2020 | Peter Zijlstra <[email protected]>
lockdep: Fix usage_traceoverflow

Basically, print_lock_class_header()'s for loop is out of sync with the size of ->usage_traces[].

Also clean things up a bit while at it, to avoid such mishaps in the future.

Fixes: 23870f122768 ("locking/lockdep: Fix "USED" <- "IN-NMI" inversions")
Reported-by: Qian Cai <[email protected]>
Debugged-by: Boqun Feng <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Qian Cai <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
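[ed: the cleanup direction is the usual one -- derive the loop bound from the array itself so it cannot drift; an illustrative pattern, not the literal patch]

/* Sized from ->usage_traces[] directly, the header-printing loop stays
 * in sync however many usage states exist. print_lock_trace() stands in
 * for the existing helper in kernel/locking/lockdep.c. */
static void print_usage_traces_ex(struct lock_class *class)
{
	int bit;

	for (bit = 0; bit < ARRAY_SIZE(class->usage_traces); bit++) {
		if (class->usage_traces[bit])
			print_lock_trace(class->usage_traces[bit], 2);
	}
}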
Revision tags: v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6

# e885d5d9 | 16-Jul-2020 | Herbert Xu <[email protected]>
lockdep: Move list.h inclusion into lockdep.h

Currently lockdep_types.h includes list.h without actually using any of its macros or functions. All it needs are the type definitions which were moved into types.h long ago. This potentially causes inclusion loops because both are included by many core header files.

This patch moves the list.h inclusion into lockdep.h. Note that we could probably remove it completely but that could potentially result in compile failures should any end users not include list.h directly and also be unlucky enough to not get list.h via some other header file.

Reported-by: Petr Mladek <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Petr Mladek <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
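[ed: the shape of the change, as abbreviated file fragments; contents are illustrative]

/* include/linux/lockdep_types.h -- after this patch: no list.h; the
 * plain typedefs it needs already come via types.h. */

/* include/linux/lockdep.h -- now owns the inclusion, since it (and
 * some end users) actually use the list macros: */
#include <linux/list.h>
#include <linux/lockdep_types.h>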
# 5be542e9 | 16-Jul-2020 | Herbert Xu <[email protected]>
lockdep: Move list.h inclusion into lockdep.h

Currently lockdep_types.h includes list.h without actually using any of its macros or functions. All it needs are the type definitions which were moved into types.h long ago. This potentially causes inclusion loops because both are included by many core header files.

This patch moves the list.h inclusion into lockdep.h. Note that we could probably remove it completely but that could potentially result in compile failures should any end users not include list.h directly and also be unlucky enough to not get list.h via some other header file.

Reported-by: Petr Mladek <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Petr Mladek <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2

# c935cd62 | 17-Jun-2020 | Herbert Xu <[email protected]>
lockdep: Split header file into lockdep and lockdep_types

There is a header file inclusion loop between asm-generic/bug.h and linux/kernel.h. This causes potential compile failures depending on which file is included first. One way of breaking this loop is to stop spinlock_types.h from including lockdep.h. This patch splits lockdep.h into two files for this purpose.

Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Sergey Senozhatsky <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Acked-by: Petr Mladek <[email protected]>
Acked-by: Steven Rostedt (VMware) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
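[ed: the split follows the common types/API header pattern; abbreviated file fragments, contents illustrative]

/* include/linux/lockdep_types.h -- self-contained type definitions
 * (lock_class_key, lockdep_map, ...) with no heavyweight includes. */

/* include/linux/lockdep.h -- the full API, layered on the types: */
#include <linux/lockdep_types.h>

/* include/linux/spinlock_types.h -- needs only the types half now,
 * which breaks the bug.h <-> kernel.h inclusion loop: */
#include <linux/lockdep_types.h>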