Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5
# d7fe143c | 24-Aug-2024 | Ahmed Ehab <[email protected]>
locking/lockdep: Avoid creating new name string literals in lockdep_set_subclass()
Syzbot reports a problem that a warning will be triggered while searching a lock class in look_up_lock_class().
The cause of the issue is that lockdep_set_subclass() creates and uses a new name instead of the existing one. This results in a lock instance having a different name pointer than the previously registered one stored in the lock class, and the WARN_ONCE() in look_up_lock_class() is triggered because of that.
To fix this, change lockdep_set_subclass() to reuse the existing name instead of creating a new one. Since no new name is created by lockdep_set_subclass(), the warning is avoided.
[boqun: Reword the commit log to state the correct issue]
Reported-by: <[email protected]> Fixes: de8f5e4f2dc1f ("lockdep: Introduce wait-type checks") Cc: [email protected] Signed-off-by: Ahmed Ehab <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Link: https://lore.kernel.org/lkml/[email protected]/
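As a rough illustration of the fix described above (a sketch of the idea, not necessarily the exact upstream macro), lockdep_set_subclass() can reuse the name already stored in the lock's dep_map rather than generating a new string literal, so every registration of the class sees the same name pointer:

    #define lockdep_set_subclass(lock, sub)                                 \
        lockdep_init_map_type(&(lock)->dep_map,                             \
                              (lock)->dep_map.name,  /* reuse, don't create */\
                              (lock)->dep_map.key, sub,                     \
                              (lock)->dep_map.wait_type_inner,              \
                              (lock)->dep_map.wait_type_outer,              \
                              (lock)->dep_map.lock_type)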
Revision tags: v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1
# a97b43fa | 18-Jul-2024 | Kent Overstreet <[email protected]>
lockdep: Add comments for lockdep_set_no{validate,track}_class()
Cc: Waiman Long <[email protected]> Signed-off-by: Kent Overstreet <[email protected]>
Revision tags: v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7
# 1a616c2f | 22-Dec-2023 | Kent Overstreet <[email protected]>
lockdep: lockdep_set_notrack_class()
Add a new helper to disable lockdep tracking entirely for a given class.
This is needed for bcachefs, which takes too many btree node locks for lockdep to track. Instead, we have a single lockdep_map for "btree_trans has any btree nodes locked", which makes more sense given that we have centralized lock management and a cycle detector.
Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Will Deacon <[email protected]> Cc: Waiman Long <[email protected]> Cc: Boqun Feng <[email protected]> Signed-off-by: Kent Overstreet <[email protected]>
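A minimal usage sketch (the struct and field names below are illustrative, not the actual bcachefs code): mark a lock class that is taken in numbers lockdep cannot reasonably track, so lockdep ignores it entirely.

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    struct cache_node {
        struct mutex lock;
    };

    static void cache_node_init(struct cache_node *n)
    {
        mutex_init(&n->lock);
        /* Many of these locks may be held at once; don't let lockdep track them. */
        lockdep_set_notrack_class(&n->lock);
    }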
# c5bcab75 | 20-Jun-2024 | Sebastian Andrzej Siewior <[email protected]>
locking/local_lock: Add local nested BH locking infrastructure.
Add local_lock_nested_bh() locking. It is based on local_lock_t and the naming follows the preempt_disable_nested() example.
For !PREEMPT_RT + !LOCKDEP it is a per-CPU annotation for locking assumptions based on local_bh_disable(). The macro is optimized away during compilation. For !PREEMPT_RT + LOCKDEP the local_lock_nested_bh() is reduced to the usual lock-acquire plus lockdep_assert_in_softirq() - ensuring that BH is disabled.
For PREEMPT_RT local_lock_nested_bh() acquires the specified per-CPU lock. It does not disable CPU migration because it relies on local_bh_disable() disabling CPU migration. With LOCKDEP it performs the usual lockdep checks as with !PREEMPT_RT. Due to include hell the softirq check has been moved to spinlock.c.
The intention is to use this locking in places where locking of a per-CPU variable relies on BH being disabled. Instead of treating disabled bottom halves as a big per-CPU lock, PREEMPT_RT can use this to reduce the locking scope to what actually needs protecting. A side effect is that it also documents the protection scope of the per-CPU variables.
Acked-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
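A minimal usage sketch with an illustrative per-CPU structure (the names are assumptions, not code from this patch): the nested-BH lock documents that this particular per-CPU data is what local_bh_disable() protects.

    #include <linux/bottom_half.h>
    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    struct pcpu_counters {
        local_lock_t bh_lock;
        u64          packets;
    };

    static DEFINE_PER_CPU(struct pcpu_counters, pcpu_counters) = {
        .bh_lock = INIT_LOCAL_LOCK(bh_lock),
    };

    static void count_packet(void)
    {
        local_bh_disable();
        /* Scope the protection to this per-CPU variable instead of "BH is off". */
        local_lock_nested_bh(&pcpu_counters.bh_lock);
        this_cpu_inc(pcpu_counters.packets);
        local_unlock_nested_bh(&pcpu_counters.bh_lock);
        local_bh_enable();
    }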
# c9d52fb3 | 31-May-2024 | Dan Williams <[email protected]>
PCI: Revert the cfg_access_lock lockdep mechanism
While the experiment did reveal that there are additional places that are missing the lock during secondary bus reset, one of the places that needs to take cfg_access_lock (pci_bus_lock()) is not prepared for lockdep annotation.
Specifically, pci_bus_lock() takes pci_dev_lock() recursively and is currently dependent on the fact that the device_lock() is marked lockdep_set_novalidate_class(&dev->mutex). Otherwise, without that annotation, pci_bus_lock() would need to use something like a new pci_dev_lock_nested() helper, a scheme to track a PCI device's depth in the topology, and a hope that the depth of a PCI tree never exceeds the max value for a lockdep subclass.
The alternative to ripping out the lockdep coverage would be to deploy a dynamic lock key for every PCI device. Unfortunately, there is evidence that increasing the number of keys that lockdep needs to track to be per-PCI-device is prohibitively expensive for something like the cfg_access_lock.
The main motivation for adding the annotation in the first place was to catch unlocked secondary bus resets, not necessarily catch lock ordering problems between cfg_access_lock and other locks. Solve that narrower problem with follow-on patches, and just do the targeted revert for now.
Link: https://lore.kernel.org/r/171711746402.1628941.14575335981264103013.stgit@dwillia2-xfh.jf.intel.com Fixes: 7e89efc6e9e4 ("PCI: Lock upstream bridge for pci_reset_function()") Reported-by: Imre Deak <[email protected]> Closes: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_134186v1/shard-dg2-1/igt@[email protected] Signed-off-by: Dan Williams <[email protected]> Signed-off-by: Bjorn Helgaas <[email protected]> Tested-by: Hans de Goede <[email protected]> Tested-by: Kalle Valo <[email protected]> Reviewed-by: Dave Jiang <[email protected]> Cc: Jani Saarinen <[email protected]>
# 7e89efc6 | 02-May-2024 | Dave Jiang <[email protected]>
PCI: Lock upstream bridge for pci_reset_function()
Fix a long-standing locking gap for missing pci_cfg_access_lock() while manipulating bridge reset registers and configuration during pci_reset_bus_function().
If there is an upstream bridge, lock it before locking the device itself. pci_dev_lock() calls pci_cfg_access_lock(), which blocks the writing of PCI config space by user space.
Add lockdep assertion via pci_dev->cfg_access_lock to verify pci_dev->block_cfg_access is set.
Co-developed-by: Dan Williams <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Dan Williams <[email protected]> Signed-off-by: Dave Jiang <[email protected]> [bhelgaas: commit log] Signed-off-by: Bjorn Helgaas <[email protected]>
Revision tags: v6.7-rc6
# 99bac366 | 11-Dec-2023 | Kent Overstreet <[email protected]>
lockdep: move held_lock to lockdep_types.h
held_lock is embedded in task_struct, and we don't want sched.h pulling in all of lockdep.h
Signed-off-by: Kent Overstreet <[email protected]> Acked-by: Waiman Long <[email protected]>
Revision tags: v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5
# ff4e538c | 04-Aug-2023 | Jakub Kicinski <[email protected]>
page_pool: add a lockdep check for recycling in hardirq
Page pool use in hardirq is prohibited; add debug checks to catch misuses. IIRC we previously discussed using DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns that people will have DEBUG_NET enabled in perf testing. I don't think anyone enables lockdep in perf testing, so use lockdep to avoid pushback and arguing :)
Acked-by: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Alexander Duyck <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2
# eb1cfd09 | 09-May-2023 | Kent Overstreet <[email protected]>
lockdep: Add lock_set_cmp_fn() annotation
This implements a new interface to lockdep, lock_set_cmp_fn(), for defining a custom ordering when taking multiple locks of the same class.
This is an alternative to subclasses, but cannot fully replace them, since subclasses allow lock hierarchies with other classes intertwined, while this relies on pure class nesting.
Specifically, if A is our nesting class then:
A/0 <- B <- A/1
Would be a valid lock order with subclasses (each subclass really is a full class from the validation PoV) but not with this annotation, which requires all nesting to be consecutive.
Example output:
| ============================================
| WARNING: possible recursive locking detected
| 6.2.0-rc8-00003-g7d81e591ca6a-dirty #15 Not tainted
| --------------------------------------------
| kworker/14:3/938 is trying to acquire lock:
| ffff8880143218c8 (&b->lock l=0 0:2803368){++++}-{3:3}, at: bch_btree_node_get.part.0+0x81/0x2b0
|
| but task is already holding lock:
| ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0
| and the lock comparison function returns 1:
|
| other info that might help us debug this:
| Possible unsafe locking scenario:
|
| CPU0
| ----
| lock(&b->lock l=1 1048575:9223372036854775807);
| lock(&b->lock l=0 0:2803368);
|
| *** DEADLOCK ***
|
| May be due to missing lock nesting notation
|
| 3 locks held by kworker/14:3/938:
| #0: ffff888005ea9d38 ((wq_completion)bcache){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
| #1: ffff8880098c3e70 ((work_completion)(&cl->work)#3){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
| #2: ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0
[peterz: extended changelog] Signed-off-by: Kent Overstreet <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
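A sketch of how the annotation can be used, with illustrative struct and function names and assuming the comparator/print callback signatures introduced here (const struct lockdep_map * arguments); this is not the actual bcache code.

    #include <linux/lockdep.h>
    #include <linux/mutex.h>
    #include <linux/printk.h>

    struct node {
        struct mutex lock;
        unsigned int level;
    };

    static int node_lock_cmp_fn(const struct lockdep_map *_a,
                                const struct lockdep_map *_b)
    {
        const struct node *a = container_of(_a, struct node, lock.dep_map);
        const struct node *b = container_of(_b, struct node, lock.dep_map);

        /* Negative/zero/positive like a comparator: lower levels lock first. */
        return a->level < b->level ? -1 : a->level > b->level ? 1 : 0;
    }

    static void node_lock_print_fn(const struct lockdep_map *map)
    {
        const struct node *n = container_of(map, struct node, lock.dep_map);

        printk(KERN_CONT " level=%u", n->level);
    }

    static void node_init(struct node *n, unsigned int level)
    {
        mutex_init(&n->lock);
        n->level = level;
        lock_set_cmp_fn(&n->lock, node_lock_cmp_fn, node_lock_print_fn);
    }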
Revision tags: v6.4-rc1
# 0cce06ba | 25-Apr-2023 | Peter Zijlstra <[email protected]>
debugobjects,locking: Annotate debug_object_fill_pool() wait type violation
There is an explicit wait-type violation in debug_object_fill_pool() for PREEMPT_RT=n kernels which allows them to more easily fill the object pool and reduce the chance of allocation failures.
Lockdep's wait-type checks are designed to check the PREEMPT_RT locking rules even for PREEMPT_RT=n kernels and object to this, so create a lockdep annotation to allow this to stand.
Specifically, create a 'lock' type that overrides the inner wait-type while it is held -- allowing one to temporarily raise it, such that the violation is hidden.
Reported-by: Vlastimil Babka <[email protected]> Reported-by: Qi Zheng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Qi Zheng <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4
# 0471db44 | 13-Jan-2023 | Boqun Feng <[email protected]>
locking/lockdep: Improve the deadlock scenario print for sync and read lock
Lock scenario print is always a weak spot of lockdep splats. Improvement can be made if we rework the dependency search and the error printing.
However without touching the graph search, we can improve a little for the circular deadlock case, since we have the to-be-added lock dependency, and know whether these two locks are read/write/sync.
In order to know whether a held_lock is sync or not, a bit was "stolen" from ->references, which reduces our limit for the same lock class nesting from 2^12 to 2^11, and it should still be good enough.
Besides, since we now have a bit in held_lock for sync, we don't need the "hardirqoffs being 1" trick, and we can also avoid the __lock_release() if we jump out of __lock_acquire() before the held_lock is stored.
With these changes, a deadlock case evolved with read lock and sync gets a better print-out from:
[...] Possible unsafe locking scenario:
[...]
[...]        CPU0                    CPU1
[...]        ----                    ----
[...]   lock(srcuA);
[...]                            lock(srcuB);
[...]                            lock(srcuA);
[...]   lock(srcuB);
to
[...] Possible unsafe locking scenario:
[...]
[...]        CPU0                    CPU1
[...]        ----                    ----
[...]   rlock(srcuA);
[...]                            lock(srcuB);
[...]                            lock(srcuA);
[...]   sync(srcuB);
Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Boqun Feng <[email protected]>
# 2f1f043e | 13-Jan-2023 | Boqun Feng <[email protected]>
locking/lockdep: Introduce lock_sync()
Currently, functions like synchronize_srcu() do not have lockdep annotations resembling those of other write-side locking primitives. Such annotations might look as follows:
lock_acquire(); lock_release();
Such annotations would tell lockdep that synchronize_srcu() acts like an empty critical section that waits for other (read-side) critical sections to finish. This would definitely catch some deadlocks, but as pointed out by Paul McKenney [1], it could also introduce false positives because of irq-safe/unsafe detection. Of course, there are tricks that could help with this:
    might_sleep(); // Existing statement in __synchronize_srcu().
    if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
        local_irq_disable();
        lock_acquire();
        lock_release();
        local_irq_enable();
    }
But it would be better for lockdep to provide a separate annotation for functions like synchronize_srcu(), so that people won't need to repeat the ugly tricks above.
Therefore introduce lock_sync(), which is simply a lock+unlock pair with no irq safe/unsafe deadlock check. This works because the to-be-annotated functions do not create real critical sections, and there is therefore no way that irq can create extra dependencies.
[1]: https://lore.kernel.org/lkml/20180412021233.ewncg5jjuzjw3x62@tardis/
Signed-off-by: Boqun Feng <[email protected]> Acked-by: Waiman Long <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> [ boqun: Fix typos reported by Davidlohr Bueso and Paul E. Mckenney ] Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Boqun Feng <[email protected]>
Revision tags: v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2
# 2edcedcd | 09-Jun-2022 | Sebastian Andrzej Siewior <[email protected]>
locking/lockdep: Remove lockdep_init_map_crosslock.
The cross-release bits have been removed, lockdep_init_map_crosslock() is a leftover.
Remove lockdep_init_map_crosslock.
Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Waiman Long <[email protected]> Acked-by: Waiman Long <[email protected]> Link: https://lore.kernel.org/r/[email protected] Link: https://lore.kernel.org/r/[email protected]
# eae6d58d | 17-Jun-2022 | Peter Zijlstra <[email protected]>
locking/lockdep: Fix lockdep_init_map_*() confusion
Commit dfd5e3f5fe27 ("locking/lockdep: Mark local_lock_t") added yet another lockdep_init_map_*() variant, but forgot to update all the existing users of the most complicated version.
This could lead to a loss of lock_type and hence an incorrect report. Given the relative rarity of both local_lock and these annotations, this is unlikely to happen in practice; still, it is best to fix things.
Fixes: dfd5e3f5fe27 ("locking/lockdep: Mark local_lock_t") Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5
# d864b8ea | 26-Apr-2022 | Dan Williams <[email protected]>
cxl/acpi: Add root device lockdep validation
The CXL "root" device, ACPI0017, is an attach point for coordinating platform level CXL resources and is the parent device for a CXL port topology tree. As such it has distinct locking rules relative to other CXL subsystem objects, but because it is an ACPI device the lock class is established well before it is given to the cxl_acpi driver.
However, the lockdep API does support changing the lock class "live" for situations like this. Add a device_lock_set_class() helper that a driver can use in ->probe() to set a custom lock class, and device_lock_reset_class() to return to the default "no validate" class before the custom lock class key goes out of scope after ->remove().
Note the helpers are all macros to support dead code elimination in the CONFIG_PROVE_LOCKING=n case, however device_lock_set_class() still needs #ifdef CONFIG_PROVE_LOCKING since lockdep_match_class() explicitly does not have a helper in the CONFIG_PROVE_LOCKING=n case (see comment in lockdep.h). The lockdep API needs 2 small tweaks to prevent "unused" warnings for the @key argument to lock_set_class(), and a new lock_set_novalidate_class() is added to supplement lockdep_set_novalidate_class() in the cases where the lock class is converted while the lock is held.
Suggested-by: Peter Zijlstra <[email protected]> Cc: "Rafael J. Wysocki" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Will Deacon <[email protected]> Cc: Waiman Long <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Alison Schofield <[email protected]> Cc: Vishal Verma <[email protected]> Cc: Ben Widawsky <[email protected]> Cc: Jonathan Cameron <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]> Reviewed-by: Ira Weiny <[email protected]> Link: https://lore.kernel.org/r/165100081305.1528964.11138612430659737238.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <[email protected]>
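A usage sketch modeled on the description above, with an illustrative driver name (the actual CXL driver code may differ): set the custom class while the driver is bound and restore the default class before the key goes out of scope.

    #include <linux/device.h>
    #include <linux/platform_device.h>

    static struct lock_class_key my_root_key;

    static int my_root_probe(struct platform_device *pdev)
    {
        /* Give this device_lock() its own class so lockdep can validate it. */
        device_lock_set_class(&pdev->dev, &my_root_key);
        return 0;
    }

    static int my_root_remove(struct platform_device *pdev)
    {
        /* Return to the default "no validate" class before unbinding. */
        device_lock_reset_class(&pdev->dev);
        return 0;
    }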
Revision tags: v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5
# f79c9b8a | 18-Feb-2022 | tangmeng <[email protected]>
kernel/lockdep: move lockdep sysctls to its own file
kernel/sysctl.c is a kitchen sink where everyone leaves their dirty dishes; this makes it very difficult to maintain.
To help with this maintenance let's start by moving sysctls to places where they actually belong. The proc sysctl maintainers do not want to know what sysctl knobs you wish to add for your own piece of code, we just care about the core logic.
All filesystem sysctls now get reviewed by fs folks. This commit follows the filesystem precedent and moves the prove_locking and lock_stat sysctls to their own file, kernel/lockdep.c.
Signed-off-by: tangmeng <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
# 351bdbb6 | 21-Mar-2022 | Sebastian Andrzej Siewior <[email protected]>
net: Revert the softirq will run annotation in ____napi_schedule().
The lockdep annotation lockdep_assert_softirq_will_run() expects that either hard or soft interrupts are disabled because both guarantee that the "raised" soft-interrupts will be processed once the context is left.
This triggers in flush_smp_call_function_from_idle(), but in this case it explicitly calls do_softirq() if there are pending softirqs.
Revert the "softirq will run" annotation in ____napi_schedule() and move the check back to __netif_rx() as it was. Keep the IRQ-off assert in ____napi_schedule() because this is always required.
Fixes: fbd9a2ceba5c7 ("net: Add lockdep asserts to ____napi_schedule().") Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Jason A. Donenfeld <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
# fbd9a2ce | 11-Mar-2022 | Sebastian Andrzej Siewior <[email protected]>
net: Add lockdep asserts to ____napi_schedule().
____napi_schedule() needs to be invoked with disabled interrupts due to __raise_softirq_irqoff (in order not to corrupt the per-CPU list). ____napi_schedule() needs also to be invoked from an interrupt context so that the raised-softirq is processed while the interrupt context is left.
Add lockdep asserts for both conditions. While this is the second time the irq/softirq check is needed, provide a generic lockdep_assert_softirq_will_run() which is used by both callers.
Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: David S. Miller <[email protected]>
Revision tags: v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7
# f98a3dcc | 22-Oct-2021 | Arnd Bergmann <[email protected]>
locking: Remove spin_lock_flags() etc
parisc, ia64 and powerpc32 are the only remaining architectures that provide custom arch_{spin,read,write}_lock_flags() functions, which are meant to re-enable interrupts while waiting for a spinlock.
However, none of these can actually run into this codepath, because it is only called on architectures without CONFIG_GENERIC_LOCKBREAK, or when CONFIG_DEBUG_LOCK_ALLOC is set without CONFIG_LOCKDEP, and none of those combinations are possible on the three architectures.
Going back in the git history, it appears that arch/mn10300 may have been able to run into this code path, but there is a good chance that it never worked. On the architectures that still exist, it was already impossible to hit back in 2008 after the introduction of CONFIG_GENERIC_LOCKBREAK, and possibly earlier.
As this is all dead code, just remove it and the helper functions built around it. For arch/ia64, the inline asm could be cleaned up, but it seems safer to leave it untouched.
Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Helge Deller <[email protected]> # parisc Link: https://lore.kernel.org/r/[email protected]
Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5
# d19c8137 | 02-Aug-2021 | Peter Zijlstra <[email protected]>
locking/lockdep: Provide lockdep_assert{,_once}() helpers
Extract lockdep_assert{,_once}() helpers to more easily write composite assertions like, for example:
lockdep_assert(lockdep_is_held(&drm_device.master_mutex) || lockdep_is_held(&drm_file.master_lookup_lock));
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Desmond Cheong Zhi Xi <[email protected]> Acked-by: Boqun Feng <[email protected]> Acked-by: Waiman Long <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Daniel Vetter <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
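A sketch of what such helpers look like (assumed to be close to, but not necessarily identical to, the upstream definitions): a condition-based assert and a warn-once variant, both gated on debug_locks so they stay quiet once lockdep has given up.

    #define lockdep_assert(cond)        \
        do { WARN_ON(debug_locks && !(cond)); } while (0)

    #define lockdep_assert_once(cond)   \
        do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)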
Revision tags: v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5
# e2db7592 | 22-Mar-2021 | Ingo Molnar <[email protected]>
locking: Fix typos in comments
Fix ~16 single-word typos in locking code comments.
Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse
# f8cfa466 | 27-Feb-2021 | Shuah Khan <[email protected]>
lockdep: Add lockdep lock state defines
Adds defines for lock state returns from lock_is_held_type() based on Johannes Berg's suggestions, as they make it easier to read and maintain the lock states. These are defines and an enum to avoid changes to lock_is_held_type() and lockdep_is_held() return types.
Updates to lock_is_held_type() and __lock_is_held() to use the new defines.
Signed-off-by: Shuah Khan <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/linux-wireless/[email protected]/
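A sketch of the resulting states and of how a caller might use them (the numeric values follow the convention described above and are assumed to match upstream: unknown when lockdep is disabled, otherwise held/not held).

    #include <linux/bug.h>
    #include <linux/lockdep.h>

    #define LOCK_STATE_UNKNOWN   -1   /* lockdep is disabled, cannot tell */
    #define LOCK_STATE_NOT_HELD   0
    #define LOCK_STATE_HELD       1

    /* Example caller: only warn when lockdep can actually answer. */
    static void assert_held_any(struct lockdep_map *map)
    {
        /* read == -1: don't care whether it is held for read or write. */
        WARN_ON_ONCE(lock_is_held_type(map, -1) == LOCK_STATE_NOT_HELD);
    }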
# 3e31f947 | 27-Feb-2021 | Shuah Khan <[email protected]>
lockdep: Add lockdep_assert_not_held()
Some kernel functions must be called without holding a specific lock. Add lockdep_assert_not_held() to be used in these functions to detect incorrect calls while holding a lock.
lockdep_assert_not_held() provides the opposite functionality of lockdep_assert_held() which is used to assert calls that require holding a specific lock.
Incorporates suggestions from Peter Zijlstra to avoid misfires when lockdep_off() is employed.
The need for lockdep_assert_not_held() came up in a discussion on an ath10k patch. ath10k_drain_tx() and i915_vma_pin_ww() are examples of functions that can use lockdep_assert_not_held().
Signed-off-by: Shuah Khan <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/linux-wireless/[email protected]/
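A minimal usage sketch modeled on the ath10k example mentioned above; the struct, field, and function names here are illustrative.

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    struct wifi_dev {
        struct mutex conf_mutex;
    };

    static void wifi_drain_tx(struct wifi_dev *ar)
    {
        /* This helper sleeps; calling it with conf_mutex held would deadlock. */
        lockdep_assert_not_held(&ar->conf_mutex);

        /* ... wait for pending TX work to finish ... */
    }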
Revision tags: v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4
# 7621350c | 15-Jan-2021 | Christian König <[email protected]>
drm/syncobj: make lockdep complain on WAIT_FOR_SUBMIT v3
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT can't be used when we hold locks since we are basically waiting for userspace to do something.
Holding a lock while doing so can trivially deadlock with page faults etc...
So make lockdep complain when a driver tries to do this.
v2: Add lockdep_assert_none_held() macro. v3: Add might_sleep() and also use lockdep_assert_none_held() in the IOCTL path.
Signed-off-by: Christian König <[email protected]> Reviewed-by: Daniel Vetter <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://patchwork.freedesktop.org/patch/414944/
Revision tags: v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10
# dfd5e3f5 | 09-Dec-2020 | Peter Zijlstra <[email protected]>
locking/lockdep: Mark local_lock_t
The local_lock_t's are special because they cannot form IRQ inversions; make sure we can tell them apart from the rest of the locks.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
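Roughly, the change described above boils down to recording a lock type in the lockdep_map so local_lock_t can be told apart from other locks; a sketch of the idea (assumed to be close to the upstream enum, not an exact copy):

    enum lockdep_lock_type {
        LD_LOCK_NORMAL = 0,  /* normal, catch-all */
        LD_LOCK_PERCPU,      /* per-CPU local_lock_t */
        LD_LOCK_MAX,
    };

    /*
     * local_lock initialization would then pass LD_LOCK_PERCPU as the final
     * lock_type argument of lockdep_init_map_type(), e.g.:
     *
     *    lockdep_init_map_type(&l->dep_map, name, key, 0,
     *                          LD_WAIT_CONFIG, LD_WAIT_INV, LD_LOCK_PERCPU);
     */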