|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3 |
|
| #
8e2bad54 |
| 10-Feb-2025 |
Thadeu Lima de Souza Cascardo <[email protected]> |
dlm: prevent NPD when writing a positive value to event_done
do_uevent returns the value written to event_done. In case it is a positive value, new_lockspace would undo all the work, and lockspace would not be set. __dlm_new_lockspace, however, would treat that positive value as a success due to commit 8511a2728ab8 ("dlm: fix use count with multiple joins").
Down the line, device_create_lockspace would pass that NULL lockspace to dlm_find_lockspace_local, leading to a NULL pointer dereference.
Treating such positive values as successes prevents the problem. Given this has been broken for so long, this is unlikely to break userspace expectations.
Fixes: 8511a2728ab8 ("dlm: fix use count with multiple joins")
Signed-off-by: Thadeu Lima de Souza Cascardo <[email protected]>
Signed-off-by: David Teigland <[email protected]>
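As a loose sketch of the behaviour this fix describes (the helper name and its placement are assumptions, not the actual dlm code), only negative values coming back through event_done are treated as errors, and positive values are normalized to success:

```c
/* Minimal sketch, not the actual dlm patch: a positive value written to
 * event_done still counts as a successful lockspace join, so only a
 * negative errno is propagated as a failure. */
static int normalize_uevent_result(int error)
{
	if (error > 0)		/* user space wrote a positive status */
		return 0;	/* treat it as success, keep the lockspace */
	return error;		/* 0 or a negative errno passes through */
}
```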
|
|
Revision tags: v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2 |
|
| #
94e180d6 |
| 02-Aug-2024 |
Alexander Aring <[email protected]> |
dlm: async freeing of lockspace resources
This patch handles freeing of lockspace resources asynchronously, outside of the release_lockspace() context. The release_lockspace() context is sometimes called in a time-critical context, e.g. the umount syscall, and almost every user space init system will time out if it takes too long. To reduce the potential waiting time, release_lockspace() now only deregisters the lockspace from the DLM subsystem, and the actual freeing of lockspace resources is done in a workqueue worker, following the recommendation in:
https://lore.kernel.org/all/[email protected]/T/#u
as flushing of system workqueues is not allowed. Most of the time spent releasing DLM resources goes into freeing the data structures "ls->ls_lkbxa" and "ls->ls_rsbtbl", because every entry must be iterated and those data structures can contain millions of entries. For now this patch only handles the freeing of those data structures, as they are the main reason release_lockspace() blocks before returning.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
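A hedged sketch of the deferred-free pattern described above; the workqueue, structure, and field names are illustrative, not the real dlm identifiers:

```c
#include <linux/workqueue.h>
#include <linux/xarray.h>
#include <linux/slab.h>

/* Dedicated workqueue: the system workqueues must not be flushed. */
static struct workqueue_struct *example_free_wq;

struct ls_free_work {
	struct work_struct work;
	struct xarray lkb_xa;	/* may hold millions of entries */
};

static void ls_free_fn(struct work_struct *work)
{
	struct ls_free_work *fw = container_of(work, struct ls_free_work, work);
	unsigned long id;
	void *entry;

	/* The expensive iteration runs here, off the umount path. */
	xa_for_each(&fw->lkb_xa, id, entry)
		kfree(entry);
	xa_destroy(&fw->lkb_xa);
	kfree(fw);
}

/* Called from the time-critical release path: only hand the work off. */
static void ls_schedule_free(struct ls_free_work *fw)
{
	INIT_WORK(&fw->work, ls_free_fn);
	queue_work(example_free_wq, &fw->work);
}
```

The release path then returns as soon as the work is queued, which is the point of the change.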
|
| #
8a4cf500 |
| 02-Aug-2024 |
Alexander Aring <[email protected]> |
dlm: drop kobject release callback handling
This patch moves the freeing of the "struct dlm_ls" resource out of the kobject release handling. Instead, we call kfree() after the kobject_put() on the lockspace kobject structure, which should always be the last put call. This prepares for releasing all lockspace resources asynchronously in the background, with release_lockspace() only deregistering everything.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
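A rough sketch of the ordering this change describes, with invented type names: the kobject release callback no longer frees the enclosing structure, and the caller frees it after what should be the final put:

```c
#include <linux/kobject.h>
#include <linux/slab.h>

struct example_ls {
	struct kobject kobj;
	/* ... other lockspace fields ... */
};

static void example_ls_kobj_release(struct kobject *kobj)
{
	/* intentionally empty: the enclosing struct is freed by the caller */
}

static const struct kobj_type example_ls_ktype = {
	.release = example_ls_kobj_release,
};

static void example_ls_teardown(struct example_ls *ls)
{
	kobject_put(&ls->kobj);	/* expected to be the last put */
	kfree(ls);		/* freed here, not in ->release() */
}
```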
|
|
Revision tags: v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4 |
|
| #
79ced51e |
| 12-Jun-2024 |
Alexander Aring <[email protected]> |
dlm: remove DLM_LSFL_SOFTIRQ from exflags
The DLM rcom handling has a check that the exflags are the same across all lockspace member nodes. Some flags require such handling, but DLM_LSFL_SOFTIRQ does not, and it should remain backwards compatible with other lockspaces that do not set this flag.
Fixes: f328a26eeb53 ("dlm: introduce DLM_LSFL_SOFTIRQ_SAFE")
Signed-off-by: Alexander Aring <[email protected]>
Signed-off-by: David Teigland <[email protected]>
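Illustrative only: the kind of mask-before-compare that lets a node-local flag be ignored in the exflags consistency check; the flag value below is a placeholder, not the real DLM_LSFL_SOFTIRQ bit:

```c
#include <linux/types.h>

#define EXAMPLE_LSFL_SOFTIRQ	0x80000000u	/* hypothetical, local-only bit */

static bool exflags_compatible(u32 local_exflags, u32 remote_exflags)
{
	u32 mask = ~EXAMPLE_LSFL_SOFTIRQ;	/* strip bits that need not match */

	return (local_exflags & mask) == (remote_exflags & mask);
}
```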
|
|
Revision tags: v6.10-rc3 |
|
| #
68bde2a6 |
| 03-Jun-2024 |
Alexander Aring <[email protected]> |
dlm: implement LSFL_SOFTIRQ_SAFE
When a lockspace user allows it, run callback functions directly from softirq context, instead of queueing callbacks to be run from the dlm_callback workqueue context.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
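A sketch of the dispatch decision described above, with invented structure and flag names: run the callback inline when the lockspace user has declared it softirq safe, otherwise fall back to the callback workqueue:

```c
#include <linux/workqueue.h>
#include <linux/bitops.h>

#define EXAMPLE_LSFL_SOFTIRQ	0	/* hypothetical internal flag bit */

struct example_cb {
	struct work_struct work;
	void (*fn)(void *arg);
	void *arg;
};

struct example_ls {
	unsigned long flags;
	struct workqueue_struct *callback_wq;
};

static void dispatch_callback(struct example_ls *ls, struct example_cb *cb)
{
	if (test_bit(EXAMPLE_LSFL_SOFTIRQ, &ls->flags))
		cb->fn(cb->arg);			/* direct call, softirq context */
	else
		queue_work(ls->callback_wq, &cb->work);	/* legacy deferred path */
}
```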
|
| #
f328a26e |
| 03-Jun-2024 |
Alexander Aring <[email protected]> |
dlm: introduce DLM_LSFL_SOFTIRQ_SAFE
Introduce a new external lockspace flag DLM_LSFL_SOFTIRQ_SAFE. A lockspace user will set this flag if it can handle dlm running the callback functions from softirq context. When not set, dlm will continue to run callback functions from the dlm_callback workqueue. The new lockspace flag cannot be used for user space lockspaces, so a uapi placeholder definition is used for the new flag value.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
d3d85e9a |
| 03-Jun-2024 |
Alexander Aring <[email protected]> |
dlm: use LSFL_FS to check for kernel lockspace
The existing external lockspace flag DLM_LSFL_FS is now also saved as an internal flag LSFL_FS, so it can be checked from other code locations which want to know if a lockspace is used from the kernel or user space.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
4f5957a9 |
| 10-Jun-2024 |
David Teigland <[email protected]> |
dlm: change list and timer names
The old terminology of "toss" and "keep" is no longer an accurate description of the rsb states and lists, so change the names to "inactive" and "active". The old names had also been copied into the scanning code, which is changed back to use the "scan" name.
- "active" rsb structs have lkb's attached, and are ref counted. - "inactive" rsb structs have no lkb's attached, are not ref counted. - "scan" list is for rsb's that can be freed after a timeout period. - "slow" lists are for infrequent iterations through active or inactive rsb structs. - inactive rsb structs that are directory records will not be put on the scan list, since they are not freed based on timeouts. - inactive rsb structs that are not directory records will be put on the scan list to be freed, since they are not longer needed.
Signed-off-by: David Teigland <[email protected]>
|
|
Revision tags: v6.10-rc2 |
|
| #
fa0b54f1 |
| 28-May-2024 |
Alexander Aring <[email protected]> |
dlm: move recover idr to xarray datastructure
According to the kernel documentation, idr is deprecated and xarrays should be used instead. This patch moves the recover idr implementation to the xarray data structure.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
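For reference, a minimal xarray usage sketch in the spirit of this conversion; the helper names are made up, and the xarray is assumed to have been initialized with xa_init_flags(..., XA_FLAGS_ALLOC) so that xa_alloc() can hand out ids:

```c
#include <linux/xarray.h>

static int recover_entry_add(struct xarray *xa, void *entry, u32 *id)
{
	/* xa_alloc() picks a free index, much like idr_alloc() did */
	return xa_alloc(xa, id, entry, xa_limit_32b, GFP_KERNEL);
}

static void *recover_entry_find(struct xarray *xa, u32 id)
{
	return xa_load(xa, id);		/* replaces idr_find() */
}

static void recover_entry_del(struct xarray *xa, u32 id)
{
	xa_erase(xa, id);		/* replaces idr_remove() */
}
```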
|
| #
f455eb84 |
| 28-May-2024 |
Alexander Aring <[email protected]> |
dlm: move lkb idr to xarray datastructure
According to the kernel documentation, idr is deprecated and xarrays should be used instead. This patch moves the lkb idr implementation to xarrays.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
1ffefc19 |
| 28-May-2024 |
Alexander Aring <[email protected]> |
dlm: drop own rsb pre allocation mechanism
This patch drops our own rsb pre-allocation mechanism. Pre-allocation is already done by the kmem caches, so we don't need another layer on top of them running its own pre-allocation scheme.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
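A hedged sketch of leaning on the slab layer directly; the cache and struct names are illustrative, not the dlm ones:

```c
#include <linux/slab.h>
#include <linux/errno.h>

struct example_rsb {
	char name[64];
	/* ... */
};

static struct kmem_cache *example_rsb_cache;

static int example_rsb_cache_init(void)
{
	example_rsb_cache = kmem_cache_create("example_rsb",
					      sizeof(struct example_rsb),
					      0, 0, NULL);
	return example_rsb_cache ? 0 : -ENOMEM;
}

static struct example_rsb *example_rsb_alloc(void)
{
	/* the slab allocator already keeps per-CPU pools of free objects,
	 * so no extra pre-allocation layer is needed on top of it */
	return kmem_cache_zalloc(example_rsb_cache, GFP_KERNEL);
}

static void example_rsb_free(struct example_rsb *r)
{
	kmem_cache_free(example_rsb_cache, r);
}
```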
|
| #
4db41bf4 |
| 28-May-2024 |
Alexander Aring <[email protected]> |
dlm: remove ls_local_handle from struct dlm_ls
This patch removes ls_local_handle from struct dlm_ls, as it only stores the ls pointer of the top level structure itself, which isn't necessary. There is lookup functionality to look up the lockspace in dlm_find_lockspace_local(), but the given input parameter is already that pointer. The lookup might make using a lockspace a little safer, but passing a wrong lockspace pointer is a bug in the code, so we save the additional lookup here. The dlm_ls structure can still be hidden behind the dlm_lockspace_t handle pointer.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
b88b249b |
| 28-May-2024 |
Alexander Aring <[email protected]> |
dlm: remove scand leftovers
This patch removes some leftover code related to dlm_scand, which was dropped in commit b1f2381c1a8d ("dlm: drop dlm_scand kthread and use timers").
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
|
Revision tags: v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6 |
|
| #
7b72ab2c |
| 23-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: return -ENOMEM if ls_recover_buf fails
This patch fixes an allocation failure path to return -ENOMEM; the change was missed in commit 6c648035cbe7 ("dlm: switch to use rhashtable for rsbs").
Reported-by: kernel test robot <[email protected]>
Reported-by: Dan Carpenter <[email protected]>
Closes: https://lore.kernel.org/r/[email protected]/
Fixes: 6c648035cbe7 ("dlm: switch to use rhashtable for rsbs")
Signed-off-by: Alexander Aring <[email protected]>
Signed-off-by: David Teigland <[email protected]>
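The shape of the fix, roughly (buffer length and field names are placeholders): make the failure branch return an errno instead of leaving the error value at zero:

```c
#include <linux/slab.h>
#include <linux/errno.h>

struct example_ls {
	void *recover_buf;
};

static int example_alloc_recover_buf(struct example_ls *ls, size_t len)
{
	ls->recover_buf = kmalloc(len, GFP_NOFS);
	if (!ls->recover_buf)
		return -ENOMEM;	/* the errno that was previously missing */
	return 0;
}
```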
|
|
Revision tags: v6.9-rc5 |
|
| #
7b012732 |
| 17-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: fix sleep in atomic context
This patch changes the orphans mutex to a spinlock, since commit c288745f1d4a ("dlm: avoid blocking receive at the end of recovery") uses a rwlock_t to lock the DLM message receive path, and do_purge() can be called while this lock is held, which forbids sleeping.
We need to use spin_lock_bh() because do_purge() can also be called from user context via dlm_user_purge(), and since commit 92d59adfaf71 ("dlm: do message processing in softirq context") the DLM message receive path runs in softirq context.
Fixes: c288745f1d4a ("dlm: avoid blocking receive at the end of recovery")
Reported-by: Dan Carpenter <[email protected]>
Closes: https://lore.kernel.org/gfs2/[email protected]/
Signed-off-by: Alexander Aring <[email protected]>
Signed-off-by: David Teigland <[email protected]>
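A small sketch of the resulting locking rule, with illustrative names: the orphans lock is a spinlock taken with the _bh variants, so it is safe both from user context and against the softirq receive path:

```c
#include <linux/spinlock.h>
#include <linux/list.h>

struct example_purge_state {
	spinlock_t lock;		/* was a mutex, which may sleep */
	struct list_head orphans;
};

static void example_do_purge(struct example_purge_state *ps)
{
	spin_lock_bh(&ps->lock);	/* blocks softirq processing on this CPU */
	/* ... walk ps->orphans and purge matching entries ... */
	spin_unlock_bh(&ps->lock);
}
```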
|
| #
15fd7e55 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: use rwlock for lkbidr
Convert the lock for lkbidr to an rwlock. Most idr lookups will use the read lock.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
e9131359 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: use rwlock for rsb hash table
The conversion to rhashtable introduced a hash table lock per lockspace, in place of per bucket locks. To make this more scalable, switch to using a rwlock for hash table access. The common case fast path uses it as a read lock.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
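An illustrative rwlock pattern matching the description: many concurrent readers on the lookup fast path, the write lock only while modifying the table. The list-based "table" here is just a stand-in, not the dlm hash table:

```c
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>

static DEFINE_RWLOCK(example_table_lock);
static LIST_HEAD(example_rsb_list);

struct example_rsb {
	struct list_head list;
	u32 key;
};

static struct example_rsb *example_rsb_lookup(u32 key)
{
	struct example_rsb *r, *found = NULL;

	read_lock(&example_table_lock);		/* lookups run concurrently */
	list_for_each_entry(r, &example_rsb_list, list) {
		if (r->key == key) {
			found = r;
			break;
		}
	}
	read_unlock(&example_table_lock);
	return found;
}

static void example_rsb_insert(struct example_rsb *r)
{
	write_lock(&example_table_lock);	/* exclusive only for updates */
	list_add(&r->list, &example_rsb_list);
	write_unlock(&example_table_lock);
}
```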
|
| #
b1f2381c |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: drop dlm_scand kthread and use timers
Currently the scand kthread acts as a garbage collector for expired rsbs on the toss list, cleaning them up after a certain timeout. It triggers every couple of seconds and iterates over the toss list while holding ls_rsbtbl_lock for the whole hash bucket iteration.
To reduce the amount of time holding ls_rsbtbl_lock, we now handle the disposal of expired rsbs using a per-lockspace timer that expires for the earliest tossed rsb on the lockspace toss queue. This toss queue is ordered according to the rsb res_toss_time with the earliest tossed rsb as the first entry. The toss timer will only trylock() necessary locks, since it is low priority garbage collection, and will rearm the timer if trylock() fails. If the timer function does not find any expired rsb's, it rearms the timer with the next earliest expired rsb.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
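A hedged sketch of the per-lockspace timer pattern described above; field and function names are invented, and the actual expiry bookkeeping is only hinted at:

```c
#include <linux/timer.h>
#include <linux/spinlock.h>
#include <linux/jiffies.h>

struct example_scan {
	struct timer_list timer;
	spinlock_t lock;
	unsigned long next_expiry;	/* jiffies of the earliest expiring rsb */
};

static void example_scan_timer_fn(struct timer_list *t)
{
	struct example_scan *s = from_timer(s, t, timer);
	unsigned long next;

	/* Low-priority garbage collection: never wait for the lock. */
	if (!spin_trylock(&s->lock)) {
		mod_timer(&s->timer, jiffies + HZ);	/* retry later */
		return;
	}

	/* ... free rsbs whose timeout passed, recompute s->next_expiry ... */

	next = s->next_expiry;
	spin_unlock(&s->lock);
	if (next)
		mod_timer(&s->timer, next);	/* rearm for the next candidate */
}

static void example_scan_init(struct example_scan *s)
{
	spin_lock_init(&s->lock);
	timer_setup(&s->timer, example_scan_timer_fn, 0);
}
```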
|
| #
6c648035 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: switch to use rhashtable for rsbs
Replace our own hash table with the more advanced rhashtable for keeping rsb structs.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
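For orientation, a minimal rhashtable setup in the same spirit; the key layout and parameters are illustrative rather than the actual dlm ones:

```c
#include <linux/rhashtable.h>
#include <linux/types.h>

struct example_rsb {
	struct rhash_head node;
	u32 key;
};

static const struct rhashtable_params example_rsb_params = {
	.head_offset		= offsetof(struct example_rsb, node),
	.key_offset		= offsetof(struct example_rsb, key),
	.key_len		= sizeof(u32),
	.automatic_shrinking	= true,
};

static int example_table_init(struct rhashtable *ht)
{
	return rhashtable_init(ht, &example_rsb_params);
}

static struct example_rsb *example_lookup(struct rhashtable *ht, u32 key)
{
	/* the _fast lookup variant takes the RCU read lock internally */
	return rhashtable_lookup_fast(ht, &key, example_rsb_params);
}

static int example_insert(struct rhashtable *ht, struct example_rsb *r)
{
	return rhashtable_insert_fast(ht, &r->node, example_rsb_params);
}
```

The main win over an open-coded table is that resizing and shrinking are handled by the rhashtable core.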
|
| #
93a693d1 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: add rsb lists for iteration
To prepare for using rhashtable, add two rsb lists for iterating through rsb's in two uncommon cases where this is necessary:
- when dumping rsb state from debugfs, now using seq_list.
- when looking at all rsb's during recovery.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
2d903540 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: merge toss and keep hash table lists into one list
There are several places where lock processing can perform two hash table lookups, first in the "keep" list, and if not found, in the "toss" list. This patch introduces a new rsb state flag "RSB_TOSS" to represent the difference between the state of being on keep vs toss list, so that the two lists can be combined. This avoids cases of two lookups.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
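Illustration of the single-lookup idea with invented names: one table holds both kinds of rsb, and a flag bit records which logical list an entry belongs to, so a second lookup is never needed:

```c
#include <linux/bitops.h>
#include <linux/types.h>

#define EXAMPLE_RSB_TOSS	0	/* hypothetical flag bit number */

struct example_rsb {
	unsigned long flags;
	/* ... */
};

static bool example_rsb_is_tossed(const struct example_rsb *r)
{
	return test_bit(EXAMPLE_RSB_TOSS, &r->flags);
}

static void example_rsb_mark_tossed(struct example_rsb *r)
{
	set_bit(EXAMPLE_RSB_TOSS, &r->flags);	/* leaves the "keep" state */
}
```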
|
| #
dcdaad05 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: change to single hashtable lock
Prepare to replace our own hash table with rhashtable by replacing the per-bucket locks in our own hash table with a single lock.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
700b0480 |
| 15-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: increment ls_count for dlm_scand
Increment the ls_count value while dlm_scand is processing a lockspace so that release_lockspace()/remove_lockspace() will wait for dlm_scand to finish.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
|
Revision tags: v6.9-rc4, v6.9-rc3 |
|
| #
578acf9a |
| 02-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: use spin_lock_bh for message processing
Use spin_lock_bh for all spinlocks involved in message processing, in preparation for softirq message processing. DLM lock requests from user space involve dlm processing in user context, in addition to the standard kernel context, necessitating bh variants.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|
| #
d52c9b8f |
| 02-Apr-2024 |
Alexander Aring <[email protected]> |
dlm: convert ls_recv_active from rw_semaphore to rwlock
Convert ls_recv_active rw_semaphore to an rwlock to avoid sleeping, in preparation for softirq message processing.
Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David Teigland <[email protected]>
|