Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13

3c7fd942 | 16-Jan-2025 | Suren Baghdasaryan <[email protected]>
seqlock: add missing parameter documentation for raw_seqcount_try_begin()

Add missing documentation for raw_seqcount_try_begin() start parameter.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: dba4761a3e40 ("seqlock: add raw_seqcount_try_begin")
Reported-by: Stephen Rothwell <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Suren Baghdasaryan <[email protected]>
Acked-by: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Liam Howlett <[email protected]>
Cc: Peter Zijlstra (Intel) <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
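
A hedged sketch of the kind of kernel-doc block this commit completes (illustrative wording, not the verbatim comment from seqlock.h):

    /**
     * raw_seqcount_try_begin() - begin a seqcount_t read critical section
     *                            if the counter is even
     * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
     * @start: count snapshot to be checked for odd/even and later passed
     *         to read_seqcount_retry()
     *
     * Return: true if a read critical section could be opened.
     */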

Revision tags: v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1

dba4761a | 22-Nov-2024 | Suren Baghdasaryan <[email protected]>
seqlock: add raw_seqcount_try_begin

Add raw_seqcount_try_begin() to open a read critical section of the given seqcount_t if the counter is even. This enables eliding the critical section entirely if the counter is odd, instead of doing the speculation knowing it will fail.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Suren Baghdasaryan <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: David Howells <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Mateusz Guzik <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Pasha Tatashin <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: Sourav Panda <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
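
A minimal usage sketch under the semantics stated above: the macro writes the counter snapshot into start and evaluates true only when the count is even. The struct and function names here are hypothetical:

    #include <linux/seqlock.h>

    struct speculative_state {
            seqcount_t seq;
            unsigned long value;
    };

    static bool try_read_value(struct speculative_state *st, unsigned long *out)
    {
            unsigned int start;

            /* Open the read section only if no writer holds the count odd. */
            if (!raw_seqcount_try_begin(&st->seq, start))
                    return false;   /* writer active: skip the doomed speculation */

            *out = READ_ONCE(st->value);

            /* Succeed only if no writer raced with the read. */
            return !read_seqcount_retry(&st->seq, start);
    }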

Revision tags: v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4

c7175957 | 27-Jul-2023 | Mateusz Guzik <[email protected]>
seqlock: annotate spinning as unlikely() in __read_seqcount_begin

The annotation already used to be there, but got lost in 52ac39e5db5148f7 ("seqlock: seqcount_t: Implement all read APIs as statement expressions"). It does not look like that was intentional.

Without it, gcc 12 decides to compile the following in path_init:

    nd->m_seq = __read_seqcount_begin(&mount_lock.seqcount);
    nd->r_seq = __read_seqcount_begin(&rename_lock.seqcount);

into two conditional forward jumps taken when the value is even, that is, a branch-prediction miss by default in the common case on x86-64.

With the patch, jumps are taken only for odd values.

before:

    [snip]
    mov    0x104fe96(%rip),%eax        # 0xffffffff82409680 <mount_lock>
    test   $0x1,%al
    je     0xffffffff813b97fa <path_init+122>
    pause
    mov    0x104fe8a(%rip),%eax        # 0xffffffff82409680 <mount_lock>
    test   $0x1,%al
    jne    0xffffffff813b97ee <path_init+110>
    mov    %eax,0x48(%rbx)
    mov    0x104fdfd(%rip),%eax        # 0xffffffff82409600 <rename_lock>
    test   $0x1,%al
    je     0xffffffff813b9813 <path_init+147>
    pause
    mov    0x104fdf1(%rip),%eax        # 0xffffffff82409600 <rename_lock>
    test   $0x1,%al
    jne    0xffffffff813b9807 <path_init+135>
    [/snip]

after:

    [snip]
    mov    0x104fec6(%rip),%eax        # 0xffffffff82409680 <mount_lock>
    test   $0x1,%al
    jne    0xffffffff813b99af <path_init+607>
    mov    %eax,0x48(%rbx)
    mov    0x104fe35(%rip),%eax        # 0xffffffff82409600 <rename_lock>
    test   $0x1,%al
    jne    0xffffffff813b999d <path_init+589>
    [/snip]

Interestingly, .text gets slightly smaller (as reported by size(1)):

    before: 20702563
    after:  20702429

Signed-off-by: Mateusz Guzik <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christian Brauner <[email protected]>
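
A minimal sketch of the spin loop in question (illustrative, not the verbatim kernel macro); the point of the patch is the unlikely() around the odd-count test:

    /*
     * Writers hold the count odd; spinning is the rare case, so tell the
     * compiler the common path is to fall straight through.
     */
    #define sketch_read_seqcount_begin(s)                               \
    ({                                                                  \
            unsigned int __seq;                                         \
                                                                        \
            while (unlikely((__seq = READ_ONCE((s)->sequence)) & 1))   \
                    cpu_relax();                                        \
                                                                        \
            __seq;                                                      \
    })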

96450ead | 22-Nov-2024 | Suren Baghdasaryan <[email protected]>
seqlock: add raw_seqcount_try_begin

Add raw_seqcount_try_begin() to open a read critical section of the given seqcount_t if the counter is even. This enables eliding the critical section entirely if the counter is odd, instead of doing the speculation knowing it will fail.

Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

183ec5f2 | 04-Nov-2024 | Marco Elver <[email protected]>
kcsan, seqlock: Fix incorrect assumption in read_seqbegin()

During testing of the preceding changes, I noticed that in some cases, current->kcsan_ctx.in_flat_atomic remained true until task exit. This is obviously wrong, because _all_ accesses for the given task will then be treated as atomic, resulting in false negatives, i.e. missed data races.
Debugging led to fs/dcache.c, where we can see this usage of seqlock:

    struct dentry *d_lookup(const struct dentry *parent, const struct qstr *name)
    {
            struct dentry *dentry;
            unsigned seq;

            do {
                    seq = read_seqbegin(&rename_lock);
                    dentry = __d_lookup(parent, name);
                    if (dentry)
                            break;
            } while (read_seqretry(&rename_lock, seq));
            [...]
As can be seen, read_seqretry() is never called if dentry != NULL; consequently, current->kcsan_ctx.in_flat_atomic will never be reset to false by read_seqretry().
Give up on the wrong assumption that a read_seqbegin() is always eventually closed by a matching read_seqretry(), and rely on the already-present annotations in read_seqcount_begin/retry().

Fixes: 88ecd153be95 ("seqlock, kcsan: Add annotations for KCSAN")
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

5c1806c4 | 04-Nov-2024 | Marco Elver <[email protected]>
kcsan, seqlock: Support seqcount_latch_t

While fuzzing an arm64 kernel, Alexander Potapenko reported:

    | BUG: KCSAN: data-race in ktime_get_mono_fast_ns / timekeeping_update
    |
    | write to 0xffffffc082e74248 of 56 bytes by interrupt on cpu 0:
    |  update_fast_timekeeper kernel/time/timekeeping.c:430 [inline]
    |  timekeeping_update+0x1d8/0x2d8 kernel/time/timekeeping.c:768
    |  timekeeping_advance+0x9e8/0xb78 kernel/time/timekeeping.c:2344
    |  update_wall_time+0x18/0x38 kernel/time/timekeeping.c:2360
    | [...]
    |
    | read to 0xffffffc082e74258 of 8 bytes by task 5260 on cpu 1:
    |  __ktime_get_fast_ns kernel/time/timekeeping.c:372 [inline]
    |  ktime_get_mono_fast_ns+0x88/0x174 kernel/time/timekeeping.c:489
    |  init_srcu_struct_fields+0x40c/0x530 kernel/rcu/srcutree.c:263
    |  init_srcu_struct+0x14/0x20 kernel/rcu/srcutree.c:311
    | [...]
    |
    | value changed: 0x000002f875d33266 -> 0x000002f877416866
    |
    | Reported by Kernel Concurrency Sanitizer on:
    | CPU: 1 UID: 0 PID: 5260 Comm: syz.2.7483 Not tainted 6.12.0-rc3-dirty #78
This is a false positive data race between a seqcount latch writer and a reader accessing stale data. Since its introduction, KCSAN has never understood the seqcount_latch interface (due to being unannotated).
Unlike the regular seqlock interface, the seqcount_latch interface for latch writers has never had a well-defined critical section, making it difficult to teach tooling where the critical section starts and ends.
Introduce an instrumentable (non-raw) seqcount_latch interface, with which we can clearly denote writer critical sections. This both helps readability and tooling like KCSAN to understand when the writer is done updating all latch copies.

Fixes: 88ecd153be95 ("seqlock, kcsan: Add annotations for KCSAN")
Reported-by: Alexander Potapenko <[email protected]>
Co-developed-by: "Peter Zijlstra (Intel)" <[email protected]>
Signed-off-by: "Peter Zijlstra (Intel)" <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
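
A hedged sketch of a latch writer using the instrumentable interface this commit describes, assuming the non-raw entry points are named write_seqcount_latch_begin()/write_seqcount_latch()/write_seqcount_latch_end() and bracket the two copy updates (struct and field names hypothetical):

    #include <linux/seqlock.h>

    struct latched_data {
            seqcount_latch_t seq;
            u64 copy[2];    /* readers select copy[seq & 1] */
    };

    static void latched_update(struct latched_data *d, u64 val)
    {
            write_seqcount_latch_begin(&d->seq);    /* count odd: readers use copy[1] */
            d->copy[0] = val;
            write_seqcount_latch(&d->seq);          /* count even: readers use copy[0] */
            d->copy[1] = val;
            write_seqcount_latch_end(&d->seq);      /* writer critical section ends */
    }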

d0dd066a | 12-Jun-2024 | Christoph Lameter (Ampere) <[email protected]>
seqcount: replace smp_rmb() in read_seqcount() with load acquire

Many architectures support load acquire, which can replace a memory barrier and save some cycles.

A typical sequence

    do {
            seq = read_seqcount_begin(&s);
            <something>
    } while (read_seqcount_retry(&s, seq));
requires 13 cycles on an N1 Neoverse arm64 core (an Ampere Altra, to be specific) for an empty loop. Two read memory barriers are needed, one for each of the seqcount_* functions.
We can replace the first read barrier with a load acquire of the seqcount, which saves us one barrier.
On the Altra doing so reduces the cycle count from 13 to 8.
According to ARM, this is a general improvement for the ARM64 architecture and not specific to a certain processor.
See
https://developer.arm.com/documentation/102336/0100/Load-Acquire-and-Store-Release-instructions
"Weaker ordering requirements that are imposed by Load-Acquire and Store-Release instructions allow for micro-architectural optimizations, which could reduce some of the performance impacts that are otherwise imposed by an explicit memory barrier.
If the ordering requirement is satisfied using either a Load-Acquire or Store-Release, then it would be preferable to use these instructions instead of a DMB"
[ NOTE! This is my original minimal patch that unconditionally switches over to using smp_load_acquire(), instead of the much more involved and subtle patch that Christoph Lameter wrote that made it conditional.
But Christoph gets authorship credit because I had initially thought that we needed the more complex model, and Christoph ran with it and did the work. Only after looking at code generation for all the relevant architectures did I come to the conclusion that nobody actually really needs the old "smp_rmb()" model.
Even architectures without load-acquire support generally do as well or better with smp_load_acquire().
So credit to Christoph, but if this then causes issues on other architectures, put the blame solidly on me.
Also note as part of the ruthless simplification, this gets rid of the overly subtle optimization where some code uses a non-barrier version of the sequence count (see the __read_seqcount_begin() users in fs/namei.c). They then play games with their own barriers and/or with nested sequence counts.
Those optimizations are literally meaningless on x86, and questionable elsewhere. If somebody can show that they matter, we need to re-do them more cleanly than "use an internal helper". - Linus ]

Signed-off-by: Christoph Lameter (Ampere) <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Linus Torvalds <[email protected]>
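
A minimal sketch of the resulting change in the read path (illustrative, not the verbatim kernel code):

    /* Before: plain load followed by an explicit read barrier. */
    static inline unsigned int seq_begin_rmb(const seqcount_t *s)
    {
            unsigned int seq = READ_ONCE(s->sequence);

            smp_rmb();      /* order the ->sequence load before the data reads */
            return seq;
    }

    /*
     * After: smp_load_acquire() provides the same ordering as part of the
     * load itself (a Load-Acquire instruction on arm64), saving one barrier.
     */
    static inline unsigned int seq_begin_acquire(const seqcount_t *s)
    {
            return smp_load_acquire(&s->sequence);
    }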

f038cc13 | 11-Dec-2023 | Kent Overstreet <[email protected]>
locking/seqlock: Split out seqlock_types.h

Trimming down sched.h dependencies: we don't want to include more than the base types.

Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
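
A sketch of what the split enables, following the usual *_types.h convention (struct and field names here are hypothetical): a header that only embeds the structure can include seqlock_types.h for the definitions, leaving the full API header to code that actually calls the functions.

    /* some_types.h: needs the seqlock_t layout, not the seqlock API */
    #include <linux/seqlock_types.h>

    struct task_stats {
            seqlock_t lock;         /* type comes from seqlock_types.h */
            u64 runtime_ns;
    };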

184fdf9f | 17-Oct-2023 | Cuda-Chen <[email protected]>
locking/seqlock: Fix grammar in comment
The "neither writes before and after ..." for the description of do_write_seqcount_end() should be "neither writes before nor after".
Signed-off-by: Cuda-Che
locking/seqlock: Fix grammar in comment
The "neither writes before and after ..." for the description of do_write_seqcount_end() should be "neither writes before nor after".

Signed-off-by: Cuda-Chen <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

886ee55e | 13-Oct-2023 | Ingo Molnar <[email protected]>
locking/seqlock: Propagate 'const' pointers within read-only methods, remove forced type casts

Currently __seqprop_ptr() is an inline function that must choose to either use 'const' or non-const seqcount related pointers - but this results in the undesirable loss of 'const' propagation, via a forced type cast.
The easiest solution would be to turn the pointer wrappers into macros that pass through whatever type is passed to them - but the clever maze of seqlock API instantiation macros relies on the GCC CPP '##' macro extension, which isn't recursive, so inline functions must be used here.
So create two wrapper variants instead: 'ptr' and 'const_ptr', and pick the right one for the codepaths that are const: read_seqcount_begin() and read_seqcount_retry().
This cleans up type handling and allows the removal of all type forcing.
No change in functionality.

Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Oleg Nesterov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Paul E. McKenney <[email protected]>
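
A hedged sketch of the two wrapper variants the commit describes (the in-tree versions also cover the seqcount_LOCKNAME_t cases):

    /* Non-const variant, used by the write-side helpers. */
    static inline seqcount_t *__seqprop_ptr(seqcount_t *s)
    {
            return s;
    }

    /*
     * Const variant, picked for read-only paths such as
     * read_seqcount_begin() and read_seqcount_retry(), so 'const'
     * propagates without a forced cast.
     */
    static inline const seqcount_t *__seqprop_const_ptr(const seqcount_t *s)
    {
            return s;
    }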

e6115c6f | 12-Oct-2023 | Oleg Nesterov <[email protected]>
locking/seqlock: Change __seqprop() to return the function pointer

This simplifies the macro and makes it easy to add new seqprops with 2 or more args.
Plus this way we do not lose the type info, and the (void *) type cast is no longer needed.
And the latter reveals the problem: a lot of seqcount_t helpers pass the "const seqcount_t *s" argument to __seqprop_ptr(seqcount_t *s) but (before this patch) "(void *)(s)" masked the problem.
So this patch changes __seqprop_ptr() and __seqprop_##lockname##_ptr() to accept the "const LOCKNAME *s" argument. This is not nice either, they need to drop the constness on return because these helpers are used by both the readers and writers, but at least it is clear what's going on.

Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
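
A self-contained sketch of the pattern (hypothetical type and function names, not the kernel's macro): _Generic selects a function pointer, and the call is applied afterwards, so the argument keeps its type and constness instead of being cast through (void *). (C17 _Generic drops qualifiers from the controlling expression, so the const object below also matches struct seq_a.)

    #include <stdio.h>

    struct seq_a { unsigned int seq; };
    struct seq_b { unsigned int seq; int extra; };

    static unsigned int a_sequence(const struct seq_a *s) { return s->seq; }
    static unsigned int b_sequence(const struct seq_b *s) { return s->seq; }

    /*
     * _Generic yields the function pointer; (s) then applies it, so no
     * type information is lost and no (void *) cast is needed.
     */
    #define seqprop_sequence(s)                 \
            _Generic(*(s),                      \
                    struct seq_a: a_sequence,   \
                    struct seq_b: b_sequence)(s)

    int main(void)
    {
            const struct seq_a a = { .seq = 2 };
            struct seq_b b = { .seq = 4 };

            printf("%u %u\n", seqprop_sequence(&a), seqprop_sequence(&b));
            return 0;
    }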

f995443f | 12-Oct-2023 | Oleg Nesterov <[email protected]>
locking/seqlock: Simplify SEQCOUNT_LOCKNAME()
1. Kill the "lockmember" argument. It is always s->lock plus __seqprop_##lockname##_sequence() already uses s->lock and ignores "lockmember".
2.
locking/seqlock: Simplify SEQCOUNT_LOCKNAME()
1. Kill the "lockmember" argument. It is always s->lock plus __seqprop_##lockname##_sequence() already uses s->lock and ignores "lockmember".
2. Kill the "lock_acquire" argument. __seqprop_##lockname##_sequence() can use the same "lockbase" prefix for _lock and _unlock.
Apart from line numbers, gcc -E outputs the same code.

Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

Revision tags: v6.5-rc3

0cff993e | 20-Jul-2023 | [email protected] <[email protected]>
locking/seqlock: Fix typo in comment

s/the the /the
[ mingo: Cleaned up the changelog. ]

Signed-off-by: Zizhen Pang <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

41b43b6c | 20-Sep-2023 | Sebastian Andrzej Siewior <[email protected]>
locking/seqlock: Do the lockdep annotation before locking in do_write_seqcount_begin_nested()

It was brought up by Tetsuo that the following sequence:

    write_seqlock_irqsave()
    printk_deferred_enter()
could lead to a deadlock if the lockdep annotation within write_seqlock_irqsave() triggers.
The problem is that the sequence counter is incremented before the lockdep annotation is performed. The lockdep splat would then attempt to invoke printk() but the reader side, of the same seqcount, could have a tty_port::lock acquired waiting for the sequence number to become even again.
The other lockdep annotations come before the actual locking because "we want to see the locking error before it happens". There is no reason why seqcount should be different here.
Do the lockdep annotation first then perform the locking operation (the sequence increment).

Fixes: 1ca7d67cf5d5a ("seqcount: Add lockdep functionality to seqcount/seqlock structures")
Reported-by: Tetsuo Handa <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Closes: https://lore.kernel.org/[email protected]
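
A hedged sketch of the resulting order, modelled on the commit description (helper names follow the internal API; details may differ):

    static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
    {
            /*
             * Annotate first: if lockdep splats here, the counter is still
             * even, so readers are not spinning and printk() can progress.
             */
            seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);

            /* Only then make the counter odd, opening the write section. */
            do_raw_write_seqcount_begin(s);
    }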

Revision tags: v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3

d16317de | 19-May-2023 | Peter Zijlstra <[email protected]>
seqlock/latch: Provide raw_read_seqcount_latch_retry()

The read side of seqcount_latch consists of:

    do {
            seq = raw_read_seqcount_latch(&latch->seq);
            ...
    } while (read_seqcount_latch_retry(&latch->seq, seq));
which is asymmetric in the raw_ department, and sure enough, read_seqcount_latch_retry() includes (explicit) instrumentation where raw_read_seqcount_latch() does not.
This inconsistency becomes a problem when trying to use it from noinstr code. As such, fix it by renaming and re-implementing raw_read_seqcount_latch_retry() without the instrumentation.
Specifically, the instrumentation in question is kcsan_atomic_next(0) in do___read_seqcount_retry(). Losing this annotation is not a problem because raw_read_seqcount_latch() does not pass through kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX).

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Tested-by: Michael Kelley <[email protected]>  # Hyper-V
Link: https://lore.kernel.org/r/[email protected]
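
With the rename, both sides of the loop carry the raw_ prefix; a minimal reader sketch under that assumption (struct and field names hypothetical):

    #include <linux/seqlock.h>

    struct latched_clock {
            seqcount_latch_t seq;
            u64 ns[2];      /* two copies, selected by seq & 1 */
    };

    static u64 latched_clock_read(struct latched_clock *c)
    {
            unsigned int seq;
            u64 ns;

            do {
                    seq = raw_read_seqcount_latch(&c->seq); /* uninstrumented */
                    ns = READ_ONCE(c->ns[seq & 1]);         /* stable copy */
            } while (raw_read_seqcount_latch_retry(&c->seq, seq));

            return ns;
    }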

Revision tags: v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2

e84815cb | 07-Apr-2022 | Christian König <[email protected]>
seqlock: drop seqcount_ww_mutex_t

Daniel pointed out that this series removes the last user of seqcount_ww_mutex_t, so let's drop this.

Signed-off-by: Christian König <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: [email protected]
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]

Revision tags: v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5

149876d9 | 05-Jun-2021 | Huilong Deng <[email protected]>
seqlock: Remove trailing semicolon in macros

Macros should not use a trailing semicolon.

Signed-off-by: Huilong Deng <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
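
A generic illustration (not from the patch) of why a trailing semicolon in a macro body is harmful: the expansion becomes two statements, which breaks if/else pairing at call sites that add their own semicolon.

    #define INIT_THING_BAD(x)   do { (x) = 0; } while (0);  /* trailing ';' */
    #define INIT_THING_GOOD(x)  do { (x) = 0; } while (0)

    void example(int cond, int *v)
    {
            if (cond)
                    INIT_THING_GOOD(*v);    /* expands to exactly one statement */
            else
                    *v = -1;

            /*
             * With INIT_THING_BAD the branch would expand to
             * "do { ... } while (0);;" - two statements - so the 'else'
             * no longer pairs with the 'if' and compilation fails.
             */
    }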

Revision tags: v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3

4817a52b | 09-Mar-2021 | Peter Zijlstra <[email protected]>
seqlock,lockdep: Fix seqcount_latch_init()

seqcount_init() must be a macro in order to preserve the static variable that is used for the lockdep key. Don't then wrap it in an inline function, which destroys that.
Luckily there aren't many users of this function, but fix it before it becomes a problem.

Fixes: 80793c3471d9 ("seqlock: Introduce seqcount_latch_t")
Reported-by: Eric Dumazet <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
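
A sketch of why the macro matters, mirroring the in-tree pattern: seqcount_init() declares a static lock_class_key, so every textual call site gets its own lockdep class; an inline-function wrapper would collapse all users into a single class.

    /* Each expansion gets its own static key, hence its own lockdep class. */
    #define seqcount_init(s)                                    \
            do {                                                \
                    static struct lock_class_key __key;         \
                    __seqcount_init((s), #s, &__key);           \
            } while (0)

    /* The fix: forward as a macro, not as an inline function. */
    #define seqcount_latch_init(s)  seqcount_init(&(s)->seqcount)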

Revision tags: v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7

cb262935 | 06-Dec-2020 | Ahmed S. Darwish <[email protected]>
seqlock: kernel-doc: Specify when preemption is automatically altered

The kernel-doc annotations for the sequence counter write-side functions are incomplete: they do not specify when preemption is automatically disabled and re-enabled.
This has confused a number of call-site developers. Fix it.

Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/CAHk-=wikhGExmprXgaW+MVXG1zsGpztBbVwOb23vetk41EtTBQ@mail.gmail.com

66bcfcdf | 06-Dec-2020 | Ahmed S. Darwish <[email protected]>
seqlock: Prefix internal seqcount_t-only macros with a "do_"

When the seqcount_LOCKNAME_t group of data types was introduced, two classes of seqlock.h sequence counter macros were added:
- An external public API which can either take a plain seqcount_t or any of the seqcount_LOCKNAME_t variants.
- An internal API which takes only a plain seqcount_t.
To distinguish between the two groups, the "*_seqcount_t_*" pattern was used for the latter. This confused a number of mm/ call-site developers, and Linus also commented that it was not a standard practice for marking seqlock.h internal APIs.
Distinguish the latter group of macros by prefixing a "do_".

Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/CAHk-=wikhGExmprXgaW+MVXG1zsGpztBbVwOb23vetk41EtTBQ@mail.gmail.com

Revision tags: v5.10-rc6, v5.10-rc5, v5.10-rc4

ab440b2c | 10-Nov-2020 | Peter Zijlstra <[email protected]>
seqlock: Rename __seqprop() users

More consistent naming should make it easier to untangle the _Generic token pasting maze called __seqprop().

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

Revision tags: v5.10-rc3, v5.10-rc2

a07c4531 | 26-Oct-2020 | Arnd Bergmann <[email protected]>
seqlock: avoid -Wshadow warnings

When building with W=2, there is a flood of warnings about the seqlock macros shadowing local variables:

    19806 linux/seqlock.h:331:11: warning: declaration of 'seq' shadows a previous local [-Wshadow]
       48 linux/seqlock.h:348:11: warning: declaration of 'seq' shadows a previous local [-Wshadow]
        8 linux/seqlock.h:379:11: warning: declaration of 'seq' shadows a previous local [-Wshadow]
Prefix the local variables to make the warning useful elsewhere again.

Fixes: 52ac39e5db51 ("seqlock: seqcount_t: Implement all read APIs as statement expressions")
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
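
A generic illustration of the problem and the fix (hypothetical names, not the kernel's macros): a statement-expression macro whose local is named 'seq' shadows a caller's 'seq'; prefixing the internal variable avoids the warning flood.

    struct counter { unsigned int sequence; };

    static inline unsigned int load_count(const struct counter *s)
    {
            return s->sequence;
    }

    /* Shadows 'seq' at any call site that writes: seq = begin_bad(&c); */
    #define begin_bad(s)                        \
    ({                                          \
            unsigned int seq = load_count(s);   \
            seq;                                \
    })

    /* Prefixed local: no collision with the callers' variables. */
    #define begin_good(s)                           \
    ({                                              \
            unsigned int __sc_seq = load_count(s);  \
            __sc_seq;                               \
    })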

Revision tags: v5.10-rc1

ed3e4537 | 13-Oct-2020 | Mauro Carvalho Chehab <[email protected]>
locking/seqlocks: Fix kernel-doc warnings

Right now, seqlock.h produces kernel-doc warnings:

    ./include/linux/seqlock.h:181: error: Cannot parse typedef!
Convert it to a plain comment to avoid confusing kernel-doc.

Fixes: a8772dccb2ec ("seqlock: Fold seqcount_LOCKNAME_t definition")
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/a59144cdaadf7fdf1fe5d55d0e1575abbf1c0cb3.1602590106.git.mchehab+huawei@kernel.org

Revision tags: v5.9, v5.9-rc8, v5.9-rc7

24a18772 | 24-Sep-2020 | Sebastian Andrzej Siewior <[email protected]>
locking/seqlock: Tweak DEFINE_SEQLOCK() kernel doc

ctags creates a warning:

    |ctags: Warning: include/linux/seqlock.h:738: null expansion of name pattern "\2"
The DEFINE_SEQLOCK() macro is passed to ctags, which is being told to expect an argument.
Add a dummy argument to keep ctags quiet.

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

Revision tags: v5.9-rc6

267580db | 15-Sep-2020 | [email protected] <[email protected]>
seqlock: Unbreak lockdep

seqcount_LOCKNAME_init() needs to be a macro due to the lockdep annotation in seqcount_init(). Since a macro cannot define another macro, we need to effectively revert commit: e4e9ab3f9f91 ("seqlock: Fold seqcount_LOCKNAME_init() definition").

Fixes: e4e9ab3f9f91 ("seqlock: Fold seqcount_LOCKNAME_init() definition")
Reported-by: Qian Cai <[email protected]>
Debugged-by: Boqun Feng <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Qian Cai <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]