Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2 |

8a3e5975 | 11-Sep-2023 | NeilBrown <[email protected]>
llist: add llist_del_first_this()
llist_del_first_this() deletes a specific entry from an llist, provided it is at the head of the list. Multiple threads can call this concurrently, provided they each offer a different entry.
This can be used for a set of worker threads which are on the llist when they are idle. The head can always be woken, and when it is woken it can remove itself, and possibly wake the next if there is an excess of work to do.
Signed-off-by: NeilBrown <[email protected]> Signed-off-by: Chuck Lever <[email protected]>
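
A minimal sketch of the described semantics, assuming the llist_head/llist_node types from llist.h and the kernel's smp_load_acquire()/try_cmpxchg() primitives; the actual patch may differ in detail:

	struct llist_node *llist_del_first_this(struct llist_head *head,
						struct llist_node *this)
	{
		struct llist_node *entry, *next;

		/* Only succeed when "this" is currently the head of the list. */
		entry = smp_load_acquire(&head->first);
		do {
			if (entry != this)
				return NULL;
			next = READ_ONCE(entry->next);
		} while (!try_cmpxchg(&head->first, &entry, next));

		return entry;
	}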

d6b3358a | 11-Sep-2023 | NeilBrown <[email protected]>
llist: add interface to check if a node is on a list.
With list.h lists, it is easy to test if a node is on a list, providing it was initialised and that it is removed with list_del_init().
This patch provides similar functionality for llist.h lists.
init_llist_node() marks a node as being not-on-any-list by setting the ->next pointer to the node itself. llist_on_list() tests if the node is on any list. llist_del_first_init() removes the first element from an llist and marks it as being off-list.
Signed-off-by: NeilBrown <[email protected]> Signed-off-by: Chuck Lever <[email protected]>
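
A sketch of the three helpers as described, assuming llist_del_first() is available; details may differ from the patch:

	static inline void init_llist_node(struct llist_node *node)
	{
		node->next = node;		/* points at itself: not on any list */
	}

	static inline bool llist_on_list(const struct llist_node *node)
	{
		return node->next != node;
	}

	static inline struct llist_node *llist_del_first_init(struct llist_head *head)
	{
		struct llist_node *n = llist_del_first(head);

		if (n)
			init_llist_node(n);	/* mark the removed node as off-list */
		return n;
	}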
Revision tags: v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1 |

50b09d61 | 09-Nov-2021 | Andy Shevchenko <[email protected]>
include/linux/llist.h: replace kernel.h with the necessary inclusions
When kernel.h is used in headers it adds a lot to dependency hell, especially when circular dependencies are involved.
Replace kernel.h inclusion with the list of what is really being used.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Brendan Higgins <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Laurent Pinchart <[email protected]> Cc: Mauro Carvalho Chehab <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Sakari Ailus <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Thorsten Leemhuis <[email protected]> Cc: Waiman Long <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
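
The shape of the change, sketched as a diff; the replacement headers shown here are an assumption, the real set is whatever llist.h actually needs:

	-#include <linux/kernel.h>
	+#include <linux/container_of.h>
	+#include <linux/stddef.h>
	+#include <linux/types.h>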
Revision tags: v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3 |

476c5818 | 29-Aug-2020 | Peter Zijlstra <[email protected]>
llist: Add nonatomic __llist_add() and __llist_del_all()
We'll use these in the new, lockless kretprobes code.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Masami Hiramatsu <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/159870619318.1229682.13027387548510028721.stgit@devnote2
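
A sketch of the nonatomic variants as described: the same effect as llist_add()/llist_del_all(), but with plain loads and stores, for callers that already exclude concurrent access (details assumed):

	static inline bool __llist_add(struct llist_node *new, struct llist_head *head)
	{
		new->next = head->first;	/* plain stores: caller excludes races */
		head->first = new;
		return new->next == NULL;	/* true if the list was empty before */
	}

	static inline struct llist_node *__llist_del_all(struct llist_head *head)
	{
		struct llist_node *first = head->first;

		head->first = NULL;
		return first;
	}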
Revision tags: v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3 |

45051539 | 29-May-2019 | Thomas Gleixner <[email protected]>
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 333
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 136 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Alexios Zavras <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
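
In a C header such as llist.h, the removed boilerplate collapses to a single machine-readable line:

	/* SPDX-License-Identifier: GPL-2.0-only */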
Revision tags: v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7 |

6aa7de05 | 23-Oct-2017 | Mark Rutland <[email protected]>
locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()
Please do not apply this to mainline directly, instead please re-run the coccinelle script shown below and apply its output.
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in preference to ACCESS_ONCE(), and new code is expected to use one of the former. So far, there's been no reason to change most existing uses of ACCESS_ONCE(), as these aren't harmful, and changing them results in churn.
However, for some features, the read/write distinction is critical to correct operation. To distinguish these cases, separate read/write accessors must be used. This patch migrates (most) remaining ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following coccinelle script:
----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()
//
// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2 |

beaec533 | 19-Jul-2017 | Alexander Potapenko <[email protected]>
llist: clang: introduce member_address_is_nonnull()
Currently llist_for_each_entry() and llist_for_each_entry_safe() iterate until &pos->member != NULL. But when building the kernel with Clang, the compiler assumes &pos->member cannot be NULL if the member's offset is greater than 0 (which would be equivalent to the object being non-contiguous in memory). Therefore the loop condition is always true, and the loops become infinite.
To work around this, introduce the member_address_is_nonnull() macro, which casts the object pointer to uintptr_t, thus letting the member pointer be NULL.
Signed-off-by: Alexander Potapenko <[email protected]> Tested-by: Sodagudi Prasad <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
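
A sketch of the macro as described: doing the offset arithmetic on uintptr_t hides the pointer-plus-nonzero-offset pattern from the compiler, so the NULL check is no longer folded away:

	#define member_address_is_nonnull(ptr, member)	\
		((uintptr_t)(ptr) + offsetof(typeof(*(ptr)), member) != 0)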
Revision tags: v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6, v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1 |

d714893e | 12-May-2017 | Byungchul Park <[email protected]>
llist: Provide a safe version for llist_for_each()
Sometimes we have to dereference the next field of an llist node before entering the loop, because the node might be deleted or the next field might be modified within the loop. So this adds the safe version of llist_for_each(), that is, llist_for_each_safe().
Signed-off-by: Byungchul Park <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Huang, Ying <[email protected]> Cc: <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
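
A sketch of the safe variant: the next pointer is saved before the loop body runs, so the current node may be deleted or re-linked inside the body:

	#define llist_for_each_safe(pos, n, node)		\
		for ((pos) = (node);				\
		     (pos) && ((n) = (pos)->next, true);	\
		     (pos) = (n))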
Revision tags: v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8, v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1 |

d78973c3 | 13-Dec-2016 | Joel Fernandes <[email protected]>
llist: Clarify comments about when locking is needed
llist.h comments are confusing about when locking is needed versus when it isn't. Clarify these comments by being more descriptive about why locking is needed for llist_del_first.
Cc: Ingo Molnar <[email protected]> Cc: Will Deacon <[email protected]> Cc: Paul McKenney <[email protected]> Acked-by: Huang Ying <[email protected]> Acked-by: Mathieu Desnoyers <[email protected]> Signed-off-by: Joel Fernandes <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
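
A condensed restatement of the rule being documented (a paraphrase, not the literal comment text):

	/*
	 * llist_add()       may run concurrently with any other operation.
	 * llist_del_all()   may run concurrently with llist_add().
	 * llist_del_first() needs a lock against other llist_del_first() or
	 *                   llist_del_all() callers: between reading
	 *                   head->first and the cmpxchg(), the entry can be
	 *                   removed and re-added with a different next
	 *                   pointer (the ABA problem).
	 */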
Revision tags: v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7, v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7, v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3, v4.7-rc2, v4.7-rc1, v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5, v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1, v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5, v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6 |

cd074aea | 06-Aug-2015 | Will Deacon <[email protected]>
locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
Including an asm/ header directly is best avoided, so use linux/atomic.h instead of asm/cmpxchg.h in linux/llist.h.
Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6, v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2, v4.1-rc1, v4.0, v4.0-rc7, v4.0-rc6, v4.0-rc5, v4.0-rc4, v4.0-rc3, v4.0-rc2, v4.0-rc1, v3.19, v3.19-rc7, v3.19-rc6, v3.19-rc5, v3.19-rc4, v3.19-rc3, v3.19-rc2, v3.19-rc1, v3.18, v3.18-rc7, v3.18-rc6, v3.18-rc5, v3.18-rc4, v3.18-rc3, v3.18-rc2, v3.18-rc1, v3.17, v3.17-rc7, v3.17-rc6, v3.17-rc5, v3.17-rc4, v3.17-rc3, v3.17-rc2, v3.17-rc1, v3.16, v3.16-rc7, v3.16-rc6, v3.16-rc5, v3.16-rc4, v3.16-rc3, v3.16-rc2, v3.16-rc1, v3.15, v3.15-rc8, v3.15-rc7, v3.15-rc6, v3.15-rc5, v3.15-rc4, v3.15-rc3, v3.15-rc2, v3.15-rc1, v3.14, v3.14-rc8, v3.14-rc7, v3.14-rc6, v3.14-rc5, v3.14-rc4, v3.14-rc3, v3.14-rc2, v3.14-rc1, v3.13, v3.13-rc8, v3.13-rc7, v3.13-rc6, v3.13-rc5, v3.13-rc4, v3.13-rc3, v3.13-rc2, v3.13-rc1 |

b89241e8 | 14-Nov-2013 | Christoph Hellwig <[email protected]>
llists: move llist_reverse_order from raid5 to llist.c
Make this useful helper available for other users.
Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Jan Kara <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Neil Brown <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
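
The helper is short enough to sketch from its description: pop nodes off the input list and push them onto a new one, reversing the order:

	struct llist_node *llist_reverse_order(struct llist_node *head)
	{
		struct llist_node *new_head = NULL;

		while (head) {
			struct llist_node *tmp = head;

			head = head->next;
			tmp->next = new_head;	/* push onto the reversed list */
			new_head = tmp;
		}
		return new_head;
	}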
Revision tags: v3.12, v3.12-rc7, v3.12-rc6, v3.12-rc5, v3.12-rc4, v3.12-rc3, v3.12-rc2, v3.12-rc1, v3.11, v3.11-rc7, v3.11-rc6, v3.11-rc5, v3.11-rc4, v3.11-rc3, v3.11-rc2, v3.11-rc1, v3.10, v3.10-rc7, v3.10-rc6 |

809850b7 | 15-Jun-2013 | Peter Hurley <[email protected]>
tty: Use lockless flip buffer free list
In preparation for lockless flip buffers, make the flip buffer free list lockless.
NB: using llist is not the optimal solution, as the driver and buffer work may contend over the llist head unnecessarily. However, test measurements indicate this contention is low.
Signed-off-by: Peter Hurley <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>

e9a17bd7 | 08-Jul-2013 | Oleg Nesterov <[email protected]>
llist: llist_add() can use llist_add_batch()
llist_add(new, head) can simply use llist_add_batch(new, new, head), no need to duplicate the code.
This obviously uninlines llist_add() and to me this is a win. But we can make llist_add_batch() inline if this is desirable, in this case gcc can notice that new_first == new_last if the caller is llist_add().
Signed-off-by: Oleg Nesterov <[email protected]> Cc: Al Viro <[email protected]> Cc: Andrey Vagin <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: David Howells <[email protected]> Cc: Huang Ying <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Al Viro <[email protected]>
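
The simplification, sketched: a single-node add is a batch add whose first and last nodes coincide:

	bool llist_add(struct llist_node *new, struct llist_head *head)
	{
		return llist_add_batch(new, new, head);	/* batch of exactly one node */
	}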

fb4214db | 08-Jul-2013 | Oleg Nesterov <[email protected]>
llist: fix/simplify llist_add() and llist_add_batch()
1. This is mostly theoretical, but llist_add*() need ACCESS_ONCE().
Otherwise it is not guaranteed that the first cmpxchg() uses the same value for old_entry and new_last->next.
2. These helpers cache the result of cmpxchg() and read the initial value of head->first before the main loop. I do not think this makes sense. In the likely case cmpxchg() succeeds, otherwise it doesn't hurt to reload head->first.
I think it would be better to simplify the code and simply read ->first before cmpxchg().
Signed-off-by: Oleg Nesterov <[email protected]> Cc: Al Viro <[email protected]> Cc: Andrey Vagin <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: David Howells <[email protected]> Cc: Huang Ying <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Al Viro <[email protected]>
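
A sketch of the simplified loop being described: read ->first freshly each iteration (via ACCESS_ONCE, the primitive of the era) and retry the cmpxchg() until it succeeds:

	bool llist_add_batch(struct llist_node *new_first, struct llist_node *new_last,
			     struct llist_head *head)
	{
		struct llist_node *first;

		do {
			new_last->next = first = ACCESS_ONCE(head->first);
		} while (cmpxchg(&head->first, first, new_first) != first);

		return !first;	/* true if the list was empty before the add */
	}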
Revision tags: v3.10-rc5, v3.10-rc4, v3.10-rc3, v3.10-rc2, v3.10-rc1, v3.9, v3.9-rc8, v3.9-rc7, v3.9-rc6, v3.9-rc5, v3.9-rc4, v3.9-rc3, v3.9-rc2, v3.9-rc1, v3.8 |

f84adf49 | 13-Feb-2013 | Konrad Rzeszutek Wilk <[email protected]>
xen-blkfront: drop the use of llist_for_each_entry_safe
Replace llist_for_each_entry_safe with a while loop.
llist_for_each_entry_safe can trigger a bug in GCC 4.1, so it's best to remove it and use a while loop and do the deletion manually.
Specifically this bug can be triggered by hot-unplugging a disk, either by doing xm block-detach or by save/restore cycle.
BUG: unable to handle kernel paging request at fffffffffffffff0
IP: [<ffffffffa0047223>] blkif_free+0x63/0x130 [xen_blkfront]

The crash call trace is:
	...
	bad_area_nosemaphore+0x13/0x20
	do_page_fault+0x25e/0x4b0
	page_fault+0x25/0x30
	? blkif_free+0x63/0x130 [xen_blkfront]
	blkfront_resume+0x46/0xa0 [xen_blkfront]
	xenbus_dev_resume+0x6c/0x140
	pm_op+0x192/0x1b0
	device_resume+0x82/0x1e0
	dpm_resume+0xc9/0x1a0
	dpm_resume_end+0x15/0x30
	do_suspend+0x117/0x1e0
When drilling down to the assembler code, on newer GCC it does:

	.L29:
		cmpq	$-16, %r12	#, persistent_gnt check
		je	.L30		#, out of the loop
	.L25:
		... code in the loop
		testq	%r13, %r13	# n
		je	.L29		#, back to the top of the loop
		cmpq	$-16, %r12	#, persistent_gnt check
		movq	16(%r12), %r13	# <variable>.node.next, n
		jne	.L25		#, back to the top of the loop
	.L30:

While on GCC 4.1, it is:

	.L78:
		... code in the loop
		testq	%r13, %r13	# n
		je	.L78		#, back to the top of the loop
		movq	16(%rbx), %r13	# <variable>.node.next, n
		jmp	.L78		#, back to the top of the loop
Which basically means that the exit loop condition, instead of being:

	&(pos)->member != NULL;

is:

	;

which makes the loop unbounded.
Since xen-blkfront is the only user of the llist_for_each_entry_safe macro, remove it from llist.h.
Orabug: 16263164 CC: [email protected] Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Revision tags: v3.8-rc7, v3.8-rc6, v3.8-rc5, v3.8-rc4, v3.8-rc3, v3.8-rc2, v3.8-rc1, v3.7 |

ebb351cf | 04-Dec-2012 | Roger Pau Monne <[email protected]>
llist/xen-blkfront: implement safe version of llist_for_each_entry
Implement a safe version of llist_for_each_entry, and use it in blkif_free. Previously grants were freed while iterating the list, which led to dereferences when trying to fetch the next item.
Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Roger Pau Monné <[email protected]> Acked-by: Andrew Morton <[email protected]> [v2: Move the llist_for_each_entry_safe in llist.h] Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
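
A sketch of such an iterator (details assumed), using the &(pos)->member != NULL termination test quoted in the xen-blkfront commit above; saving n up front is what makes deleting pos safe:

	#define llist_for_each_entry_safe(pos, n, node, member)		\
		for ((pos) = llist_entry((node), typeof(*(pos)), member);	\
		     &(pos)->member != NULL &&				\
			((n) = llist_entry((pos)->member.next,		\
					   typeof(*(n)), member), true);	\
		     (pos) = (n))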
Revision tags: v3.7-rc8, v3.7-rc7, v3.7-rc6, v3.7-rc5, v3.7-rc4, v3.7-rc3, v3.7-rc2, v3.7-rc1, v3.6, v3.6-rc7, v3.6-rc6, v3.6-rc5, v3.6-rc4, v3.6-rc3, v3.6-rc2, v3.6-rc1, v3.5, v3.5-rc7, v3.5-rc6, v3.5-rc5, v3.5-rc4, v3.5-rc3, v3.5-rc2, v3.5-rc1, v3.4, v3.4-rc7, v3.4-rc6, v3.4-rc5, v3.4-rc4, v3.4-rc3, v3.4-rc2, v3.4-rc1 |

96f951ed | 28-Mar-2012 | David Howells <[email protected]>
Add #includes needed to permit the removal of asm/system.h
asm/system.h is a cause of circular dependency problems because it contains commonly used primitive stuff like barrier definitions and uncommonly used stuff like switch_to() that might require MMU definitions.
asm/system.h has been disintegrated by this point on all arches into the following common segments:
(1) asm/barrier.h
Moved memory barrier definitions here.
(2) asm/cmpxchg.h
Moved xchg() and cmpxchg() here. #included in asm/atomic.h.
(3) asm/bug.h
Moved die() and similar here.
(4) asm/exec.h
Moved arch_align_stack() here.
(5) asm/elf.h
Moved AT_VECTOR_SIZE_ARCH here.
(6) asm/switch_to.h
Moved switch_to() here.
Signed-off-by: David Howells <[email protected]>
Revision tags: v3.3, v3.3-rc7, v3.3-rc6, v3.3-rc5, v3.3-rc4, v3.3-rc3, v3.3-rc2, v3.3-rc1, v3.2, v3.2-rc7, v3.2-rc6, v3.2-rc5, v3.2-rc4, v3.2-rc3, v3.2-rc2, v3.2-rc1 |

fc23af34 | 01-Nov-2011 | Andrew Morton <[email protected]>
llist-return-whether-list-is-empty-before-adding-in-llist_add-fix
clarify comment
Cc: Huang Ying <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
Revision tags: v3.1, v3.1-rc10 |

540f41ed | 05-Oct-2011 | Stephen Rothwell <[email protected]>
llist: Add back llist_add_batch() and llist_del_first() prototypes
Commit 1230db8e1543 ("llist: Make some llist functions inline") has deleted the definitions, causing problems for (not upstream yet) code that tries to make use of them.
Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Cc: Huang Ying <[email protected]> Cc: David Miller <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
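
The restored prototypes would look like this; the signatures are assumed from the rest of the API:

	extern bool llist_add_batch(struct llist_node *new_first,
				    struct llist_node *new_last,
				    struct llist_head *head);
	extern struct llist_node *llist_del_first(struct llist_head *head);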
Revision tags: v3.1-rc9, v3.1-rc8, v3.1-rc7, v3.1-rc6 |

f0f1d32f | 12-Sep-2011 | Peter Zijlstra <[email protected]>
llist: Remove cpu_relax() usage in cmpxchg loops
Initial benchmarks show they're a net loss:
$ for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i; done
$ echo 4096 32000 64 128 > /proc/sys/kernel/sem
$ ./sembench -t 2048 -w 1900 -o 0

Pre:

	run time 30 seconds 778936 worker burns per second
	run time 30 seconds 912190 worker burns per second
	run time 30 seconds 817506 worker burns per second
	run time 30 seconds 830870 worker burns per second
	run time 30 seconds 845056 worker burns per second

Post:

	run time 30 seconds 905920 worker burns per second
	run time 30 seconds 849046 worker burns per second
	run time 30 seconds 886286 worker burns per second
	run time 30 seconds 822320 worker burns per second
	run time 30 seconds 900283 worker burns per second
So about 4% faster. (!)
cpu_relax() stalls the pipeline, therefore, when used in a tight loop it has the following benefits:
 - allows SMT siblings to have a go;
 - reduces pressure on the CPU interconnect.
However, cmpxchg loops are unfair and thus have unbounded completion time, therefore we should avoid getting in such heavily contended situations where the above benefits make any difference.
A typical cmpxchg loop should not go round more than a handful of times at worst, therefore adding extra delays just slows things down.
Since the llist primitives are new, there aren't any bad users yet, and we should avoid growing them. Heavily contended sites should generally be better off using the ticket locks for serialization since they provide bounded completion times (fifo-fair over the cpus).
Signed-off-by: Peter Zijlstra <[email protected]> Cc: Huang Ying <[email protected]> Cc: Andrew Morton <[email protected]> Link: http://lkml.kernel.org/r/1315836358.26517.43.camel@twins Signed-off-by: Ingo Molnar <[email protected]>

924f8f5a | 12-Sep-2011 | Peter Zijlstra <[email protected]>
llist: Add llist_next()
So we don't have to expose the struct llist_node member.
Cc: Huang Ying <[email protected]> Cc: Andrew Morton <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/1315836348.26517.41.camel@twins Signed-off-by: Ingo Molnar <[email protected]>
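
The accessor is trivial and sketched here: it wraps the ->next dereference so callers need not rely on struct llist_node's layout:

	static inline struct llist_node *llist_next(struct llist_node *node)
	{
		return node->next;
	}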

781f7fd9 | 08-Sep-2011 | Huang Ying <[email protected]>
llist: Return whether list is empty before adding in llist_add()
Extend the llist_add*() functions to return a success indicator, this allows us in the scheduler code to send an IPI if the queue was empty.
( There's no effect on existing users, because the llist_add_xxx() functions are inline, thus this will be optimized out by the compiler if not used by callers. )
Signed-off-by: Huang Ying <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
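
A hypothetical caller showing why the return value matters; all names here are illustrative, not kernel API:

	static void enqueue_remote_work(struct my_work *w, struct my_queue *q)
	{
		/* Only the empty -> non-empty transition needs to send an IPI. */
		if (llist_add(&w->node, &q->list))
			kick_remote_cpu(q->cpu);	/* made-up wakeup helper */
	}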

a3127336 | 08-Sep-2011 | Huang Ying <[email protected]>
llist: Move cpu_relax() to after the cmpxchg()
If the first cmpxchg() call in llist_add() and similar functions succeeds, it is not necessary to use cpu_relax() before the cmpxchg(). So cpu_relax() in a busy loop involving cmpxchg() should go after the cmpxchg() instead of before it.
This patch fixes this for all involved llist functions.
Signed-off-by: Huang Ying <[email protected]> Acked-by: Mathieu Desnoyers <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
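
A sketch of the restructured loop (shape assumed): the relax happens only after a cmpxchg() has actually failed, so the common uncontended case pays nothing:

	void llist_add(struct llist_node *new, struct llist_head *head)
	{
		struct llist_node *entry, *old;

		entry = head->first;
		for (;;) {
			new->next = entry;
			old = cmpxchg(&head->first, entry, new);
			if (old == entry)
				break;		/* succeeded: no cpu_relax() at all */
			entry = old;		/* reuse the value cmpxchg() returned */
			cpu_relax();		/* relax only after a failed attempt */
		}
	}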

2c30245c | 04-Oct-2011 | Ingo Molnar <[email protected]>
llist: Remove the platform-dependent NMI checks
Remove the nmi() checks spread around the code. in_nmi() is not available on every architecture and it's a pretty obscure and ugly check in any case.
Cc: Huang Ying <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>

1230db8e | 08-Sep-2011 | Huang Ying <[email protected]>
llist: Make some llist functions inline
Because llist code will be used in performance critical scheduler code path, make llist_add() and llist_del_all() inline to avoid function calling overhead and related 'glue' overhead.
Signed-off-by: Huang Ying <[email protected]> Acked-by: Mathieu Desnoyers <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>