|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6 |
|
| #
d12157ef |
| 05-Jun-2023 |
Mark Rutland <[email protected]> |
locking/atomic: make atomic*_{cmp,}xchg optional
Most architectures define the atomic/atomic64 xchg and cmpxchg operations in terms of arch_xchg and arch_cmpxchg respectively.
Add fallbacks for these cases and remove the trivial cases from arch code. On some architectures the existing definitions are kept as these are used to build other arch_atomic*() operations.
Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected]
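For context, the added fallbacks have roughly this shape — a hedged sketch modelled on the kernel's generated fallback headers, not the verbatim code:

    /*
     * Sketch: when an architecture does not provide arch_atomic_xchg(),
     * common code defines it in terms of arch_xchg() on the counter.
     * Modelled on the generated fallbacks; illustrative, not verbatim.
     */
    #ifndef arch_atomic_xchg
    static __always_inline int
    arch_atomic_xchg(atomic_t *v, int new)
    {
            return arch_xchg(&v->counter, new);
    }
    #define arch_atomic_xchg arch_atomic_xchg
    #endif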
|
|
Revision tags: v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6 |
|
| #
301014cf |
| 11-May-2020 |
Vineet Gupta <[email protected]> |
ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants
And move them out of cmpxchg.h to canonical atomic.h
Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
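The relaxed variants are thin forwards to the corresponding relaxed primitives on the counter, with no extra ordering — a sketch in the spirit of the patch, not the verbatim ARC code:

    #define arch_atomic_cmpxchg_relaxed(v, o, n) \
            arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
    #define arch_atomic_xchg_relaxed(v, n) \
            arch_xchg_relaxed(&((v)->counter), (n))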
|
| #
b0f839b4 |
| 04-Aug-2021 |
Vineet Gupta <[email protected]> |
ARC: atomics: disintegrate header
Non-functional change, to ease future addition/removal
Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
|
| #
6db5d993 |
| 25-May-2021 |
Mark Rutland <[email protected]> |
locking/atomic: arc: move to ARCH_ATOMIC
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers).
As a step towards that, this patch migrates arc to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions.
Signed-off-by: Mark Rutland <[email protected]> Acked-by: Vineet Gupta <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
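How the wrapping works, sketched in the spirit of include/linux/atomic-instrumented.h (illustrative, not the verbatim wrapper):

    static __always_inline int
    atomic_add_return(int i, atomic_t *v)
    {
            instrument_atomic_read_write(v, sizeof(*v));
            return arch_atomic_add_return(i, v);
    }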
|
| #
6364d1b4 |
| 06-Oct-2020 |
Randy Dunlap <[email protected]> |
arc: include/asm: fix typos of "themselves"
Fix copy/paste spello of "themselves" in 3 places.
Signed-off-by: Randy Dunlap <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: [email protected] Signed-off-by: Vineet Gupta <[email protected]>
|
| #
dd7c7ab0 |
| 26-Aug-2020 |
Vineet Gupta <[email protected]> |
ARC: [plat-eznps]: Drop support for EZChip NPS platform
NPS customers are no longer doing active development, as evident from randconfig build failures reported in recent times, so drop support for the NPS platform.
Tested-by: kernel test robot <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
|
| #
7ca8cf53 |
| 29-Jul-2020 |
Herbert Xu <[email protected]> |
locking/atomic: Move ATOMIC_INIT into linux/types.h
This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h. This allows users of atomic_t to use ATOMIC_INIT without having to include atomic.h as that way may lead to header loops.
Signed-off-by: Herbert Xu <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Waiman Long <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
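A minimal example of what the move enables — initializing an atomic_t in a file that only has <linux/types.h> in scope (illustrative usage, not from the patch):

    #include <linux/types.h>        /* now sufficient for ATOMIC_INIT */

    static atomic_t active_users = ATOMIC_INIT(0);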
|
|
Revision tags: v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |
|
| #
d2912cb1 |
| 04-Jun-2019 |
Thomas Gleixner <[email protected]> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Revision tags: v5.2-rc3, v5.2-rc2 |
|
| #
16fbad08 |
| 22-May-2019 |
Mark Rutland <[email protected]> |
locking/atomic, arc: Use s64 for atomic64
As a step towards making the atomic64 API use consistent types treewide, let's have the arc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than u64, matching the generated headers.
Otherwise, there should be no functional change as a result of this patch.
Acked-By: Vineet Gupta <[email protected]> Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
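For reference, the layout this converges on — the ARC variant adds 8-byte alignment for the 64-bit exclusive load/stores (a sketch per the generic headers, not verbatim):

    typedef struct {
            s64 __aligned(8) counter;
    } atomic64_t;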
|
|
Revision tags: v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2 |
|
| #
3fcbb826 |
| 30-Aug-2018 |
Will Deacon <[email protected]> |
ARC: atomics: unbork atomic_fetch_##op()
In 4.19-rc1, Eugeniy reported weird boot and IO errors on ARC HSDK
| INFO: task syslogd:77 blocked for more than 10 seconds.
|       Not tainted 4.19.0-rc1-00007-gf213acea4e88 #40
| "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
| syslogd   D    0    77     76 0x00000000
|
| Stack Trace:
|   __switch_to+0x0/0xac
|   __schedule+0x1b2/0x730
|   io_schedule+0x5c/0xc0
|   __lock_page+0x98/0xdc
|   find_lock_entry+0x38/0x100
|   shmem_getpage_gfp.isra.3+0x82/0xbfc
|   shmem_fault+0x46/0x138
|   handle_mm_fault+0x5bc/0x924
|   do_page_fault+0x100/0x2b8
|   ret_from_exception+0x0/0x8
He bisected to 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()")
This commit however only unmasked the real issue introduced by commit 4aef66c8ae9 ("locking/atomic, arch/arc: Fix build") which missed the retry-if-scond-failed branch in atomic_fetch_##op() macros.
The bisected commit started using atomic_fetch_##op() macros for building the rest of atomics.
Fixes: 4aef66c8ae9 ("locking/atomic, arch/arc: Fix build") Reported-by: Eugeniy Paltsev <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Vineet Gupta <[email protected]> [vgupta: wrote changelog]
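The essence of the fix is looping back when the store-conditional fails. Expressed with compiler builtins rather than ARC llock/scond assembly — a sketch of the pattern only, not the kernel code:

    /* LL/SC retry pattern: loop back when the conditional store (here a
     * weak CAS) fails; on failure the builtin refreshes 'orig'. */
    static inline int sketch_fetch_add(int *counter, int i)
    {
            int orig = __atomic_load_n(counter, __ATOMIC_RELAXED);

            while (!__atomic_compare_exchange_n(counter, &orig, orig + i,
                                                1 /* weak */,
                                                __ATOMIC_SEQ_CST,
                                                __ATOMIC_RELAXED))
                    ; /* retry, as the missing 'bnz 1b' branch did */
            return orig;
    }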
|
|
Revision tags: v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2 |
|
| #
7cc7eaad |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Clean up '*_andnot()' ifdeffery
The ifdeffery for atomic*_{fetch_,}andnot() is unlike that for all the other atomics. If atomic*_andnot() is not defined, the corresponding atomic*_fetch_andnot() is assumed to not be defined.
Additionally, the fallbacks for the various ordering cases are written much later in atomic.h as static inlines.
This isn't problematic today, but gets in the way of scripting the generation of atomics. To prepare for scripting, this patch:
* Switches to separate ifdefs for atomic*_andnot() and atomic*_fetch_andnot(), updating implementations as appropriate.
* Moves the fallbacks into the standard ifdefs, as macro expansions rather than static inlines.
* Removes trivial andnot implementations from architectures, where these are superseded by core code.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
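The separate-ifdef style the patch moves to looks roughly like this — macro-expansion fallbacks guarded by their own symbols (modelled on the kernel's fallbacks, not verbatim):

    #ifndef atomic_andnot
    #define atomic_andnot(i, v)             atomic_and(~(i), (v))
    #endif

    #ifndef atomic_fetch_andnot
    #define atomic_fetch_andnot(i, v)       atomic_fetch_and(~(i), (v))
    #endif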
|
| #
b3a2a05f |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Make conditional inc/dec ops optional
The conditional inc/dec ops differ for atomic_t and atomic64_t:
- atomic_inc_unless_positive() is optional for atomic_t, and doesn't exist for atomic64_t.
- atomic_dec_unless_negative() is optional for atomic_t, and doesn't exist for atomic64_t.
- atomic_dec_if_positive() is optional for atomic_t, and is mandatory for atomic64_t.
Let's make these consistently optional for both. At the same time, let's clean up the existing fallbacks to use atomic_try_cmpxchg().
The instrumented atomics are updated accordingly.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
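One of the cleaned-up try_cmpxchg()-based fallbacks, sketched close to (but not claiming to be) the kernel code:

    static inline int atomic_dec_if_positive(atomic_t *v)
    {
            int dec, c = atomic_read(v);

            do {
                    dec = c - 1;
                    if (unlikely(dec < 0))
                            break;
                    /* on CAS failure, c is refreshed with the current value */
            } while (!atomic_try_cmpxchg(v, &c, dec));

            return dec;
    }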
|
| #
9837559d |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Make unconditional inc/dec ops optional
Many of the inc/dec ops are mandatory, but for most architectures inc/dec are simply trivial wrappers around their corresponding add/sub ops.
Let's make all the inc/dec ops optional, so that we can get rid of these boilerplate wrappers.
The instrumented atomics are updated accordingly.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
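With inc/dec optional, the boilerplate wrappers reduce to default fallbacks of this shape (a sketch modelled on the kernel's, not verbatim):

    #ifndef atomic_inc
    #define atomic_inc(v)   atomic_add(1, (v))
    #endif

    #ifndef atomic_dec
    #define atomic_dec(v)   atomic_sub(1, (v))
    #endif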
|
| #
18cc1814 |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Make test ops optional
Some of the atomics return the result of a test applied after the atomic operation, and almost all architectures implement these as trivial wrappers around the underlying atomic. Specifically:
* <atomic>_inc_and_test(v)    is (<atomic>_inc_return(v)    == 0)
* <atomic>_dec_and_test(v)    is (<atomic>_dec_return(v)    == 0)
* <atomic>_sub_and_test(i, v) is (<atomic>_sub_return(i, v) == 0)
* <atomic>_add_negative(i, v) is (<atomic>_add_return(i, v) <  0)
Rather than have these definitions duplicated in all architectures, with minor inconsistencies in formatting and documentation, let's make these operations optional, with default fallbacks as above. Implementations must now provide a preprocessor symbol.
The instrumented atomics are updated accordingly.
Both x86 and m68k have custom implementations, which are left as-is, given preprocessor symbols to avoid being overridden.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
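The opt-out convention works by defining the matching preprocessor symbol, so the generic fallback is not emitted — an illustrative sketch, not taken from the x86 or m68k code:

    static inline bool atomic_inc_and_test(atomic_t *v)
    {
            return atomic_inc_return(v) == 0;
    }
    #define atomic_inc_and_test atomic_inc_and_test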
|
| #
ab0b9104 |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/arc: Define atomic64_fetch_add_unless()
As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/arc implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless().
A wrapper in <linux/atomic.h> will build atomic_add_unless() atop of this, provided it is given a preprocessor definition.
No functional change is intended as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vineet Gupta <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
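The <linux/atomic.h> wrapper described builds the boolean form atop the fetch form, roughly (a sketch, not verbatim):

    static inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
    {
            return atomic64_fetch_add_unless(v, a, u) != u;
    }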
|
| #
eccc2da8 |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Make atomic_fetch_add_unless() optional
Several architectures have a near-identical implementation based on atomic_read() and atomic_cmpxchg() which we can instead define in <linux/atomic.h>, so let's do so, using something close to the existing x86 implementation with try_cmpxchg().
Where an architecture provides its own atomic_fetch_add_unless(), it must define a preprocessor symbol for it. The instrumented atomics are updated accordingly.
Note that arch/arc's existing atomic_fetch_add_unless() had redundant barriers, as these are already present in its atomic_cmpxchg() implementation.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vineet Gupta <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
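The common fallback is a read/try_cmpxchg loop close to the x86 style mentioned — a sketch, not claiming to be the exact kernel code:

    static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
    {
            int c = atomic_read(v);

            do {
                    if (unlikely(c == u))
                            break;
            } while (!atomic_try_cmpxchg(v, &c, c + a));

            return c;
    }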
|
| #
bef82820 |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Make atomic64_inc_not_zero() optional
We define a trivial fallback for atomic_inc_not_zero(), but don't do the same for atomic64_inc_not_zero(), leading most architectures to define the same boilerplate.
Let's add a fallback in <linux/atomic.h>, and remove the redundant implementations. Note that atomic64_add_unless() is always defined in <linux/atomic.h>, and promotes its arguments to the requisite types, so we need not do this explicitly.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
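The added fallback amounts to the following — atomic64_add_unless() promotes its arguments, so no casts are needed (illustrative sketch):

    #ifndef atomic64_inc_not_zero
    #define atomic64_inc_not_zero(v)        atomic64_add_unless((v), 1, 0)
    #endif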
|
| #
8b47038e |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Remove redundant atomic_inc_not_zero() definitions
When atomic_inc_not_zero(v) isn't defined, <linux/atomic.h> will define it as falling back to atomic_add_unless((v), 1, 0), so there's no need for arch code to do so.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
| #
bfc18e38 |
| 21-Jun-2018 |
Mark Rutland <[email protected]> |
atomics/treewide: Rename __atomic_add_unless() => atomic_fetch_add_unless()
While __atomic_add_unless() was originally intended as a building-block for atomic_add_unless(), it's now used in a number of places around the kernel. It's the only common atomic operation named __atomic*(), rather than atomic_*(), and for consistency it would be better named atomic_fetch_add_unless().
This lack of consistency is slightly confusing, and gets in the way of scripting atomics. Given that, let's clean things up and promote it to an official part of the atomics API, in the form of atomic_fetch_add_unless().
This patch converts definitions and invocations over to the new name, including the instrumented version, using the following script:
----
git grep -w __atomic_add_unless | while read line; do
    sed -i '{s/\<__atomic_add_unless\>/atomic_fetch_add_unless/}' "${line%%:*}";
done

git grep -w __arch_atomic_add_unless | while read line; do
    sed -i '{s/\<__arch_atomic_add_unless\>/arch_atomic_fetch_add_unless/}' "${line%%:*}";
done
----
Note that we do not have atomic{64,_long}_fetch_add_unless(), which will be introduced by later patches.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Will Deacon <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
|
Revision tags: v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6, v4.12-rc5 |
|
| #
9d664c0a |
| 09-Jun-2017 |
Peter Zijlstra <[email protected]> |
locking/atomic: Fix atomic_set_release() for 'funny' architectures
Those architectures that have a special atomic_set implementation also need a special atomic_set_release(), because for the very same reason WRITE_ONCE() is broken for them, smp_store_release() is too.
The vast majority is architectures that have spinlock hash based atomic implementation except hexagon which seems to have a hardware 'feature'.
The spinlock based atomics should be SC, that is, none of them appear to place extra barriers in atomic_cmpxchg() or any of the other SC atomic primitives and therefore seem to rely on their spinlock implementation being SC (I did not fully validate all that).
Therefore, the normal atomic_set() is SC and can be used as atomic_set_release().
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Chris Metcalf <[email protected]> [for tile] Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Paul McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
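For such architectures the fix amounts to routing the release variant through atomic_set(), since a plain smp_store_release() would bypass the lock — an illustrative sketch:

    /* spinlock-hash atomics are SC, so atomic_set() is release-safe */
    #define atomic_set_release(v, i)        atomic_set((v), (i))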
|
|
Revision tags: v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6 |
|
| #
6492f09e |
| 04-Apr-2017 |
Noam Camus <[email protected]> |
ARC: [plat-eznps] Fix build error
Make ATOMIC_INIT available for all ARC platforms (including plat-eznps)
Cc: <[email protected]> # 4.9+ Signed-off-by: Noam Camus <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
|
|
Revision tags: v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8, v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1, v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8 |
|
| #
ce0f4932 |
| 19-Sep-2016 |
Noam Camus <[email protected]> |
ARC: [plat-eznps] add missing atomic_fetch_xxx operations
Build breakage since the last changes to the generic atomic operations. Added a couple of missing macros which are now mandatory.
Signed-off-by: Noam Camus <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
|
|
Revision tags: v4.8-rc7, v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7, v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3, v4.7-rc2, v4.7-rc1, v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5, v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1, v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5, v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5 |
|
| #
ce636527 |
| 27-Jul-2015 |
Vineet Gupta <[email protected]> |
ARCv2: Implement atomic64 based on LLOCKD/SCONDD instructions
ARCv2 ISA provides 64-bit exclusive load/stores so use them to implement the 64-bit atomics and elide the spinlock based generic 64-bit atomics
boot tested with atomic64 self-test (and GOD bless the person who wrote them, I realized my inline assembly is sloppy as hell)
Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Vineet Gupta <[email protected]>
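A hedged sketch of the llockd/scondd retry loop for a 64-bit add, in the style of the ARCv2 code — mnemonics, operand modifiers, and constraints are illustrative, not the verbatim kernel source:

    static inline void sketch_atomic64_add(s64 a, atomic64_t *v)
    {
            s64 val;

            __asm__ __volatile__(
            "1:                             \n"
            "       llockd  %0, [%1]        \n"     /* 64-bit exclusive load */
            "       add.f   %L0, %L0, %L2   \n"     /* low word, sets carry */
            "       adc     %H0, %H0, %H2   \n"     /* high word with carry */
            "       scondd  %0, [%1]        \n"     /* conditional 64-bit store */
            "       bnz     1b              \n"     /* retry if reservation lost */
            : "=&r"(val)
            : "r"(&v->counter), "ir"(a)
            : "cc");
    }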
|
| #
4aef66c8 |
| 17-Jun-2016 |
Peter Zijlstra <[email protected]> |
locking/atomic, arch/arc: Fix build
Resolve conflict between commits:
fbffe892e525 ("locking/atomic, arch/arc: Implement atomic_fetch_{add,sub,and,andnot,or,xor}()")
and:
ed6aefed726a ("Revert "ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff"")
Reported-by: Guenter Roeck <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Nigel Topham <[email protected]> Cc: Noam Camus <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
| #
b53d6bed |
| 17-Apr-2016 |
Peter Zijlstra <[email protected]> |
locking/atomic: Remove linux/atomic.h:atomic_fetch_or()
Since all architectures have this implemented now natively, remove this dead code.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
|