Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6 |

fd476197 | 14-May-2020 | Vineet Gupta <[email protected]>

ARC: __switch_to: move ksp to thread_info from thread_struct
A task's arch-specific bits are carried in 2 places:
 - the embedded thread_struct in task_struct
 - the associated thread_info (hoisted into the task's stack page), i.e. syntactically (thread_info *)(task_struct->stack)

ksp (the dynamic kernel stack top) currently lives in thread_struct, but given its deep location in task_struct it is likely to cache miss when accessed from __switch_to(). Moving it to thread_info is more efficient given the proximity to frequently accessed items such as preempt_count, so it is very likely to be in cache already, especially in scheduler code.

Note however that the current tsk.thread.ksp takes 1 memory access (off the tsk pointer) while the new tsk->stack.ksp takes 2, though the second is likely to hit the cache. Moreover, if the task is current, the 2nd reference can be elided and ksp derived from SP instead, as (SP & ~(THREAD_SIZE - 1)).

All of this also makes the __switch_to() code simpler, and both ways of retrieving ksp (described above) can be seen in the new code.
Signed-off-by: Vineet Gupta <[email protected]>
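
A minimal standalone C sketch of the two access paths described above; the struct layouts and the THREAD_SIZE value are illustrative, not ARC's actual definitions:

#define THREAD_SIZE 8192UL              /* illustrative stack size (power of 2) */

struct thread_info {                    /* sits at the base of the stack page */
    int           preempt_count;        /* frequently accessed ... */
    unsigned long ksp;                  /* ... so ksp is likely cache-hot here */
};

struct task_struct {
    void *stack;                        /* base of the task's stack page */
    /* ... many other fields ... */
};

/* Generic path: two dependent loads, tsk->stack and then ->ksp. */
unsigned long task_ksp(const struct task_struct *tsk)
{
    return ((const struct thread_info *)tsk->stack)->ksp;
}

/* For the current task the first load can be skipped entirely: thread_info
 * lives at the base of the stack page, so it is derivable from SP alone. */
unsigned long current_ksp(unsigned long sp)
{
    const struct thread_info *ti =
        (const struct thread_info *)(sp & ~(THREAD_SIZE - 1));
    return ti->ksp;
}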

d1d1569e | 14-May-2020 | Vineet Gupta <[email protected]>

ARC: kernel stack: INIT_THREAD need not setup @init_stack in @ksp
There are 2 pointers to the kernel mode stack of a task:
 - task_struct.stack: base address of the stack page (the max possible stack top)
 - thread_info.ksp: the runtime stack top used by __switch_to()

INIT_THREAD was setting up ksp to the stack base, which is not really needed:
 - it gets overwritten with a dynamic value on the first call to __switch_to(), when init is switched out for the very first time.
 - generic code already does init_task.stack = init_stack, and the ARC code uses that to retrieve the task's stack base.
Signed-off-by: Vineet Gupta <[email protected]>

2be9880d | 19-Aug-2022 | Kefeng Wang <[email protected]>

kernel: exit: cleanup release_thread()
Only x86 has its own release_thread(), so introduce a new weak release_thread() function to clean up the empty definitions in the other arches.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Acked-by: Guo Ren <[email protected]> [csky] Acked-by: Russell King (Oracle) <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Brian Cain <[email protected]> Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Stafford Horne <[email protected]> [openrisc] Acked-by: Catalin Marinas <[email protected]> [arm64] Acked-by: Huacai Chen <[email protected]> [LoongArch] Cc: Alexander Gordeev <[email protected]> Cc: Anton Ivanov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dinh Nguyen <[email protected]> Cc: Guo Ren <[email protected]> [csky] Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: James Bottomley <[email protected]> Cc: Johannes Berg <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Xuerui Wang <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
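
A hedged sketch of the weak-default pattern this commit applies (the kernel spells the attribute __weak; the ARCH_HAS_RELEASE_THREAD guard below is purely illustrative):

struct task_struct;

/* Common default: a weak, empty definition that most architectures inherit. */
__attribute__((weak)) void release_thread(struct task_struct *dead_task)
{
    /* nothing to release for most architectures */
}

#ifdef ARCH_HAS_RELEASE_THREAD          /* illustrative guard, not a real symbol */
/* An architecture that genuinely needs the hook (x86 here) supplies a strong
 * definition, which overrides the weak one at link time. */
void release_thread(struct task_struct *dead_task)
{
    /* arch-specific cleanup */
}
#endif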

42a20f86 | 29-Sep-2021 | Kees Cook <[email protected]>

sched: Add wrapper for get_wchan() to keep task blocked
Having a stable wchan means the process must be blocked, and it must stay that way while stack unwinding is performed.
Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Kees Cook <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Russell King (Oracle) <[email protected]> [arm] Tested-by: Mark Rutland <[email protected]> [arm64] Link: https://lkml.kernel.org/r/[email protected]
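
Roughly, the wrapper pins the task's state under pi_lock and only then calls the per-arch unwinder; a sketch (field and helper names approximate the scheduler internals of that era, so treat this as the shape rather than the exact code):

unsigned long get_wchan(struct task_struct *p)
{
    unsigned long state, ip = 0;

    if (!p || p == current)
        return 0;

    /* Only report wchan if the task is blocked and will stay blocked. */
    raw_spin_lock_irq(&p->pi_lock);
    state = READ_ONCE(p->__state);
    smp_rmb();                          /* pairs with the wakeup path */
    if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
        ip = __get_wchan(p);            /* per-arch stack unwinding */
    raw_spin_unlock_irq(&p->pi_lock);

    return ip;
}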

2dde02ab | 01-Oct-2020 | Vineet Gupta <[email protected]>

ARC: mm: support 3 levels of page tables
ARCv2 MMU is software walked and Linux implements 2 levels of paging: pgd/pte. Forthcoming hardware will have multiple levels, so this change preps the mm code for that. It is also useful to exercise multiple levels even on software-walked code, to ensure the generic mm code is robust enough to handle it.

Overview:
 - 2 levels {pgd, pte}: pmd is folded, but pmd_* macros are valid and operate on pgd
 - 3 levels {pgd, pmd, pte}: pud is folded and pud_* macros point to pgd, while pmd_* macros operate on the actual pmd

Code changes (see the sketch below):
 1. #include <asm-generic/pgtable-nopud.h>
 2. Define CONFIG_PGTABLE_LEVELS 3
 3a. Define PMD_SHIFT, PMD_SIZE, PMD_MASK, pmd_t
 3b. Define pmd_val() which actually deals with the pmd (pmd_offset(), pmd_index() are provided by generic code)
 3c. pmd_alloc_one()/pmd_free() are also provided by generic code (pmd_populate()/pmd_free() already exist)
 4a. Define pud_none(), pud_bad() macros based on the generic pud_val(), which now internally pertains to the pgd
 4b. Define pud_populate() to just set up the pgd
Acked-by: Mike Rapoport <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
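
An illustrative sketch of the kinds of definitions the list above calls for; the shift value, field layout and pud_populate() body are placeholders, not ARC's real code:

#include <asm-generic/pgtable-nopud.h>  /* folds pud_* onto the pgd level */

/* CONFIG_PGTABLE_LEVELS=3 is normally selected via Kconfig, not a header. */

#define PMD_SHIFT   21                  /* placeholder value */
#define PMD_SIZE    (1UL << PMD_SHIFT)
#define PMD_MASK    (~(PMD_SIZE - 1))

typedef struct { unsigned long pmd; } pmd_t;
#define pmd_val(x)  ((x).pmd)

/* pud_* now effectively operate on the pgd entry */
#define pud_none(x) (!pud_val(x))
#define pud_bad(x)  (pud_val(x) & ~PAGE_MASK)

static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
{
    set_pud(pudp, __pud((unsigned long)pmdp));  /* just wire up the pgd slot */
}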

dd7c7ab0 | 26-Aug-2020 | Vineet Gupta <[email protected]>

ARC: [plat-eznps]: Drop support for EZChip NPS platform
NPS customers are no longer doing active development, as evident from the randconfig build failures reported in recent times, so drop support for the NPS platform.
Tested-by: kernel test robot <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
Revision tags: v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5 |

7321e2ea | 05-Mar-2020 | Eugeniy Paltsev <[email protected]>

ARC: add support for DSP-enabled userspace applications
To be able to run DSP-enabled userspace applications we need to save and restore the following DSP-related registers (a save/restore sketch follows below):

At IRQ/exception entry/exit:
 * DSP_CTRL (save it and reset it to a value suitable for the kernel)
 * ACC0_LO, ACC0_HI (we already save them as the r58, r59 pair)

At context switch:
 * ACC0_GLO, ACC0_GHI
 * DSP_BFLY0, DSP_FFT_CTRL
Reviewed-by: Vineet Gupta <[email protected]> Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
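
A sketch of what the context-switch half can look like on ARC; read_aux_reg()/write_aux_reg() are real ARC helpers, but the struct and the AUX_* index macro names below are placeholders:

struct dsp_callee_regs {
    unsigned long acc0_glo, acc0_ghi, dsp_bfly0, dsp_fft_ctrl;
};

static inline void dsp_save_regs(struct dsp_callee_regs *r)
{
    r->acc0_glo     = read_aux_reg(AUX_ACC0_GLO);       /* placeholder names */
    r->acc0_ghi     = read_aux_reg(AUX_ACC0_GHI);
    r->dsp_bfly0    = read_aux_reg(AUX_DSP_BFLY0);
    r->dsp_fft_ctrl = read_aux_reg(AUX_DSP_FFT_CTRL);
}

static inline void dsp_restore_regs(const struct dsp_callee_regs *r)
{
    write_aux_reg(AUX_ACC0_GLO,     r->acc0_glo);
    write_aux_reg(AUX_ACC0_GHI,     r->acc0_ghi);
    write_aux_reg(AUX_DSP_BFLY0,    r->dsp_bfly0);
    write_aux_reg(AUX_DSP_FFT_CTRL, r->dsp_fft_ctrl);
}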
Revision tags: v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7 |

f05523aa | 18-Jan-2020 | Vineet Gupta <[email protected]>

ARC: fpu: declutter code, move bits out into fpu.h
Signed-off-by: Vineet Gupta <[email protected]>
Revision tags: v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |

d2912cb1 | 04-Jun-2019 | Thomas Gleixner <[email protected]>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
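
For reference, the shape of the replacement in a C header (the "before" text is the normalized pattern quoted above):

/* Before: several lines of license boilerplate at the top of the file.
 *
 *  This program is free software; you can redistribute it and/or modify
 *  it under the terms of the GNU General Public License version 2 as
 *  published by the Free Software Foundation.
 */

/* After: one machine-readable tag as the first line of the file. */
/* SPDX-License-Identifier: GPL-2.0-only */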
Revision tags: v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1 |

de0d22e5 | 30-Oct-2018 | Nick Desaulniers <[email protected]>

treewide: remove current_text_addr
Prefer _THIS_IP_ defined in linux/kernel.h.
Most definitions of current_text_addr were the same as _THIS_IP_, but a few archs had inline assembly instead.
This patch removes the final call site of current_text_addr, making all of the definitions dead code.
[[email protected]: fix arch/csky/include/asm/processor.h] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Nick Desaulniers <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
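
_THIS_IP_ is (at the time of this commit) defined in linux/kernel.h roughly as below, using GCC's statement-expression and label-address extensions; the tiny main() is just a userspace demonstration:

#include <stdio.h>

#define _THIS_IP_  ({ __label__ __here; __here: (unsigned long)&&__here; })

int main(void)
{
    /* Prints an address inside main(), with no per-arch inline assembly. */
    printf("currently executing near %#lx\n", _THIS_IP_);
    return 0;
}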
Revision tags: v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2 |

c17c0204 | 22-Sep-2017 | Tobias Klauser <[email protected]>

arch: remove unused *_segments() macros/functions
Some architectures define the no-op macros/functions copy_segments, release_segments and forget_segments. These are used nowhere in the tree, so remove them.
Signed-off-by: Tobias Klauser <[email protected]> Acked-by: Vineet Gupta <[email protected]> [for arch/arc] Signed-off-by: Linus Torvalds <[email protected]>
Revision tags: v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6 |

5b2189ab | 15-Jun-2017 | Noam Camus <[email protected]>

ARC: [plat-eznps] handle extra aux regs #1: save/restore on context switch
Save the EFLAGS and GPA1 auxiliary registers during context switch, since they may be changed by the new task in kernel mode while using atomic ops, e.g. cmpxchg.
Signed-off-by: Noam Camus <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>

6474924e | 28-Jun-2017 | Tobias Klauser <[email protected]>

arch: remove unused macro/function thread_saved_pc()
The only user of thread_saved_pc() in non-arch-specific code was removed in commit 8243d5597793 ("sched/core: Remove pointless printout in sched_show_task()"). Remove the implementations as well.
Some architectures use thread_saved_pc() in their arch-specific code. Leave their thread_saved_pc() intact.
Signed-off-by: Tobias Klauser <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Cc: Ingo Molnar <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
Revision tags: v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8, v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1, v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6 |

6d0d2878 | 16-Nov-2016 | Christian Borntraeger <[email protected]>

locking/core: Provide common cpu_relax_yield() definition
No need to duplicate the same define everywhere. Since the only user is stop-machine and the only provider is s390, we can use a default implementation of cpu_relax_yield() in sched.h.
Suggested-by: Russell King <[email protected]> Signed-off-by: Christian Borntraeger <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Acked-by: Russell King <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Noam Camus <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: linux-s390 <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
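
A sketch of the define-once-with-arch-override pattern being described (not the literal diff; the s390 prototype below is illustrative):

/* arch/s390/include/asm/processor.h (illustrative): the one arch that needs
 * a real yield defines the macro to itself and supplies the function. */
#define cpu_relax_yield cpu_relax_yield
void cpu_relax_yield(void);

/* include/linux/sched.h (illustrative): everyone else falls back to the
 * default, which is just cpu_relax(). */
#ifndef cpu_relax_yield
#define cpu_relax_yield() cpu_relax()
#endif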
Revision tags: v4.9-rc5, v4.9-rc4, v4.9-rc3 |

5bd0b85b | 25-Oct-2016 | Christian Borntraeger <[email protected]>

locking/core, arch: Remove cpu_relax_lowlatency()
As there are no users left, we can remove cpu_relax_lowlatency() implementations from every architecture.
Signed-off-by: Christian Borntraeger <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Noam Camus <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russell King <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>

79ab11cd | 25-Oct-2016 | Christian Borntraeger <[email protected]>

locking/core: Introduce cpu_relax_yield()
For spinning loops people often use barrier() or cpu_relax(). For most architectures cpu_relax() and barrier() are the same, but on some architectures cpu_relax() can add latency. For example, on power, sparc64 and arc, cpu_relax() can shift the CPU towards other hardware threads in an SMT environment. On s390 cpu_relax() does even more: it uses a hypercall to the hypervisor to give up the timeslice. In contrast to SMT yielding, this can result in larger latencies. In some places this latency is unwanted, so another variant, "cpu_relax_lowlatency", was introduced. Before it gets used in more and more places, let's revert the logic and provide a cpu_relax_yield() that can be called in places where yielding is more important than latency. By default this is the same as cpu_relax() on all architectures.
Signed-off-by: Christian Borntraeger <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Noam Camus <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russell King <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7, v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7, v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3, v4.7-rc2, v4.7-rc1 |

2547476a | 21-May-2016 | Andrea Gelmini <[email protected]>

Fix typos
Signed-off-by: Andrea Gelmini <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
Revision tags: v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5, v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1, v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5, v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6, v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2, v4.1-rc1, v4.0, v4.0-rc7, v4.0-rc6, v4.0-rc5, v4.0-rc4 |

46c3e6b8 | 09-Mar-2015 | Tal Zilcer <[email protected]>

ARC: [plat-eznps] Use dedicated cpu_relax()
Since the CTOP is SMT hardware multi-threaded, we need to hint the HW that now would be a very good time to do a hardware thread context switch. This is done by issuing the schd.rw instruction (binary coded here so as not to require a specific revision of GCC to build the kernel). schd.rw means the thread becomes eligible for execution by the thread scheduler after all pending read/write transactions have completed.

cpu_relax_lowlatency() is implemented with barrier(), since with the current semantics of cpu_relax() it may take a while until the yielded CPU gets back.
Signed-off-by: Noam Camus <[email protected]> Cc: Peter Zijlstra <[email protected]> Acked-by: Vineet Gupta <[email protected]>
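
A hedged sketch of the "binary coded" trick: emit the raw opcode via .word so the build does not depend on an assembler that knows the mnemonic. The encoding below is a placeholder, not the real schd.rw opcode:

static inline void cpu_relax(void)
{
    /* Hint the HW thread scheduler; "memory" keeps this a compiler barrier. */
    __asm__ __volatile__(".word 0x3E6F392E" ::: "memory");  /* placeholder schd.rw */
}

static inline void cpu_relax_lowlatency(void)
{
    barrier();                          /* no yield: latency matters here */
}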

8bcf2c48 | 06-Dec-2015 | Noam Camus <[email protected]>

ARC: [plat-eznps] Use dedicated user stack top
NPS uses a special mapping right below TASK_SIZE. Hence we need to lower STACK_TOP so that the user stack won't overlap the NPS special mapping.
Signed-off-by: Noam Camus <[email protected]> Acked-by: Vineet Gupta <[email protected]>
Revision tags: v4.0-rc3, v4.0-rc2, v4.0-rc1, v3.19, v3.19-rc7, v3.19-rc6, v3.19-rc5, v3.19-rc4, v3.19-rc3, v3.19-rc2, v3.19-rc1, v3.18, v3.18-rc7, v3.18-rc6, v3.18-rc5, v3.18-rc4, v3.18-rc3, v3.18-rc2, v3.18-rc1, v3.17, v3.17-rc7, v3.17-rc6, v3.17-rc5, v3.17-rc4 |

15ca68a9 | 07-Sep-2014 | Noam Camus <[email protected]>

ARC: Make vmalloc size configurable
On ARC, the lower 2G of the address space is translated and used for:
 - user vaddr space (regions 0 to 5)
 - unused kernel-user gutter (region 6)
 - kernel vaddr space (region 7)

where each region simply represents 256MB of address space.

The kernel vaddr space of 256MB is used to implement vmalloc and modules. So far this was enough, but not on an EZchip system with 4K CPUs (given that the per-cpu mechanism uses vmalloc for allocating chunks).

So allow VMALLOC_SIZE to be configurable by expanding down into the unused kernel-user gutter region, which at its default 256M was excessive anyway.

Also use _BITUL() to fix a build error, since PGDIR_SIZE cannot use "1UL" when pulled in from the assembly code in mm/tlbex.S.
Signed-off-by: Noam Camus <[email protected]> [vgupta: rewrote changelog, debugged bootup crash due to int vs. hex] Acked-by: Vineet Gupta <[email protected]>
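
The _BITUL() point in practice: the UL suffix is valid C but not valid assembler input, and this header is also included from mm/tlbex.S. A short illustration (the PGDIR_SHIFT value and the exact include are assumptions; _BITUL() comes from the kernel's const.h and adds the UL suffix only for C):

#include <linux/const.h>

#define PGDIR_SHIFT 21                  /* placeholder value */

/* Breaks the assembly build when included from mm/tlbex.S:
 *   #define PGDIR_SIZE (1UL << PGDIR_SHIFT)
 * Works from both C and assembly: */
#define PGDIR_SIZE  _BITUL(PGDIR_SHIFT)
#define PGDIR_MASK  (~(PGDIR_SIZE - 1))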

1cfc05cb | 09-Nov-2015 | Vineet Gupta <[email protected]>

ARC: cpu_relax() to be compiler barrier even for UP
cpu_relax() on ARC has been a barrier only for SMP (and a no-op for UP). Per recent discussions, it is safer to make it a compiler barrier unconditionally.
Link: http://lkml.kernel.org/r/[email protected] Acked-by: Peter Zijlstra <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
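
An illustrative before/after (not the literal ARC diff):

/* before: compiler barrier only on SMP builds */
#ifdef CONFIG_SMP
#define cpu_relax() __asm__ __volatile__("" : : : "memory")
#else
#define cpu_relax() do { } while (0)
#endif

/* after: unconditionally a compiler barrier, so spin-wait loops re-read
 * memory even on UP kernels */
#define cpu_relax() barrier()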

45890f6d | 09-Mar-2015 | Vineet Gupta <[email protected]>

ARC: mm: HIGHMEM: kmap API implementation
Implement kmap* API for ARC.
This enables:
 - permanent kernel maps (pkmaps): kmap() API
 - fixmap: kmap_atomic()
We use a very simple/uniform approach for both (unlike some of the other arches), so fixmap doesn't use the customary compile-time address scheme. The important semantic is sleepability (pkmap) vs. not (fixmap), which the API guarantees.

Note that this patch only enables highmem for the subsequent PAE40 support, as there is no real highmem for ARC in the pure 32-bit paradigm, as explained below.

ARC has a 2:2 split of the 32-bit address space, with the lower half being translated (virtual) while the upper half (0x8000_0000 to 0xFFFF_FFFF) is untranslated. The kernel itself is linked at the base of the untranslated space (i.e. 0x8000_0000 onwards), which is mapped to, say, DDR 0x0 by external bus glue logic (outside the core). So the kernel can potentially access 1.75G worth of memory directly without needing highmem (the top 256M is taken by the uncached peripheral space from 0xF000_0000 to 0xFFFF_FFFF).

In PAE40, hardware can address memory beyond 4G (0x1_0000_0000) while the logical/virtual addresses remain 32 bits. Thus highmem is required for the kernel proper to be able to access these pages for its own purposes (user space is agnostic to this anyway).
Signed-off-by: Alexey Brodkin <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
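
At the usage level, the sleepability contrast described above looks like this (generic highmem API of that era, not ARC-specific code; the helper names here are just examples):

#include <linux/highmem.h>
#include <linux/string.h>

static void copy_from_high_page(struct page *page, void *dst, size_t len)
{
    /* pkmap: persistent mapping, may sleep, so only from process context */
    void *src = kmap(page);

    memcpy(dst, src, len);
    kunmap(page);
}

static void zero_high_page_atomic(struct page *page)
{
    /* fixmap slot: safe in atomic context, must not schedule while mapped */
    void *addr = kmap_atomic(page);

    memset(addr, 0, PAGE_SIZE);
    kunmap_atomic(addr);
}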

4db27dca | 05-Mar-2015 | Vineet Gupta <[email protected]>

ARC: mm: document system mem map clearly
Signed-off-by: Vineet Gupta <[email protected]>

1269f4d5 | 25-Apr-2015 | Vineet Gupta <[email protected]>

ARC: fix warning in sched due to thread_saved_pc()
Signed-off-by: Vineet Gupta <[email protected]>

3240dd57 | 27-Feb-2015 | Vineet Gupta <[email protected]>

ARC: Fix thread_saved_pc()
The old implementation assumed that SP at the time of __switch_to() is right above pt_regs, which is almost certainly not the case, as there will be some stack build-up between entry into the kernel and the call to __switch_to().
Signed-off-by: Vineet Gupta <[email protected]>