|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6 |
|
| #
19f0423f |
| 23-Feb-2024 |
Huang Yiwei <[email protected]> |
tracing: Support to dump instance traces by ftrace_dump_on_oops
Currently ftrace only dumps the global trace buffer on an oops. For debugging production use cases, dumping an instance's trace buffer is helpful for checking specific problems, since the global trace buffer may be used for other purposes.
This patch extends the ftrace_dump_on_oops parameter so it can dump a specific instance or multiple trace instances:
- ftrace_dump_on_oops=0: as before -- don't dump
- ftrace_dump_on_oops[=1]: as before -- dump the global trace buffer on all CPUs
- ftrace_dump_on_oops=2 or =orig_cpu: as before -- dump the global trace buffer on the CPU that triggered the oops
- ftrace_dump_on_oops=<instance_name>: new behavior -- dump the tracing instance matching <instance_name>
- ftrace_dump_on_oops[=2/orig_cpu],<instance1_name>[=2/orig_cpu],<instance2_name>[=2/orig_cpu]: new behavior -- dump the global trace buffer and multiple instance buffers on all CPUs, or only dump on the CPU that triggered the oops if =2 or =orig_cpu is given
Also, the sysctl node can handle the input accordingly.
Link: https://lore.kernel.org/linux-trace-kernel/[email protected]
Cc: Ross Zwisler <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Signed-off-by: Huang Yiwei <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
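A few example boot-parameter settings under the extended syntax, as a sketch based on the list above ("foo" and "bar" are hypothetical instance names):

  ftrace_dump_on_oops                # dump the global buffer on all CPUs
  ftrace_dump_on_oops=orig_cpu       # global buffer, only on the oops CPU
  ftrace_dump_on_oops=foo            # dump only the "foo" instance
  ftrace_dump_on_oops=2,foo,bar=2    # global and "bar" on the oops CPU only, "foo" on all CPUs

Per the commit, the same input format is accepted by the corresponding sysctl node at runtime.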
|
|
Revision tags: v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5 |
|
| #
57914905 |
| 03-Oct-2023 |
Andy Shevchenko <[email protected]> |
kernel.h: Move lib/cmdline.c prototypes to string.h
lib/cmdline.c is basically a set of small string parsers which are widely used in the kernel. Their prototypes belong in string.h rather than kernel.h.
Signed-off-by: Andy Shevchenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
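After this move, users pull the parser prototypes from string.h. A minimal sketch (parse_size() is a hypothetical caller):

  #include <linux/string.h>   /* now declares memparse(), get_option(), ... */

  /* memparse() understands size suffixes such as K, M and G. */
  static unsigned long long parse_size(const char *arg)
  {
          return memparse(arg, NULL);
  }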
|
| #
adeb0436 |
| 14-Feb-2024 |
Andy Shevchenko <[email protected]> |
kernel.h: Move upper_*_bits() and lower_*_bits() to wordpart.h
The wordpart.h header collects APIs related to handling parts of a word (usually at byte granularity). The upper_*_bits() and lower_*_bits() helpers are good candidates to be moved there.
This helps clean up the header dependency hell around kernel.h, as the latter gathers completely unrelated stuff together and slows down compilation (especially when it is included from other headers).
Signed-off-by: Andy Shevchenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Randy Dunlap <[email protected]> Signed-off-by: Kees Cook <[email protected]>
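A typical use, as a sketch (the register offsets and the writel() target are illustrative):

  #include <linux/io.h>
  #include <linux/types.h>
  #include <linux/wordpart.h>   /* upper_32_bits(), lower_32_bits() */

  /* Program a 64-bit DMA address into a pair of 32-bit registers. */
  static void write_dma_addr(void __iomem *base, u64 addr)
  {
          writel(lower_32_bits(addr), base + 0x00);
          writel(upper_32_bits(addr), base + 0x04);
  }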
|
| #
66a5c40f |
| 26-Dec-2023 |
Tanzir Hasan <[email protected]> |
kernel.h: removed REPEAT_BYTE from kernel.h
This patch creates wordpart.h and includes it in asm/word-at-a-time.h for all architectures. WORD_AT_A_TIME_CONSTANTS depends on kernel.h because of REPEAT_BYTE. Moving this to another header and including it where necessary lets us avoid including the bloated kernel.h. Making the implicit dependency on REPEAT_BYTE explicit allows for later improvements in the lib/string.c inclusion list.
Suggested-by: Al Viro <[email protected]> Suggested-by: Andy Shevchenko <[email protected]> Signed-off-by: Tanzir Hasan <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
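For reference, a sketch of the macro that moved (this is the long-standing kernel definition):

  /* Replicate a byte value across every byte of an unsigned long. */
  #define REPEAT_BYTE(x)  ((~0ul / 0xff) * (x))

  /* e.g. the classic word-at-a-time constants for zero-byte detection: */
  unsigned long low_bits  = REPEAT_BYTE(0x01);  /* 0x0101...01 */
  unsigned long high_bits = REPEAT_BYTE(0x80);  /* 0x8080...80 */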
|
| #
3cd39bc3 |
| 03-Oct-2023 |
Alejandro Colomar <[email protected]> |
kernel.h: Move ARRAY_SIZE() to a separate header
Touching files so widely used by the kernel forces 'make' to recompile most of the kernel.
Having those definitions in more granular files helps avoid recompiling so much of the kernel.
Signed-off-by: Alejandro Colomar <[email protected]> Reviewed-by: Giovanni Cabiddu <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] [andy: reduced to cover only string.h for now] Signed-off-by: Andy Shevchenko <[email protected]>
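A minimal sketch of the usual pattern, pulling the macro from its new, smaller header (list_rates() and the data are illustrative):

  #include <linux/array_size.h>   /* ARRAY_SIZE() */
  #include <linux/printk.h>
  #include <linux/types.h>

  static const int rates[] = { 8000, 16000, 44100, 48000 };

  static void list_rates(void)
  {
          for (size_t i = 0; i < ARRAY_SIZE(rates); i++)
                  pr_info("rate %d\n", rates[i]);
  }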
|
|
Revision tags: v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7 |
|
| #
39ced19b |
| 14-Aug-2023 |
Andy Shevchenko <[email protected]> |
lib/vsprintf: split out sprintf() and friends
Patch series "lib/vsprintf: Rework header inclusions", v3.
Some patches to reduce the mess of header inclusions related to the vsprintf.c module. Each patch has its own description and no dependencies on the others, except for collisions where they modify the same places. Hence the series.
This patch (of 2):
kernel.h has been used as a dumping ground for all kinds of stuff for a long time. sprintf() and friends are used in many drivers that have no need for the full kernel.h dependency train.
Here is an attempt to clean it up by splitting out sprintf() and friends.
Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Reviewed-by: Petr Mladek <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Marco Elver <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: Steven Rostedt (Google) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
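A sketch of the intended effect: a driver that only formats strings can now include the small header (format_label() is illustrative):

  #include <linux/sprintf.h>   /* snprintf() and friends after the split */
  #include <linux/types.h>

  static void format_label(char *buf, size_t len, int id)
  {
          /* no need for the whole of kernel.h just for this */
          snprintf(buf, len, "dev%d", id);
  }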
|
|
Revision tags: v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3 |
|
| #
b229baa3 |
| 18-Jul-2023 |
Andy Shevchenko <[email protected]> |
kernel.h: split out COUNT_ARGS() and CONCATENATE() to args.h
Patch series "kernel.h: Split out a couple of macros to args.h", v4.
There are macros in kernel.h that can be used outside of that header. Split them out to args.h and replace the open-coded variants.
This patch (of 4):
kernel.h has been used as a dumping ground for all kinds of stuff for a long time. The COUNT_ARGS() and CONCATENATE() macros may be used in some places without any need for the full kernel.h dependency train.
Here is an attempt to clean it up by splitting out these macros.
While at it, include the new header where the macros are used.
Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Acked-by: Steven Rostedt (Google) <[email protected]> Cc: Bjorn Helgaas <[email protected]> Acked-by: Bjorn Helgaas <[email protected]> [PCI] Cc: Brendan Higgins <[email protected]> Cc: Daniel Latypov <[email protected]> Cc: Dave Hansen <[email protected]> Cc: David Gow <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Lorenzo Pieralisi <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Masami Hiramatsu (Google) <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Sudeep Holla <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
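A sketch of the two macros in action (DEFINE_EVENT_VAR() is a made-up example name):

  #include <linux/args.h>   /* COUNT_ARGS(), CONCATENATE() */

  /* CONCATENATE() pastes two tokens; COUNT_ARGS() expands to the number
   * of arguments it was given -- both purely at preprocessing time. */
  #define DEFINE_EVENT_VAR(name)  int CONCATENATE(event_, name)

  DEFINE_EVENT_VAR(timer);                  /* declares: int event_timer; */
  static int nargs = COUNT_ARGS(a, b, c);   /* nargs == 3 */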
|
|
Revision tags: v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4 |
|
| #
890a3ee3 |
| 23-Mar-2023 |
Andy Shevchenko <[email protected]> |
kernel.h: split the hexadecimal related helpers to hex.h
For the sake of cleaning up kernel.h, split the hexadecimal-related helpers out into their own header called 'hex.h'.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Cc: Rasmus Villemoes <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
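A minimal usage sketch against the new header (parse_key() is illustrative):

  #include <linux/hex.h>   /* hex_to_bin(), hex2bin(), bin2hex(), ... */
  #include <linux/types.h>

  /* Convert an ASCII hex string into nbytes of binary. */
  static int parse_key(u8 *dst, const char *hex, size_t nbytes)
  {
          return hex2bin(dst, hex, nbytes);   /* 0 on success */
  }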
|
|
Revision tags: v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2 |
|
| #
2455f0e1 |
| 15-Feb-2023 |
Ross Zwisler <[email protected]> |
tracing: Always use canonical ftrace path
The canonical location for the tracefs filesystem is at /sys/kernel/tracing.
But, from Documentation/trace/ftrace.rst:
Before 4.1, all ftrace tracing control files were within the debugfs file system, which is typically located at /sys/kernel/debug/tracing. For backward compatibility, when mounting the debugfs file system, the tracefs file system will be automatically mounted at:
/sys/kernel/debug/tracing
Many comments and Kconfig help messages in the tracing code still refer to this older debugfs path, so let's update them to avoid confusion.
Link: https://lore.kernel.org/linux-trace-kernel/[email protected]
Acked-by: Masami Hiramatsu (Google) <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Ross Zwisler <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Revision tags: v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5 |
|
| #
e5be1576 |
| 25-Apr-2022 |
Mikulas Patocka <[email protected]> |
hex2bin: make the function hex_to_bin constant-time
The function hex2bin is used to load cryptographic keys into the device mapper targets dm-crypt and dm-integrity. It should take constant time, independent of the processed data, so that concurrently running unprivileged code can't infer any information about the keys via microarchitectural covert channels.
This patch changes the function hex_to_bin so that it contains no branches and no memory accesses.
Note that this shouldn't cause performance degradation because the size of the new function is the same as the size of the old function (on x86-64) - and the new function causes no branch misprediction penalties.
I compile-tested this function with gcc on aarch64 alpha arm hppa hppa64 i386 ia64 m68k mips32 mips64 powerpc powerpc64 riscv sh4 s390x sparc32 sparc64 x86_64 and with clang on aarch64 arm hexagon i386 mips32 mips64 powerpc powerpc64 s390x sparc32 sparc64 x86_64 to verify that there are no branches in the generated code.
Signed-off-by: Mikulas Patocka <[email protected]> Cc: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
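A sketch of the branchless technique: each range test is folded into an all-ones/all-zero mask, so no data-dependent branches or table lookups remain (see lib/hexdump.c for the current code):

  int hex_to_bin(unsigned char ch)
  {
          unsigned char cu = ch & 0xdf;   /* fold lower case to upper */

          return -1 +
                  ((ch - '0' +  1) & (unsigned)((ch - '9' - 1) & ('0' - 1 - ch)) >> 8) +
                  ((cu - 'A' + 11) & (unsigned)((cu - 'F' - 1) & ('A' - 1 - cu)) >> 8);
  }

Each parenthesized difference is negative only when ch lies inside the corresponding range, so the AND of the two sign-extended values, shifted right by 8, yields a mask that keeps either the digit value or zero.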
|
|
Revision tags: v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1 |
|
| #
868e6139 |
| 27-Mar-2022 |
Keith Busch <[email protected]> |
block: move lower_48_bits() to block
The function is not generally applicable enough to be included in the core kernel header. Move it to block since it's the only subsystem using it.
Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Keith Busch <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
|
|
Revision tags: v5.17, v5.17-rc8, v5.17-rc7 |
|
| #
7ee8809d |
| 03-Mar-2022 |
Keith Busch <[email protected]> |
linux/kernel: introduce lower_48_bits function
Recent data integrity field enhancements allow reference tags to be up to 48 bits. Introduce an inline helper function since this will be a repeated operation.
Suggested-by: Bart Van Assche <[email protected]> Signed-off-by: Keith Busch <[email protected]> Reviewed-by: Bart Van Assche <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
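A sketch of the helper as introduced (it was later moved into block, per the entry above):

  #include <linux/types.h>

  static inline u64 lower_48_bits(u64 n)
  {
          return n & ((1ull << 48) - 1);
  }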
|
|
Revision tags: v5.17-rc6, v5.17-rc5 |
|
| #
99cf983c |
| 14-Feb-2022 |
Mark Rutland <[email protected]> |
sched/preempt: Add PREEMPT_DYNAMIC using static keys
Where an architecture selects HAVE_STATIC_CALL but not HAVE_STATIC_CALL_INLINE, each static call has an out-of-line trampoline which will either branch to a callee or return to the caller.
On such architectures, a number of constraints can conspire to make those trampolines more complicated and potentially less useful than we'd like. For example:
* Hardware and software control flow integrity schemes can require the addition of "landing pad" instructions (e.g. `BTI` for arm64), which will also be present at the "real" callee.
* Limited branch ranges can require that trampolines generate or load an address into a register and perform an indirect branch (or at least have a slow path that does so). This loses some of the benefits of having a direct branch.
* Interaction with SW CFI schemes can be complicated and fragile, e.g. requiring that we can recognise idiomatic codegen and remove indirections, at least until clang provides more helpful mechanisms for dealing with this.
For PREEMPT_DYNAMIC, we don't need the full power of static calls, as we really only need to enable/disable specific preemption functions. We can achieve the same effect without a number of the pain points above by using static keys to fold early returns into the preemption functions themselves rather than in an out-of-line trampoline, effectively inlining the trampoline into the start of the function.
For arm64, this results in good code generation. For example, the dynamic_cond_resched() wrapper looks as follows when enabled. When disabled, the first `B` is replaced with a `NOP`, resulting in an early return.
| <dynamic_cond_resched>:
|         bti     c
|         b       <dynamic_cond_resched+0x10>     // or `nop`
|         mov     w0, #0x0
|         ret
|         mrs     x0, sp_el0
|         ldr     x0, [x0, #8]
|         cbnz    x0, <dynamic_cond_resched+0x8>
|         paciasp
|         stp     x29, x30, [sp, #-16]!
|         mov     x29, sp
|         bl      <preempt_schedule_common>
|         mov     w0, #0x1
|         ldp     x29, x30, [sp], #16
|         autiasp
|         ret
... compared to the regular form of the function:
| <__cond_resched>:
|         bti     c
|         mrs     x0, sp_el0
|         ldr     x1, [x0, #8]
|         cbz     x1, <__cond_resched+0x18>
|         mov     w0, #0x0
|         ret
|         paciasp
|         stp     x29, x30, [sp, #-16]!
|         mov     x29, sp
|         bl      <preempt_schedule_common>
|         mov     w0, #0x1
|         ldp     x29, x30, [sp], #16
|         autiasp
|         ret
Any architecture which implements static keys should be able to use this to implement PREEMPT_DYNAMIC with similar cost to non-inlined static calls. Since this is likely to have greater overhead than (inlined) static calls, PREEMPT_DYNAMIC is only defaulted to enabled when HAVE_PREEMPT_DYNAMIC_CALL is selected.
Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Acked-by: Frederic Weisbecker <[email protected]> Link: https://lore.kernel.org/r/[email protected]
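A C sketch of the static-key form described above, mirroring the pattern used for the dynamic preemption wrappers (names follow the commit's example; __cond_resched() is the existing scheduler function):

  #include <linux/jump_label.h>

  DEFINE_STATIC_KEY_TRUE(sk_dynamic_cond_resched);

  int dynamic_cond_resched(void)
  {
          /* When the key is disabled, this test is patched so the
           * function folds into an immediate "return 0". */
          if (!static_branch_unlikely(&sk_dynamic_cond_resched))
                  return 0;
          return __cond_resched();
  }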
|
|
Revision tags: v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1 |
|
| #
40cbf09f |
| 20-Jan-2022 |
Andy Shevchenko <[email protected]> |
kernel.h: include a note to discourage people from including it in headers
Include a note at the top to discourage people from including it in headers.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
|
|
Revision tags: v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3 |
|
| #
cead1855 |
| 22-Nov-2021 |
Eric W. Biederman <[email protected]> |
exit: Rename complete_and_exit to kthread_complete_and_exit
Update complete_and_exit to call kthread_exit instead of do_exit.
Change the name to reflect this change in functionality. All of the users of complete_and_exit are causing the current kthread to exit so this change makes it clear what is happening.
Move the implementation of kthread_complete_and_exit from kernel/exit.c to kernel/kthread.c. As this function is kthread specific, it makes the most sense for it to live with the kthread functions.
There is no functional change.
Signed-off-by: "Eric W. Biederman" <[email protected]>
|
|
Revision tags: v5.16-rc2, v5.16-rc1 |
|
| #
a8b76910 |
| 10-Nov-2021 |
Valentin Schneider <[email protected]> |
preempt: Restore preemption model selection configs
Commit c597bfddc9e9 ("sched: Provide Kconfig support for default dynamic preempt mode") changed the selectable config names for the preemption model. This means a config file must now select
CONFIG_PREEMPT_BEHAVIOUR=y
rather than
CONFIG_PREEMPT=y
to get a preemptible kernel. This means all arch config files would need to be updated - right now they'll all end up with the default CONFIG_PREEMPT_NONE_BEHAVIOUR.
Rather than touch a good hundred config files, restore usage of CONFIG_PREEMPT{_NONE, _VOLUNTARY}. Make them configure:
o The build-time preemption model when !PREEMPT_DYNAMIC
o The default boot-time preemption model when PREEMPT_DYNAMIC
Add siblings of those configs with the _BUILD suffix to unconditionally designate the build-time preemption model (PREEMPT_DYNAMIC is built with the "highest" preemption model it supports, aka PREEMPT). Downstream configs should by now all be depending / selected by CONFIG_PREEMPTION rather than CONFIG_PREEMPT, so only a few sites need patching up.
Signed-off-by: Valentin Schneider <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Marco Elver <[email protected]> Link: https://lore.kernel.org/r/[email protected]
|
| #
b9ad8fe7 |
| 09-Nov-2021 |
Kefeng Wang <[email protected]> |
sections: move is_kernel_inittext() into sections.h
is_kernel_inittext() and init_kernel_text() have the same functionality; let's just keep is_kernel_inittext(), move it into sections.h, and update all the callers.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Steven Rostedt <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
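A sketch of the consolidated helper, using the linker-provided .init.text bounds:

  #include <asm/sections.h>   /* _sinittext, _einittext */
  #include <linux/types.h>

  static inline bool is_kernel_inittext(unsigned long addr)
  {
          return addr >= (unsigned long)_sinittext &&
                 addr < (unsigned long)_einittext;
  }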
|
| #
a20deb3a |
| 09-Nov-2021 |
Kefeng Wang <[email protected]> |
sections: move and rename core_kernel_data() to is_kernel_core_data()
Move core_kernel_data() into sections.h, rename it to is_kernel_core_data(), make it return a bool, and update all the callers.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
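A sketch of the renamed helper under the same convention (the real version also covers the .bss range):

  static inline bool is_kernel_core_data(unsigned long addr)
  {
          return addr >= (unsigned long)_sdata &&
                 addr < (unsigned long)_edata;
  }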
|
| #
e52340de |
| 09-Nov-2021 |
Stephen Rothwell <[email protected]> |
kernel.h: split out instruction pointer accessors
bottom_half.h needs _THIS_IP_ to be standalone, so split that and _RET_IP_ out of kernel.h into the new instruction_pointer.h. kernel.h directly needs them, so include the new file there, and replace the include of kernel.h in bottom_half.h with the new file.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Andy Shevchenko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
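For reference, a sketch of the two accessors that moved (these are the long-standing definitions):

  #define _RET_IP_   (unsigned long)__builtin_return_address(0)
  #define _THIS_IP_  ({ __label__ __here; __here: (unsigned long)&&__here; })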
|
| #
d2a8ebbf |
| 09-Nov-2021 |
Andy Shevchenko <[email protected]> |
kernel.h: split out container_of() and typeof_member() macros
kernel.h has been used as a dumping ground for all kinds of stuff for a long time. Here is an attempt to clean it up by splitting out the container_of() and typeof_member() macros.
For the time being, include the new header back into kernel.h to avoid twisted indirect includes for existing users.
Note, there are _a lot_ of headers and modules that include kernel.h solely for one of these macros; this allows us to unburden the compiler from the twisted inclusion paths and to make new code cleaner in the future.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andy Shevchenko <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Brendan Higgins <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Laurent Pinchart <[email protected]> Cc: Mauro Carvalho Chehab <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Sakari Ailus <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Thorsten Leemhuis <[email protected]> Cc: Waiman Long <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
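The classic use of the split-out macro, as a sketch (struct wrapper and struct member are hypothetical):

  #include <linux/container_of.h>

  struct member { int x; };

  struct wrapper {
          struct member m;   /* embedded member */
          int extra;
  };

  /* Recover the wrapper from a pointer to its embedded member. */
  static inline struct wrapper *to_wrapper(struct member *p)
  {
          return container_of(p, struct wrapper, m);
  }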
|
| #
d2635f20 |
| 05-Nov-2021 |
Christophe Leroy <[email protected]> |
mm: create a new system state and fix core_kernel_text()
core_kernel_text() considers that until system_state is at least SYSTEM_RUNNING, init memory is valid.
But init memory is freed a few lines before setting SYSTEM_RUNNING, so we have a small period of time when core_kernel_text() is wrong.
Create an intermediate system state called SYSTEM_FREEING_INIT that is set before starting freeing init memory, and use it in core_kernel_text() to report init memory invalid earlier.
Link: https://lkml.kernel.org/r/9ecfdee7dd4d741d172cb93ff1d87f1c58127c9a.1633001016.git.christophe.leroy@csgroup.eu Signed-off-by: Christophe Leroy <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Kefeng Wang <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Heiko Carstens <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
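A sketch of the resulting ordering and check (the enum mirrors the system_states list after this change; the wrapper function is illustrative, and init_kernel_text() is the existing kernel helper):

  enum system_states {
          SYSTEM_BOOTING,
          SYSTEM_SCHEDULING,
          SYSTEM_FREEING_INIT,   /* new: entered just before init memory is freed */
          SYSTEM_RUNNING,
          SYSTEM_HALT,
          SYSTEM_POWER_OFF,
          SYSTEM_RESTART,
          SYSTEM_SUSPEND,
  };

  /* core_kernel_text() can then distrust init text earlier: */
  static bool init_text_valid(unsigned long addr)
  {
          return system_state < SYSTEM_FREEING_INIT && init_kernel_text(addr);
  }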
|
|
Revision tags: v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3 |
|
| #
50e081b9 |
| 23-Sep-2021 |
Thomas Gleixner <[email protected]> |
sched: Make RCU nest depth distinct in __might_resched()
For !RT kernels the RCU nest depth in __might_resched() is always expected to be 0, but on RT kernels it can be non-zero while the preempt count is expected to always be 0.
Instead of playing magic games in interpreting the 'preempt_offset' argument, rename it to 'offsets', use the lower 8 bits for the expected preempt count, allow handing in the expected RCU nest depth in the upper bits, and adapt the __might_resched() code and related checks and printks.
The affected call sites are updated in subsequent steps.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
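A sketch of the encoding described above (the shift and mask follow the scheme; encode_offsets() is a made-up helper):

  #define MIGHT_RESCHED_RCU_SHIFT     8
  #define MIGHT_RESCHED_PREEMPT_MASK  ((1U << MIGHT_RESCHED_RCU_SHIFT) - 1)

  static unsigned int encode_offsets(unsigned int preempt_count,
                                     unsigned int rcu_depth)
  {
          return (rcu_depth << MIGHT_RESCHED_RCU_SHIFT) |
                 (preempt_count & MIGHT_RESCHED_PREEMPT_MASK);
  }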
|
| #
42a38756 |
| 23-Sep-2021 |
Thomas Gleixner <[email protected]> |
sched: Remove preempt_offset argument from __might_sleep()
All callers hand in 0 and never will hand in anything else.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
|
| #
874f670e |
| 23-Sep-2021 |
Thomas Gleixner <[email protected]> |
sched: Clean up the might_sleep() underscore zoo
__might_sleep() vs. ___might_sleep() is hard to distinguish. Aside from that, the three-underscore variant is exposed to provide a checkpoint for rescheduling points, which are distinct from blocking points.
They are semantically a preemption point, which means that scheduling is state preserving. A real blocking operation, e.g. mutex_lock() or wait*(), cannot preserve a task state which is not equal to RUNNING.
While technically blocking on a "sleeping" spinlock in RT enabled kernels falls into the voluntary scheduling category because it has to wait until the contended spin/rw lock becomes available, the RT lock substitution code can semantically be mapped to a voluntary preemption because the RT lock substitution code and the scheduler are providing mechanisms to preserve the task state and to take regular non-lock related wakeups into account.
Rename ___might_sleep() to __might_resched() to make the distinction of these functions clear.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
|
|
Revision tags: v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5 |
|
| #
c0891ac1 |
| 02-Aug-2021 |
Alexey Dobriyan <[email protected]> |
isystem: ship and use stdarg.h
Ship minimal stdarg.h (1 type, 4 macros) as <linux/stdarg.h>. stdarg.h is the only userspace header commonly used in the kernel.
GPL 2 version of <stdarg.h> can be extracted from http://archive.debian.org/debian/pool/main/g/gcc-4.2/gcc-4.2_4.2.4.orig.tar.gz
Signed-off-by: Alexey Dobriyan <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Signed-off-by: Masahiro Yamada <[email protected]>
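For reference, the minimal header is essentially just compiler builtins; a sketch of its content (1 type, 4 macros):

  typedef __builtin_va_list va_list;
  #define va_start(v, l)  __builtin_va_start(v, l)
  #define va_end(v)       __builtin_va_end(v)
  #define va_arg(v, T)    __builtin_va_arg(v, T)
  #define va_copy(d, s)   __builtin_va_copy(d, s)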
|