Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5 |
# 36903abe | 15-Feb-2022 | Arnd Bergmann <[email protected]>
x86: remove __range_not_ok()
The __range_not_ok() helper is an x86 (and sparc64) specific interface that does roughly the same thing as __access_ok(), but with different calling conventions.
Change this to use the normal interface for consistency as we clean up all access_ok() implementations.
This changes the limit from TASK_SIZE to TASK_SIZE_MAX, which Al points out is the right thing to do here anyway.
The callers have to use __access_ok() instead of the normal access_ok(), though, because on x86 the latter contains a WARN_ON_IN_IRQ() check that cannot be used inside NMI context while tracing.
The check in copy_code() is no longer needed, because the same check is already done by copy_from_user_nmi().
Suggested-by: Al Viro <[email protected]> Suggested-by: Christoph Hellwig <[email protected]> Link: https://lore.kernel.org/lkml/[email protected]/ Signed-off-by: Arnd Bergmann <[email protected]>
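As an illustration of the pattern (a sketch, not the exact diff -- the real change touches several callers), the bounds check in a caller such as copy_stack_frame() moves from:

  /* before: x86-private helper, negative polarity, TASK_SIZE limit */
  if (__range_not_ok(fp, sizeof(*frame), TASK_SIZE))
          return 0;

  /* after: common helper, positive polarity; __access_ok() checks against
   * TASK_SIZE_MAX and, unlike access_ok(), carries no WARN_ON_IN_IRQ(),
   * so it is usable from NMI context while tracing */
  if (!__access_ok(fp, sizeof(*frame)))
          return 0;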
Revision tags: v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3 |
# b18adee4 | 09-Mar-2021 | Mark Brown <[email protected]>
stacktrace: Move documentation for arch_stack_walk_reliable() to header
Currently arch_stack_walk_reliable() is documented with an identical comment in both the x86 and S/390 implementations, which is a bit redundant. Move this to the header and convert it to kerneldoc while we're at it.
Signed-off-by: Mark Brown <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Miroslav Benes <[email protected]> Acked-by: Vasily Gorbik <[email protected]> Acked-by: Randy Dunlap <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
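The consolidated documentation becomes a kerneldoc block along these lines (wording approximate, reconstructed for illustration):

  /**
   * arch_stack_walk_reliable - Architecture-specific reliable stack walk
   * @consume_entry:  Callback invoked by the architecture code for each entry
   * @cookie:         Caller-supplied pointer handed back to @consume_entry
   * @task:           The task to walk; must be 'current' or inactive
   *
   * Return: 0 on success, a negative value if the stack is not reliable
   */
  int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
                               void *cookie, struct task_struct *task);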
Revision tags: v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6 |
# 264c03a2 | 14-Sep-2020 | Mark Brown <[email protected]>
stacktrace: Remove reliable argument from arch_stack_walk() callback
Currently the callback passed to arch_stack_walk() takes a 'reliable' argument indicating whether the stack entry is reliable; a comment says this is used by some printk() consumers. However, in the current kernel none of the arch_stack_walk() implementations ever sets this flag to true, and the only callback implementation we have is in the generic stacktrace code, which ignores the flag. The flag is therefore redundant, so we can simplify and clarify things by removing it.
Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Miroslav Benes <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
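In interface terms, the change trims the callback type roughly as follows (sketch):

  /* before: every consumer received a 'reliable' flag that was never true */
  typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
                                         bool reliable);

  /* after: the redundant flag is gone */
  typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);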
Revision tags: v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6 |
# 039a7a30 | 17-Jul-2020 | Josh Poimboeuf <[email protected]>
x86/stacktrace: Fix reliable check for empty user task stacks
If a user task's stack is empty, or if it only has user regs, ORC reports it as a reliable empty stack. But arch_stack_walk_reliable() incorrectly treats it as unreliable.
That happens because the only success path for user tasks is inside the loop, which only iterates on non-empty stacks. Generally, a user task must end in a user regs frame, but an empty stack is an exception to that rule.
Thanks to commit 71c95825289f ("x86/unwind/orc: Fix error handling in __unwind_start()"), unwind_start() now sets state->error appropriately. So now for both ORC and FP unwinders, unwind_done() and !unwind_error() always means the end of the stack was successfully reached. So the success path for kthreads is no longer needed -- it can also be used for empty user tasks.
Reported-by: Wang ShaoBo <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Wang ShaoBo <[email protected]> Link: https://lkml.kernel.org/r/f136a4e5f019219cbc4f4da33b30c2f44fa65b84.1594994374.git.jpoimboe@redhat.com
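The resulting end-of-walk logic in arch_stack_walk_reliable() can be sketched as (simplified, per-entry validation elided):

  for (unwind_start(&state, task, NULL, NULL);
       !unwind_done(&state) && !unwind_error(&state);
       unwind_next_frame(&state)) {
          /* validate and record each return address */
  }

  /*
   * Since unwind_start() now sets state->error correctly, getting here
   * without an error means the end of the stack was reached -- for
   * empty user stacks and kthreads alike.
   */
  if (unwind_error(&state))
          return -EINVAL;

  return 0;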
Revision tags: v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2 |
# c8e3dd86 | 15-Feb-2020 | Al Viro <[email protected]>
x86 user stack frame reads: switch to explicit __get_user()
rather than relying upon the magic in raw_copy_from_user()
Signed-off-by: Al Viro <[email protected]>
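After the change, the frame copy reads approximately:

  static int copy_stack_frame(const struct stack_frame_user __user *fp,
                              struct stack_frame_user *frame)
  {
          int ret = 1;

          if (__range_not_ok(fp, sizeof(*frame), TASK_SIZE))
                  return 0;

          /* explicit per-field fetches instead of a bulk raw copy */
          pagefault_disable();
          if (__get_user(frame->next_fp, &fp->next_fp) ||
              __get_user(frame->ret_addr, &fp->ret_addr))
                  ret = 0;
          pagefault_enable();

          return ret;
  }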
Revision tags: v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2 |
# 2af7c857 | 22-Jul-2019 | Eiichi Tsukata <[email protected]>
x86/stacktrace: Prevent access_ok() warnings in arch_stack_walk_user()
When arch_stack_walk_user() is called from atomic contexts, access_ok() can trigger the following warning if compiled with CONFIG_DEBUG_ATOMIC_SLEEP=y.
Reproducer:
// CONFIG_DEBUG_ATOMIC_SLEEP=y
# cd /sys/kernel/debug/tracing
# echo 1 > options/userstacktrace
# echo 1 > events/irq/irq_handler_entry/enable
WARNING: CPU: 0 PID: 2649 at arch/x86/kernel/stacktrace.c:103 arch_stack_walk_user+0x6e/0xf6
CPU: 0 PID: 2649 Comm: bash Not tainted 5.3.0-rc1+ #99
RIP: 0010:arch_stack_walk_user+0x6e/0xf6
Call Trace:
 <IRQ>
 stack_trace_save_user+0x10a/0x16d
 trace_buffer_unlock_commit_regs+0x185/0x240
 trace_event_buffer_commit+0xec/0x330
 trace_event_raw_event_irq_handler_entry+0x159/0x1e0
 __handle_irq_event_percpu+0x22d/0x440
 handle_irq_event_percpu+0x70/0x100
 handle_irq_event+0x5a/0x8b
 handle_edge_irq+0x12f/0x3f0
 handle_irq+0x34/0x40
 do_IRQ+0xa6/0x1f0
 common_interrupt+0xf/0xf
 </IRQ>
Fix it by calling __range_not_ok() directly instead of access_ok(), as copy_from_user_nmi() does. This is fine here because the actual copy is inside a pagefault-disabled region.
Reported-by: Juri Lelli <[email protected]> Signed-off-by: Eiichi Tsukata <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
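The fix is essentially a one-line substitution in copy_stack_frame(), roughly:

  /* before: access_ok() may fire WARN_ON_IN_IRQ() from atomic context */
  if (!access_ok(fp, sizeof(*frame)))
          return 0;

  /* after: a plain range check; the copy itself runs under
   * pagefault_disable(), so the weaker check is sufficient */
  if (__range_not_ok(fp, sizeof(*frame), TASK_SIZE))
          return 0;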
Revision tags: v5.3-rc1 |
# cbf5b73d | 11-Jul-2019 | Eiichi Tsukata <[email protected]>
x86/stacktrace: Prevent infinite loop in arch_stack_walk_user()
arch_stack_walk_user() checks `if (fp == frame.next_fp)` to prevent an infinite loop from a self-referencing frame, but that is not enough for a circular chain of frames.
Once a frame without a return address is found, there is no point in continuing the loop, so break out.
Fixes: 02b67518e2b1 ("tracing: add support for userspace stacktraces in tracing/iter_ctrl") Signed-off-by: Eiichi Tsukata <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Linus Torvalds <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
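A sketch of the hardened walk loop in arch_stack_walk_user() (callback shown in its later two-argument form):

  while (1) {
          struct stack_frame_user frame;

          frame.next_fp = NULL;
          frame.ret_addr = 0;
          if (!copy_stack_frame(fp, &frame))
                  break;
          if ((unsigned long)fp < regs->sp)
                  break;
          /* no return address: nothing left to report; bailing out here
           * also bounds walks over circular frame chains */
          if (!frame.ret_addr)
                  break;
          if (!consume_entry(cookie, frame.ret_addr))
                  break;
          fp = frame.next_fp;
  }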
Revision tags: v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7 |
# 3599fe12 | 25-Apr-2019 | Thomas Gleixner <[email protected]>
x86/stacktrace: Use common infrastructure
Replace the stack_trace_save*() functions with the new arch_stack_walk() interfaces.
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: [email protected] Cc: Steven Rostedt <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
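The common entry points an architecture implements instead are, in rough form:

  void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                       struct task_struct *task, struct pt_regs *regs);
  int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
                               void *cookie, struct task_struct *task);
  void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
                            const struct pt_regs *regs);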
Revision tags: v5.1-rc6, v5.1-rc5 |
# c5c27a0a | 10-Apr-2019 | Thomas Gleixner <[email protected]>
x86/stacktrace: Remove the pointless ULONG_MAX marker
Terminating the last trace entry with ULONG_MAX is a completely pointless exercise, and none of the consumers can rely on it because it is inconsistently implemented across architectures. In fact, quite a few of the callers remove the entry and adjust stack_trace.nr_entries afterwards.
Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Alexander Potapenko <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
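The removed pattern looked roughly like:

  /* formerly appended after a successful walk -- now simply dropped */
  if (trace->nr_entries < trace->max_entries)
          trace->entries[trace->nr_entries++] = ULONG_MAX;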
Revision tags: v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1 |
# 96d4f267 | 04-Jan-2019 | Linus Torvalds <[email protected]>
Remove 'type' argument from access_ok() function
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand.
It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact.
A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all.
This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
There were a couple of notable cases:
- csky still had the old "verify_area()" name as an alias.
- the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)
- microblaze used the type argument for a debug printout
but other than those oddities this should be a total no-op patch.
I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though.
Signed-off-by: Linus Torvalds <[email protected]>
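The sed-style conversion each call site received, for example:

  /* before */
  if (!access_ok(VERIFY_WRITE, buf, count))
          return -EFAULT;

  /* after: the type argument is gone, only the range check remains */
  if (!access_ok(buf, count))
          return -EFAULT;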
Revision tags: v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6 |
# 0c414367 | 18-May-2018 | Jiri Slaby <[email protected]>
x86/stacktrace: Do not fail for ORC with regs on stack
save_stack_trace_reliable() now returns "non reliable" when there are kernel pt_regs on the stack. This means an interrupt or exception happened somewhere along the way. It is a problem for the frame pointer unwinder, because the frame might not have been set up yet when the irq happened, so the unwinder might fail to unwind from the interrupted function.
With ORC, this is not a problem, as ORC has out-of-band data. We can find ORC data even for the IP in the interrupted function and always unwind one level up reliably.
So make the check apply only when CONFIG_FRAME_POINTER is enabled.
Signed-off-by: Jiri Slaby <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
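The narrowed check can be sketched as:

  regs = unwind_get_entry_regs(&state, NULL);
  if (regs) {
          if (user_mode(regs))
                  goto success;
          /*
           * Kernel regs on the stack mean an in-kernel interrupt or
           * exception; only the frame pointer unwinder must treat that
           * as unreliable -- ORC can unwind through it.
           */
          if (IS_ENABLED(CONFIG_FRAME_POINTER))
                  return -EINVAL;
  }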
# 441ccc35 | 18-May-2018 | Jiri Slaby <[email protected]>
x86/stacktrace: Clarify the reliable success paths
Make clear which path is for user tasks and for kthreads and idle tasks. This will allow easier plug-in of the ORC unwinder in the next patches.
Note that we added a check for unwind errors to the top of the loop, so that an error is also returned for user tasks (otherwise the 'goto success' would skip the check after the loop).
Signed-off-by: Jiri Slaby <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
# 17426923 | 18-May-2018 | Jiri Slaby <[email protected]>
x86/stacktrace: Remove STACKTRACE_DUMP_ONCE
The stack unwinding can still sometimes fail, especially with generated debug info. So do not yell at users -- live patching (the only user of this interface) will inform the user about the failure gracefully.
And given this was the only user of the macro, remove the macro proper too.
Signed-off-by: Jiri Slaby <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
# 0797a8d0 | 18-May-2018 | Jiri Slaby <[email protected]>
x86/stacktrace: Do not unwind after user regs
Josh pointed out that there is no way a frame can come after user regs. So remove the last unwind and the check.
Signed-off-by: Jiri Slaby <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/lkml/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6 |
# a9cdbe72 | 31-Dec-2017 | Josh Poimboeuf <[email protected]>
x86/dumpstack: Fix partial register dumps
The show_regs_safe() logic is wrong. When there's an iret stack frame, it prints the entire pt_regs -- most of which is random stack data -- instead of just the five registers at the end.
show_regs_safe() is also poorly named: the on_stack() checks aren't for safety. Rename the function to show_regs_if_on_stack() and add a comment to explain why the checks are needed.
These issues were introduced with the "partial register dump" feature of the following commit:
b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")
That patch had gone through a few iterations of development, and the above issues were artifacts from a previous iteration of the patch where 'regs' pointed directly to the iret frame rather than to the (partially empty) pt_regs.
Tested-by: Alexander Tsoy <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Toralf Förster <[email protected]> Cc: [email protected] Fixes: b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully") Link: http://lkml.kernel.org/r/5b05b8b344f59db2d3d50dbdeba92d60f2304c54.1514736742.git.jpoimboe@redhat.com Signed-off-by: Ingo Molnar <[email protected]>
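The fix keys off the fact that an iret frame is only the last five fields of pt_regs; the helper macros involved look like this (names as added around this series, treat as approximate):

  /* the iret frame holds only ip, cs, flags, sp and ss -- the tail of
   * struct pt_regs -- so a partial dump must print just that slice */
  #define IRET_FRAME_OFFSET (offsetof(struct pt_regs, ip))
  #define IRET_FRAME_SIZE   (sizeof(struct pt_regs) - IRET_FRAME_OFFSET)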
Revision tags: v4.15-rc5 |
# 6454b3bd | 18-Dec-2017 | Josh Poimboeuf <[email protected]>
x86/stacktrace: Make zombie stack traces reliable
Commit:
1959a60182f4 ("x86/dumpstack: Pin the target stack when dumping it")
changed the behavior of stack traces for zombies. Before that commit, /proc/<pid>/stack reported the last execution path of the zombie before it died:
[<ffffffff8105b877>] do_exit+0x6f7/0xa80
[<ffffffff8105bc79>] do_group_exit+0x39/0xa0
[<ffffffff8105bcf0>] __wake_up_parent+0x0/0x30
[<ffffffff8152dd09>] system_call_fastpath+0x16/0x1b
[<00007fd128f9c4f9>] 0x7fd128f9c4f9
[<ffffffffffffffff>] 0xffffffffffffffff
After the commit, it just reports an empty stack trace.
The new behavior is actually probably more correct. If the stack refcount has gone down to zero, then the task has already gone through do_exit() and isn't going to run anymore. The stack could be freed at any time and is basically gone, so reporting an empty stack makes sense.
However, save_stack_trace_tsk_reliable() treats such a missing stack condition as an error. That can cause livepatch transition stalls if there are any unreaped zombies. Instead, just treat it as a reliable, empty stack.
Reported-and-tested-by: Miroslav Benes <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Fixes: af085d9084b4 ("stacktrace/x86: add function for detecting reliable stack traces") Link: http://lkml.kernel.org/r/e4b09e630e99d0c1080528f0821fc9d9dbaeea82.1513631620.git.jpoimboe@redhat.com Signed-off-by: Ingo Molnar <[email protected]>
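The resulting zombie handling is essentially:

  /*
   * If the task has no stack anymore (e.g. a zombie whose stack
   * refcount dropped to zero), report a reliable empty trace instead
   * of an error, so livepatch transitions are not stalled.
   */
  if (!try_get_task_stack(tsk))
          return 0;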
Revision tags: v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3 |
# 77072f09 | 29-Sep-2017 | Vlastimil Babka <[email protected]>
x86/stacktrace: Avoid recording save_stack_trace() wrappers
The save_stack_trace() and save_stack_trace_tsk() wrappers of __save_stack_trace() add themselves to the call stack, and thus appear in the recorded stacktraces. This is redundant and wasteful when we have limited space to record the useful part of the backtrace with e.g. page_owner functionality.
Fix this by making sure __save_stack_trace() is noinline (which matches the current gcc decision) and bumping the skip in the wrappers (save_stack_trace_tsk() only when called for the current task). This is similar to what was done for arm in 3683f44c42e9 ("ARM: stacktrace: avoid listing stacktrace functions in stacktrace") and is pending for arm64.
Also make sure that __save_stack_trace_reliable() doesn't get this problem in the future by marking it __always_inline (which matches current gcc decision), per Josh Poimboeuf.
Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Miroslav Benes <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
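The wrapper-side fix follows this pattern (sketch):

  static noinline void __save_stack_trace(struct stack_trace *trace,
                                          struct task_struct *task,
                                          struct pt_regs *regs, bool nosched);

  void save_stack_trace(struct stack_trace *trace)
  {
          trace->skip++;  /* do not record this wrapper frame itself */
          __save_stack_trace(trace, current, NULL, false);
  }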
Revision tags: v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6, v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10 |
# af085d90 | 14-Feb-2017 | Josh Poimboeuf <[email protected]>
stacktrace/x86: add function for detecting reliable stack traces
For live patching and possibly other use cases, a stack trace is only useful if it can be assured that it's completely reliable. Add a new save_stack_trace_tsk_reliable() function to achieve that.
Note that if the target task isn't the current task, and the target task is allowed to run, then it could be writing the stack while the unwinder is reading it, resulting in possible corruption. So the caller of save_stack_trace_tsk_reliable() must ensure that the task is either 'current' or inactive.
save_stack_trace_tsk_reliable() relies on the x86 unwinder's detection of pt_regs on the stack. If the pt_regs are not user-mode registers from a syscall, then they indicate an in-kernel interrupt or exception (e.g. preemption or a page fault), in which case the stack is considered unreliable due to the nature of frame pointers.
It also relies on the x86 unwinder's detection of other issues, such as:
- corrupted stack data
- stack grows the wrong way
- stack walk doesn't reach the bottom
- user didn't provide a large enough entries array
Such issues are reported by checking unwind_error() and !unwind_done().
Also add CONFIG_HAVE_RELIABLE_STACKTRACE so arch-independent code can determine at build time whether the function is implemented.
Signed-off-by: Josh Poimboeuf <[email protected]> Reviewed-by: Miroslav Benes <[email protected]> Acked-by: Ingo Molnar <[email protected]> # for the x86 changes Signed-off-by: Jiri Kosina <[email protected]>
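The new entry point, in rough form:

  /*
   * Returns 0 if the entire stack was walked reliably, a negative error
   * otherwise; the caller must ensure @tsk is 'current' or inactive.
   * Architectures advertise support via CONFIG_HAVE_RELIABLE_STACKTRACE.
   */
  int save_stack_trace_tsk_reliable(struct task_struct *tsk,
                                    struct stack_trace *trace);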
Revision tags: v4.10-rc8 |
# 68db0cf1 | 08-Feb-2017 | Ingo Molnar <[email protected]>
sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h>
We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/task_stack.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
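At this stage the placeholder header is just a pass-through, approximately:

  /* include/linux/sched/task_stack.h -- placeholder until the split */
  #ifndef _LINUX_SCHED_TASK_STACK_H
  #define _LINUX_SCHED_TASK_STACK_H

  #include <linux/sched.h>

  #endif /* _LINUX_SCHED_TASK_STACK_H */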
# b17b0153 | 08-Feb-2017 | Ingo Molnar <[email protected]>
sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>
We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/debug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1, v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7 |
# 49a612c6 | 16-Sep-2016 | Josh Poimboeuf <[email protected]>
x86/stacktrace: Convert save_stack_trace_*() to use the new unwinder
Convert save_stack_trace_*() to use the new unwinder. dump_trace() has been deprecated.
Signed-off-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Byungchul Park <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Nilay Vaish <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/815494c627d89887db0ce56ceffd58ad16ee6c21.1474045023.git.jpoimboe@redhat.com Signed-off-by: Ingo Molnar <[email protected]>
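The converted core loop looks approximately like:

  struct unwind_state state;
  unsigned long addr;

  for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
       unwind_next_frame(&state)) {
          addr = unwind_get_return_address(&state);
          if (!addr || save_stack_address(trace, addr, nosched))
                  break;
  }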
# 1959a601 | 16-Sep-2016 | Andy Lutomirski <[email protected]>
x86/dumpstack: Pin the target stack when dumping it
Specifically, pin the stack in save_stack_trace_tsk() and show_trace_log_lvl().
This will prevent a crash if the target task dies before or while dumping its stack once we start freeing task stacks early.
Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Jann Horn <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/cf0082cde65d1941a996d026f2b2cdbfaca17bfa.1474003868.git.luto@kernel.org Signed-off-by: Ingo Molnar <[email protected]>
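The pinning pattern, roughly:

  void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
  {
          /* hold a reference so the stack cannot be freed mid-walk */
          if (!try_get_task_stack(tsk))
                  return;

          __save_stack_trace(trace, tsk, NULL, true);

          put_task_stack(tsk);
  }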
# cb76c939 | 15-Sep-2016 | Josh Poimboeuf <[email protected]>
x86/dumpstack: Add get_stack_info() interface
valid_stack_ptr() is buggy: it assumes that all stacks are of size THREAD_SIZE, which is not true for exception stacks. So the walk_stack() callbacks will need to know the location of the beginning of the stack as well as the end.
Another issue is that in general the various features of a stack (type, size, next stack pointer, description string) are scattered around in various places throughout the stack dump code.
Encapsulate all that information in a single place with a new stack_info struct and a get_stack_info() interface.
Signed-off-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Byungchul Park <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Nilay Vaish <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/8164dd0db96b7e6a279fa17ae5e6dc375eecb4a9.1473905218.git.jpoimboe@redhat.com Signed-off-by: Ingo Molnar <[email protected]>
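The encapsulating structure and accessor, in rough form:

  struct stack_info {
          enum stack_type type;
          unsigned long *begin, *end, *next_sp;
  };

  /* fills @info with the type and bounds of the stack containing @stack */
  int get_stack_info(unsigned long *stack, struct task_struct *task,
                     struct stack_info *info, unsigned long *visit_mask);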
Revision tags: v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7 |
# 186f4360 | 14-Jul-2016 | Paul Gortmaker <[email protected]>
x86/kernel: Audit and remove any unnecessary uses of module.h
Historically a lot of these existed because we did not have a distinction between what was modular code and what was providing support to modules via EXPORT_SYMBOL and friends. That changed when we forked out support for the latter into the export.h file.
This means we should be able to reduce the usage of module.h in code that is obj-y Makefile or bool Kconfig. The advantage in doing so is that module.h itself sources about 15 other headers, adding significantly to what we feed cpp, and it can obscure which headers we are effectively using.
Since module.h was the source for init.h (for __init) and for export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance for the presence of either and replace as needed. Build testing revealed some implicit header usage that was fixed up accordingly.
Note that some bool/obj-y instances remain since module.h is the header for some exception table entry stuff, and for things like __init_or_module (code that is tossed when MODULES=n).
Signed-off-by: Paul Gortmaker <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
Revision tags: v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3, v4.7-rc2, v4.7-rc1, v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5 |
# 568b329a | 18-Feb-2016 | Alexei Starovoitov <[email protected]>
perf: generalize perf_callchain
. avoid walking the stack when there is no room left in the buffer
. generalize get_perf_callchain() to be called from bpf helper
Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
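The generalized helper grows explicit knobs so callers such as the bpf helper can control the walk; its signature ends up along these lines (approximate):

  struct perf_callchain_entry *
  get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
                     bool crosstask, bool add_mark);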