History log of /linux-6.15/arch/arm64/include/asm/stacktrace.h (Results 1 – 25 of 38)
Revision    Date    Author    Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1
# 7ea55715 09-Dec-2022 Ard Biesheuvel <[email protected]>

arm64: efi: Account for the EFI runtime stack in stack unwinder

The EFI runtime services run from a dedicated stack now, and so the
stack unwinder needs to be informed about this.

Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
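
For illustration, a minimal sketch of how a dedicated stack can be described to the unwinder, following the stackinfo_get_*() pattern used elsewhere in this header; the symbol name (efi_rt_stack_top) and the use of THREAD_SIZE for the stack size are assumptions here, not a quote of the patch:

```c
/*
 * Sketch: describe the dedicated EFI runtime stack to the unwinder in the
 * same style as the other stackinfo_get_*() helpers. The symbol name and
 * the stack size are assumptions for illustration.
 */
#include <linux/types.h>
#include <asm/memory.h>
#include <asm/stacktrace/common.h>

extern u64 *efi_rt_stack_top;	/* set up when the runtime stack is allocated */

static inline struct stack_info stackinfo_get_efi(void)
{
	unsigned long high = (unsigned long)efi_rt_stack_top;
	unsigned long low  = high - THREAD_SIZE;

	return (struct stack_info) {
		.low  = low,
		.high = high,
	};
}
```

The unwinder would only consult this range while the CPU is actually executing EFI runtime services.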


Revision tags: v6.1-rc8, v6.1-rc7, v6.1-rc6
# 4585a934 17-Nov-2022 Mark Rutland <[email protected]>

arm64: move on_thread_stack() to <asm/stacktrace.h>

Currently on_thread_stack() is defined in <asm/processor.h>, depending
upon definitions from <asm/stacktrace.h> despite this header not being
included. This ends up being fragile, and any user of on_thread_stack()
must include both <asm/processor.h> and <asm/stacktrace.h>.

We organised things this way due to header dependencies back in commit:

0b3e336601b82c6a ("arm64: Add support for STACKLEAK gcc plugin")

... but now that we no longer use current_top_of_stack(), and given that
stackleak includes <asm/stacktrace.h> via <linux/stackleak.h>, we no
longer need the definition to live in <asm/processor.h>.

Move on_thread_stack() to <asm/stacktrace.h>, where all its dependencies
are guaranteed to be defined. This requires having arm64's irq.c
explicitly include <asm/stacktrace.h>, and I've taken the opportunity to
sort the includes, which were slightly out of order.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
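
With the helper living next to its dependencies, its definition reduces to a one-liner in terms of on_task_stack() and the current stack pointer; a sketch close to, but not guaranteed to be, the exact macro:

```c
/* Sketch: on_thread_stack() expressed via on_task_stack(). */
#include <linux/sched.h>		/* current */
#include <asm/stack_pointer.h>		/* current_stack_pointer */
#include <asm/stacktrace.h>		/* on_task_stack() */

#define on_thread_stack() \
	(on_task_stack(current, current_stack_pointer, 1))
```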


Revision tags: v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4
# 8df13730 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: track all stack boundaries explicitly

Currently we call an on_accessible_stack() callback for each step of the
unwinder, requiring redundant work to be performed in the core of the
unwind loop (e.g. disabling preemption around accesses to per-cpu
variables containing stack boundaries). To prevent unwind loops which go
through a stack multiple times, we have to track the set of unwound
stacks, requiring a stack_type enum which needs to cater for all the
stacks of all possible callees. To prevent loops within a stack, we must
track the prior FP values.

This patch reworks the unwinder to minimize the work in the core of the
unwinder, and to remove the need for the stack_type enum. The set of
accessible stacks (and their boundaries) are determined at the start of
the unwind, and the current stack is tracked during the unwind, with
completed stacks removed from the set of accessible stacks. This makes
the boundary checks more accurate (e.g. detecting overlapped frame
records), and removes the need for separate tracking of the prior FP and
visited stacks.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
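
To illustrate the idea, a simplified sketch (field and helper names are illustrative, not the exact code): the unwind state carries an array of stack_info entries describing every stack the trace may touch, each step looks up which entry contains the new frame record, and a stack's bounds are emptied once the unwind leaves it so it can never be revisited:

```c
/*
 * Simplified sketch of explicit stack-boundary tracking. Field and helper
 * names are illustrative only.
 */
struct stack_info {
	unsigned long low;
	unsigned long high;
};

struct unwind_state {
	unsigned long fp;
	unsigned long pc;
	struct stack_info *stack;	/* stack currently being unwound */
	struct stack_info *stacks;	/* all stacks reachable by this unwind */
	int nr_stacks;
};

static struct stack_info *unwind_find_stack(struct unwind_state *state,
					    unsigned long sp,
					    unsigned long size)
{
	for (int i = 0; i < state->nr_stacks; i++) {
		struct stack_info *info = &state->stacks[i];

		if (info->low && sp >= info->low && sp + size <= info->high)
			return info;
	}
	return NULL;	/* frame record is not on any accessible stack */
}

static void unwind_consume_stack(struct unwind_state *state,
				 struct stack_info *next)
{
	if (state->stack && next != state->stack) {
		/*
		 * We are leaving the current stack; empty its bounds so a
		 * corrupted frame record cannot lead the unwind back onto it.
		 */
		*state->stack = (struct stack_info){ 0 };
	}
	state->stack = next;
}
```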


# d1f684e4 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: rework stack boundary discovery

In subsequent patches we'll want to acquire the stack boundaries
ahead-of-time, and we'll need to be able to acquire the relevant
stack_info regardless of whether we have an object that happens to be on
the stack.

This patch replaces the on_XXX_stack() helpers with stackinfo_get_XXX()
helpers, with the caller being responsible for checking whether an
object is on a relevant stack. For the moment this is moved into the
on_accessible_stack() functions, making these slightly larger;
subsequent patches will remove the on_accessible_stack() functions and
simplify the logic.

The on_irq_stack() and on_task_stack() helpers are kept as these are
used by IRQ entry sequences and stackleak respectively. As they're only
used as predicates, the stack_info pointer parameter is removed in both
cases.

As the on_accessible_stack() functions are always passed a non-NULL info
pointer, these now update info unconditionally. When updating the type
to STACK_TYPE_UNKNOWN, the low/high bounds are also modified, but as
these will not be consumed this should have no adverse effect.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
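
A hedged sketch of the resulting shape, based on the current stacktrace headers rather than a verbatim quote: stackinfo_get_task() only reports boundaries, and the remaining predicates become a bounds check layered on top:

```c
/* Sketch: boundary discovery separated from the "is this on a stack?" check. */
#include <linux/sched.h>
#include <linux/sched/task_stack.h>

struct stack_info {
	unsigned long low;
	unsigned long high;
};

static inline struct stack_info stackinfo_get_task(const struct task_struct *tsk)
{
	unsigned long low = (unsigned long)task_stack_page(tsk);
	unsigned long high = low + THREAD_SIZE;

	return (struct stack_info) { .low = low, .high = high };
}

static inline bool stackinfo_on_stack(const struct stack_info *info,
				      unsigned long sp, unsigned long size)
{
	if (!info->low)
		return false;
	/* reject out-of-range accesses and sp + size overflow */
	if (sp < info->low || sp + size < sp || sp + size > info->high)
		return false;
	return true;
}

/* Kept as a predicate for stackleak; the stack_info out-parameter is gone. */
static inline bool on_task_stack(const struct task_struct *tsk,
				 unsigned long sp, unsigned long size)
{
	struct stack_info info = stackinfo_get_task(tsk);

	return stackinfo_on_stack(&info, sp, size);
}
```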


# 75758d51 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: move SDEI stack helpers to stacktrace code

For clarity and ease of maintenance, it would be helpful for all the
stack helpers to be in the same place.

Move the SDEI stack helpers into the stacktrace code where all the other
stack helpers live.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: James Morse <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19
# 4e00532f 27-Jul-2022 Marc Zyngier <[email protected]>

KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions

Having multiple versions of on_accessible_stack() (one per unwinder)
makes it very hard to reason about what is used where due to the
complexity of the various includes, the forward declarations, and
the reliance on everything being 'inline'.

Instead, move the code back where it should be. Each unwinder
implements:

- on_accessible_stack() as well as the helpers it depends on,

- unwind()/unwind_next(), as they pass on_accessible_stack as
a parameter to unwind_next_common() (which is the only common
code here)

This hardly results in any duplication, and makes it much
easier to reason about the code.

Signed-off-by: Marc Zyngier <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Tested-by: Kalesh Singh <[email protected]>
Reviewed-by: Oliver Upton <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
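
A rough sketch of the resulting structure, with simplified signatures and illustrative names rather than the real prototypes: the shared step helper receives the unwinder-specific accessibility check as a function pointer, and each unwinder supplies that check plus a thin unwind_next() wrapper:

```c
/*
 * Sketch of the per-unwinder split (simplified signatures, illustrative
 * names): the only genuinely shared piece is the step helper, which takes
 * the unwinder's own accessibility check as a callback.
 */
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/types.h>

struct stack_info { unsigned long low, high; };
struct unwind_state { unsigned long fp, pc; };

typedef bool (*on_accessible_stack_fn)(unsigned long sp, unsigned long size,
					struct stack_info *info);

/* Shared code: step to the next frame record if it is on an accessible stack. */
static int unwind_next_common(struct unwind_state *state,
			      struct stack_info *info,
			      on_accessible_stack_fn accessible)
{
	unsigned long fp = state->fp;

	if (fp & 0x7)
		return -EINVAL;
	if (!accessible(fp, 16, info))
		return -EINVAL;

	state->fp = READ_ONCE(*(unsigned long *)fp);
	state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
	return 0;
}

/* Each unwinder supplies its own notion of which stacks are accessible ... */
static bool kernel_on_accessible_stack(unsigned long sp, unsigned long size,
				       struct stack_info *info)
{
	/* ... checking the task, IRQ, overflow and SDEI stacks here ... */
	return false;
}

/* ... and a thin unwind_next() that just passes the callback down. */
static int kernel_unwind_next(struct unwind_state *state)
{
	struct stack_info info;

	return unwind_next_common(state, &info, kernel_on_accessible_stack);
}
```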


# f51e7146 26-Jul-2022 Kalesh Singh <[email protected]>

arm64: stacktrace: Factor out common unwind()

Move unwind() to stacktrace/common.h and, as a result, the kernel's
unwind_next() to asm/stacktrace.h. This allows reusing unwind() in the
implementation of the nVHE HYP stack unwinder later in the series.

Signed-off-by: Kalesh Singh <[email protected]>
Reviewed-by: Fuad Tabba <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Tested-by: Fuad Tabba <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
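
The factored-out loop is small; roughly (a sketch of its shape, not a verbatim quote), unwind() alternates between reporting the current PC to a consume callback and stepping to the next frame:

```c
/* Sketch of the factored-out unwind() loop (simplified). */
#include <linux/types.h>

struct unwind_state { unsigned long fp, pc; };

/* Same shape as the generic stack_trace_consume_fn callback. */
typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);

/* Provided separately by each unwinder (kernel or nVHE hyp). */
int unwind_next(struct unwind_state *state);

static inline void unwind(struct unwind_state *state,
			  stack_trace_consume_fn consume_entry, void *cookie)
{
	while (1) {
		int ret;

		if (!consume_entry(cookie, state->pc))
			break;
		ret = unwind_next(state);
		if (ret < 0)
			break;
	}
}
```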


# 15a59f19 26-Jul-2022 Kalesh Singh <[email protected]>

arm64: stacktrace: Factor out on_accessible_stack_common()

Move common on_accessible_stack checks to stacktrace/common.h. This is
used in the implementation of the nVHE hypervisor unwinder later in
this series.

Signed-off-by: Kalesh Singh <[email protected]>
Reviewed-by: Fuad Tabba <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Tested-by: Fuad Tabba <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


# 6bf212c8 26-Jul-2022 Kalesh Singh <[email protected]>

arm64: stacktrace: Add shared header for common stack unwinding code

In order to reuse the arm64 stack unwinding logic for the nVHE
hypervisor stack, move the common code to a shared header
(arch/arm64/include/asm/stacktrace/common.h).

The nVHE hypervisor cannot safely link against kernel code, so we
make use of the shared header to avoid duplicated logic later in
this series.

Signed-off-by: Kalesh Singh <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Fuad Tabba <[email protected]>
Tested-by: Fuad Tabba <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]


Revision tags: v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3
# 96bb1530 13-Apr-2022 Mark Rutland <[email protected]>

arm64: stacktrace: make struct stackframe private to stacktrace.c

Now that arm64 uses arch_stack_walk() consistently, struct stackframe is
only used within stacktrace.c. To make it easier to read and maintain
this code, it would be nicer if the definition were there too.

Move the definition into stacktrace.c.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]> for the series.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v5.18-rc2, v5.18-rc1
# 0f8f8030 22-Mar-2022 Alexei Starovoitov <[email protected]>

Revert "arm64: rethook: Add arm64 rethook implementation"

This reverts commit 83acdce6894908337ca82973149d9709d28204d7.

Signed-off-by: Alexei Starovoitov <[email protected]>


Revision tags: v5.17
# 83acdce6 15-Mar-2022 Masami Hiramatsu <[email protected]>

arm64: rethook: Add arm64 rethook implementation

Add rethook arm64 implementation. Most of the code has been copied from
kretprobes on arm64.

Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
Tested-by: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/164735287344.1084943.9787335632585653418.stgit@devnote2


Revision tags: v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4
# d2d1d264 29-Nov-2021 Mark Rutland <[email protected]>

arm64: Make some stacktrace functions private

Now that open-coded stack unwinds have been converted to
arch_stack_walk(), we no longer need to expose any of unwind_frame(),
walk_stackframe(), or start_backtrace() outside of stacktrace.c.

Make those functions private to stacktrace.c, removing their prototypes
from <asm/stacktrace.h> and marking them static.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# 1e5428b2 29-Nov-2021 Mark Rutland <[email protected]>

arm64: Add comment for stack_info::kr_cur

We added stack_info::kr_cur in commit:

cd9bc2c9258816dc ("arm64: Recover kretprobe modified return address in stacktrace")

... but didn't add anything in the corresponding comment block.

For consistency, add a corresponding comment.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15
# c6d3cd32 29-Oct-2021 Mark Rutland <[email protected]>

arm64: ftrace: use HAVE_FUNCTION_GRAPH_RET_ADDR_PTR

When CONFIG_FUNCTION_GRAPH_TRACER is selected and the function graph
tracer is in use, unwind_frame() may erroneously associate a traced
function with an incorrect return address. This can happen when starting
an unwind from a pt_regs, or when unwinding across an exception
boundary.

This can be seen when recording with perf while the function graph
tracer is in use. For example:

| # echo function_graph > /sys/kernel/debug/tracing/current_tracer
| # perf record -g -e raw_syscalls:sys_enter:k /bin/true
| # perf report

... reports the callchain erroneously as:

| el0t_64_sync
| el0t_64_sync_handler
| el0_svc_common.constprop.0
| perf_callchain
| get_perf_callchain
| syscall_trace_enter
| syscall_trace_enter

... whereas when the function graph tracer is not in use, it reports:

| el0t_64_sync
| el0t_64_sync_handler
| el0_svc
| do_el0_svc
| el0_svc_common.constprop.0
| syscall_trace_enter
| syscall_trace_enter

The underlying problem is that ftrace_graph_get_ret_stack() takes an
index offset from the most recent entry added to the fgraph return
stack. We start an unwind at offset 0, and increment the offset each
time we encounter a rewritten return address (i.e. when we see
`return_to_handler`). This is broken in two cases:

1) Between creating a pt_regs and starting the unwind, function calls
may place entries on the stack, leaving an arbitrary offset which we
can only determine by performing a full unwind from the caller of the
unwind code (and relying on none of the unwind code being
instrumented).

This can result in erroneous entries being reported in a backtrace
recorded by perf or kfence when the function graph tracer is in use.
Currently show_regs() is unaffected as dump_backtrace() performs an
initial unwind.

2) When unwinding across an exception boundary (whether continuing an
unwind or starting a new unwind from regs), we currently always skip
the LR of the interrupted context. Where this was live and contained
a rewritten address, we won't consume the corresponding fgraph ret
stack entry, leaving subsequent entries off-by-one.

This can result in erroneous entries being reported in a backtrace
performed by any in-kernel unwinder when that backtrace crosses an
exception boundary, with entries after the boundary being reported
incorrectly. This includes perf, kfence, show_regs(), panic(), etc.

To fix this, we need to be able to uniquely identify each rewritten
return address such that we can map this back to the original return
address. We can use HAVE_FUNCTION_GRAPH_RET_ADDR_PTR to associate
each rewritten return address with a unique location on the stack. As
the return address is passed in the LR (and so is not guaranteed a
unique location in memory), we use the FP upon entry to the function
(i.e. the address of the caller's frame record) as the return address
pointer. Any nested call will have a different FP value as the caller
must create its own frame record and update FP to point to this.

Since ftrace_graph_ret_addr() requires the return address with the PAC
stripped, the stripping of the PAC is moved before the fixup of the
rewritten address. As we would unconditionally strip the PAC, moving
this earlier is not harmful, and we can avoid a redundant strip in the
return address fixup code.

I've tested this with the perf case above, the ftrace selftests, and
a number of ad-hoc unwinder tests. The tests all pass, and I have seen
no unexpected behaviour as a result of this change. I've tested with
pointer authentication under QEMU TCG where magic-sysrq+l correctly
recovers the original return addresses.

Note that this doesn't fix the issue of skipping a live LR at an
exception boundary, which is a more general problem and requires more
substantial rework. Were we to consume the LR in all cases this would
result in warnings where the interrupted context's LR contains
`return_to_handler`, but the FP has been altered, e.g.

| func:
| <--- ftrace entry ---> // logs FP & LR, rewrites LR
| STP FP, LR, [SP, #-16]!
| MOV FP, SP
| <--- INTERRUPT --->

... as ftrace_graph_get_ret_stack() will not find a matching entry,
triggering the WARN_ON_ONCE() in unwind_frame().

Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
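
Concretely, the unwinder-side fixup ends up looking roughly like the sketch below (the helper name is invented here, and this is not a verbatim quote): when a frame's PC is the rewritten return_to_handler address, the address of the caller's frame record, i.e. the FP value just loaded, is used as the unique return address pointer to look up the original PC:

```c
/*
 * Sketch: recovering the real return address when the function graph
 * tracer has rewritten it. ftrace_graph_ret_addr() matches the rewritten
 * entry by its 'retp' pointer; here that is the address of the caller's
 * frame record (the FP we just stepped to), which is unique per call.
 */
#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/ftrace.h>
#include <linux/sched.h>

static int unwind_fixup_graph_pc(struct task_struct *tsk,
				 unsigned long *pc, unsigned long fp)
{
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	if (tsk->ret_stack && *pc == (unsigned long)return_to_handler) {
		unsigned long orig_pc;

		orig_pc = ftrace_graph_ret_addr(tsk, NULL, *pc, (void *)fp);
		if (WARN_ON_ONCE(*pc == orig_pc))
			return -EINVAL;
		*pc = orig_pc;
	}
#endif
	return 0;
}
```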


Revision tags: v5.15-rc7
# cd9bc2c9 21-Oct-2021 Masami Hiramatsu <[email protected]>

arm64: Recover kretprobe modified return address in stacktrace

Since the kretprobe replaces the function return address with
the kretprobe_trampoline on the stack, the stack unwinder shows it
instead of the correct return address.

This checks whether the next return address is the
__kretprobe_trampoline(), and if so, tries to find the correct
return address from the kretprobe instance list. For this purpose
this adds a 'kr_cur' loop cursor to memorize the current kretprobe
instance.

With this fix, now arm64 can enable
CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE, and pass the
kprobe self tests.

Signed-off-by: Masami Hiramatsu <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
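
In sketch form (simplified, with an illustrative structure, not a verbatim quote of the patch): after each unwind step, if the recovered PC is the kretprobe trampoline, the saved FP plus the kr_cur cursor are used to look the real return address up in the task's kretprobe instance list:

```c
/*
 * Sketch: mapping the kretprobe trampoline back to the real return
 * address. 'kr_cur' is a cursor into the task's kretprobe instance list
 * so that repeated hits walk the list in order.
 */
#include <linux/kprobes.h>
#include <linux/sched.h>

struct frame_sketch {
	unsigned long fp;
	unsigned long pc;
#ifdef CONFIG_KRETPROBES
	struct llist_node *kr_cur;	/* kretprobe instance cursor */
#endif
};

static void unwind_fixup_kretprobe(struct task_struct *tsk,
				   struct frame_sketch *frame)
{
#ifdef CONFIG_KRETPROBES
	if (is_kretprobe_trampoline(frame->pc))
		frame->pc = kretprobe_find_ret_addr(tsk, (void *)frame->fp,
						    &frame->kr_cur);
#endif
}
```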


Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5
# 8d5903f4 02-Aug-2021 Mark Rutland <[email protected]>

arm64: stacktrace: fix comment

Due to a copy-paste error, we describe struct stackframe::pc as a
snapshot of the `fp` field rather than the `lr` field.

Fix the comment.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


Revision tags: v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4
# 76734d26 26-May-2021 Peter Collingbourne <[email protected]>

arm64: Change the on_*stack functions to take a size argument

unwind_frame() was previously implicitly checking that the frame
record is in bounds of the stack by enforcing that FP is both aligned
to 16 and in bounds of the stack. Once the FP alignment requirement
is relaxed to 8 this will not be sufficient because it does not
account for the case where FP points to 8 bytes before the end of the
stack.

Make the check explicit by changing the on_*stack functions to take a
size argument and adjusting the callers to pass the appropriate sizes.

Signed-off-by: Peter Collingbourne <[email protected]>
Link: https://linux-review.googlesource.com/id/Ib7a3eb3eea41b0687ffaba045ceb2012d077d8b4
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
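
The check then becomes explicit about the extent of the object being tested; a sketch of the shape of the helper after this change (the real version also fills in a stack_info out-parameter, omitted here):

```c
/*
 * Sketch: a stack-range check that takes the size of the object being
 * tested, so an FP only 8 bytes below the stack's end is rejected when a
 * 16-byte frame record is expected.
 */
#include <linux/types.h>

static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high)
{
	if (!low)
		return false;

	/* Reject out-of-range accesses and sp + size overflow. */
	if (sp < low || sp + size < sp || sp + size > high)
		return false;

	return true;
}

/* Callers pass the size of what they are about to access, e.g.:	*/
/*	on_stack(fp, 16, stack_low, stack_high)   -- a frame record	*/
/*	on_stack(sp, 1,  stack_low, stack_high)   -- a single byte	*/
```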


Revision tags: v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4
# b07f3499 19-Mar-2021 Mark Brown <[email protected]>

arm64: stacktrace: Move start_backtrace() out of the header

Currently start_backtrace() is a static inline function in the header.
Since it really shouldn't be performance critical enough that we
actually need to have it inlined, move it into a C file; this will save
anyone else scratching their head about why it is defined in the header.
As far as I can see it's only there because it was factored out of the
various callers.

Signed-off-by: Mark Brown <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6
# baa2cd41 14-Sep-2020 Mark Brown <[email protected]>

arm64: stacktrace: Make stack walk callback consistent with generic code

As with the generic arch_stack_walk() code the arm64 stack walk code takes
a callback that is called per stack frame. Currently the arm64 code always
passes a struct stackframe to the callback and the generic code just passes
the pc; however, none of the users ever reference anything in the struct
other than the pc value. The arm64 code also uses a return type of int
while the generic code uses a return type of bool though in both cases the
return value is a boolean value and the sense is inverted between the two.

In order to reduce code duplication when arm64 is converted to use
arch_stack_walk() change the signature and return sense of the arm64
specific callback to match that of the generic code.

Signed-off-by: Mark Brown <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
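
A before/after sketch, with the old signature paraphrased and the example caller invented for illustration: the arm64-specific callback adopts the generic consume-entry shape, returning true to continue the walk:

```c
/* Sketch: aligning the arm64 walk callback with the generic one. */
#include <linux/types.h>

/* Before: arm64-specific, struct-based, returns non-zero to stop. */
struct stackframe;
typedef int  (*arm64_walk_fn_old)(struct stackframe *frame, void *data);

/* After: matches the generic stack_trace_consume_fn. */
typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);

/* A caller that only wants the first PC now reads naturally: */
static bool save_first_pc(void *cookie, unsigned long pc)
{
	*(unsigned long *)cookie = pc;
	return false;		/* false == stop walking */
}
```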


Revision tags: v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1
# c7689837 09-Jun-2020 Dmitry Safonov <[email protected]>

arm64: add loglvl to dump_backtrace()

Currently, the log level of show_stack() depends on the platform
implementation. This creates situations where the headers are printed
with a lower or higher log level than the stacktrace itself (depending
on the platform or user).

Furthermore, it forces the logic decision from the user onto the
architecture side. As a result, some users such as sysrq/kdb/etc. resort
to tricks, temporarily raising console_loglevel while printing their
messages. This not only may print unwanted messages from other CPUs, but
may also omit printing entirely in the unlucky case where the printk()
was deferred.

Introducing a log-level parameter and KERN_UNSUPPRESSED [1] seems an
easier approach than introducing more printk buffers. It will also
consolidate the printing of the headers with the stacktrace itself.

Add log level argument to dump_backtrace() as a preparation for
introducing show_stack_loglvl().

[1]: https://lore.kernel.org/lkml/[email protected]/T/#u

Signed-off-by: Dmitry Safonov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Russell King <[email protected]>
Cc: Will Deacon <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
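
The mechanism is just an extra string argument threaded through and prefixed onto each printk(); a hedged sketch (the real prototypes may differ slightly, and the function name here is illustrative):

```c
/* Sketch: a log-level string threaded into the backtrace printer. */
#include <linux/printk.h>

struct pt_regs;
struct task_struct;

static void dump_backtrace_sketch(struct pt_regs *regs,
				  struct task_struct *tsk,
				  const char *loglvl)
{
	printk("%sCall trace:\n", loglvl);
	/* ... each subsequent frame line is printed with the same prefix ... */
}

/* A caller chooses the level, e.g. dump_backtrace_sketch(NULL, tsk, KERN_DEBUG); */
```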


Revision tags: v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5
# bd4298c7 08-May-2020 Yunfeng Ye <[email protected]>

arm64: stacktrace: Factor out some common code into on_stack()

There is some common code for stack checking, so factor it out into
the function on_stack().

No functional change.

Signed-off-by: Yunfeng Ye <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


Revision tags: v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2
# 592700f0 02-Jul-2019 Mark Rutland <[email protected]>

arm64: stacktrace: Better handle corrupted stacks

The arm64 stacktrace code is careful to only dereference frame records
in valid stack ranges, ensuring that a corrupted frame record won't
result in a faulting access.

However, it's still possible for corrupt frame records to result in
infinite loops in the stacktrace code, which is also undesirable.

This patch ensures that we complete a stacktrace in finite time, by
keeping track of which stacks we have already completed unwinding, and
verifying that if the next frame record is on the same stack, it is at a
higher address.

As this has turned out to be particularly subtle, comments are added to
explain the procedure.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: James Morse <[email protected]>
Tested-by: James Morse <[email protected]>
Acked-by: Dave Martin <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: Tengfei Fan <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
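
The termination argument can be sketched as follows (illustrative field and type names, not the exact code): record which stack types have already been fully unwound in a bitmap, and within the current stack require each new frame record to sit at a strictly higher address than the previous one:

```c
/*
 * Sketch: guaranteeing forward progress in the presence of corrupted
 * frame records. Field and type names are illustrative.
 */
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/types.h>

enum stack_type_sketch {
	STACK_TYPE_TASK,
	STACK_TYPE_IRQ,
	STACK_TYPE_OVERFLOW,
	__NR_STACK_TYPES
};

struct frame_sketch {
	unsigned long fp;
	unsigned long prev_fp;
	enum stack_type_sketch prev_type;
	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
};

static int check_frame_progress(struct frame_sketch *frame,
				enum stack_type_sketch type,
				unsigned long new_fp)
{
	/* Never return to a stack we have already finished unwinding. */
	if (test_bit(type, frame->stacks_done))
		return -EINVAL;

	if (type == frame->prev_type) {
		/* Same stack: frame records must strictly ascend. */
		if (new_fp <= frame->prev_fp)
			return -EINVAL;
	} else {
		/* Stack transition: the old stack is now done for good. */
		__set_bit(frame->prev_type, frame->stacks_done);
	}

	frame->prev_fp = new_fp;
	frame->prev_type = type;
	return 0;
}
```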


# f3dcbe67 02-Jul-2019 Dave Martin <[email protected]>

arm64: stacktrace: Factor out backtrace initialisation

Some common code is required by each stacktrace user to initialise
struct stackframe before the first call to unwind_frame().

In preparation for adding to the common code, this patch factors it
out into a separate function start_backtrace(), and modifies the
stacktrace callers appropriately.

No functional change.

Signed-off-by: Dave Martin <[email protected]>
[Mark: drop tsk argument, update more callsites]
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: James Morse <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
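
The factored-out initialiser is tiny; a sketch of its shape and a typical call site (simplified names, not a verbatim quote):

```c
/* Sketch: one place to initialise a struct stackframe before unwinding. */
struct stackframe_sketch {
	unsigned long fp;
	unsigned long pc;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	int graph;
#endif
};

static inline void start_backtrace_sketch(struct stackframe_sketch *frame,
					  unsigned long fp, unsigned long pc)
{
	frame->fp = fp;
	frame->pc = pc;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	frame->graph = 0;
#endif
}

/*
 * Typical caller, e.g. tracing a blocked task:
 *
 *	start_backtrace_sketch(&frame, thread_saved_fp(tsk),
 *			       thread_saved_pc(tsk));
 *	walk_stackframe(tsk, &frame, fn, data);
 */
```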


# 8caa6e2b 02-Jul-2019 Dave Martin <[email protected]>

arm64: stacktrace: Constify stacktrace.h functions

on_accessible_stack() and on_task_stack() shouldn't (and don't)
modify their task argument, so it can be const.

This patch adds the appropriate modifiers. Whitespace violations in the
parameter lists are fixed at the same time.

No functional change.

Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Dave Martin <[email protected]>
[Mark: fixup const location, whitespace]
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
