History log of /linux-6.15/arch/arm64/kernel/stacktrace.c (Results 1 – 25 of 102)
Revision    Date    Author    Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3
# 65ac33be 11-Dec-2024 Mark Rutland <[email protected]>

arm64: stacktrace: Don't WARN when unwinding other tasks

The arm64 stacktrace code has a few error conditions where a
WARN_ON_ONCE() is triggered before the stacktrace is terminated and an
error is returned to the caller. The conditions shouldn't be triggered
when unwinding the current task, but it is possible to trigger these
when unwinding another task which is not blocked, as the stack of that
task is concurrently modified. Kent reports that these warnings can be
triggered while running filesystem tests on bcachefs, which calls the
stacktrace code directly.

To produce a meaningful stacktrace of another task, the task in question
should be blocked, but the stacktrace code is expected to be robust to
cases where it is not blocked. Note that this is purely about not
unduly scaring the user and/or crashing the kernel; stacktraces in such
cases are meaningless and may leak kernel secrets from the stack of the
task being unwound.

Ideally we'd pin the task in a blocked state during the unwind, as we do
for /proc/${PID}/wchan since commit:

42a20f86dc19f928 ("sched: Add wrapper for get_wchan() to keep task blocked")

... but a bunch of places don't do that, notably /proc/${PID}/stack,
where we don't pin the task in a blocked state, but do restrict the
output to privileged users since commit:

f8a00cef17206ecd ("proc: restrict kernel stack dumps to root")

... and so it's possible to trigger these warnings accidentally, e.g. by
reading /proc/*/stack (as root):

| for n in $(seq 1 10); do
| while true; do cat /proc/*/stack > /dev/null 2>&1; done &
| done
| ------------[ cut here ]------------
| WARNING: CPU: 3 PID: 166 at arch/arm64/kernel/stacktrace.c:207 arch_stack_walk+0x1c8/0x370
| Modules linked in:
| CPU: 3 UID: 0 PID: 166 Comm: cat Not tainted 6.13.0-rc2-00003-g3dafa7a7925d #2
| Hardware name: linux,dummy-virt (DT)
| pstate: 81400005 (Nzcv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
| pc : arch_stack_walk+0x1c8/0x370
| lr : arch_stack_walk+0x1b0/0x370
| sp : ffff800080773890
| x29: ffff800080773930 x28: fff0000005c44500 x27: fff00000058fa038
| x26: 000000007ffff000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffffa35a8d9600ec x22: 0000000000000000 x21: fff00000043a33c0
| x20: ffff800080773970 x19: ffffa35a8d960168 x18: 0000000000000000
| x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
| x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
| x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
| x8 : ffff8000807738e0 x7 : ffff8000806e3800 x6 : ffff8000806e3818
| x5 : ffff800080773920 x4 : ffff8000806e4000 x3 : ffff8000807738e0
| x2 : 0000000000000018 x1 : ffff8000806e3800 x0 : 0000000000000000
| Call trace:
| arch_stack_walk+0x1c8/0x370 (P)
| stack_trace_save_tsk+0x8c/0x108
| proc_pid_stack+0xb0/0x134
| proc_single_show+0x60/0x120
| seq_read_iter+0x104/0x438
| seq_read+0xf8/0x140
| vfs_read+0xc4/0x31c
| ksys_read+0x70/0x108
| __arm64_sys_read+0x1c/0x28
| invoke_syscall+0x48/0x104
| el0_svc_common.constprop.0+0x40/0xe0
| do_el0_svc+0x1c/0x28
| el0_svc+0x30/0xcc
| el0t_64_sync_handler+0x10c/0x138
| el0t_64_sync+0x198/0x19c
| ---[ end trace 0000000000000000 ]---

Fix this by only warning when unwinding the current task. When unwinding
another task the error conditions will be handled by returning an error
without producing a warning.
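
Schematically, the check becomes something like the following (a rough
illustration only; the exact helper and field names in the kernel may
differ):

  /* Illustrative: only warn when the unwind of the current task goes wrong */
  WARN_ON_ONCE(state->task == current);
  return -EINVAL;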

The two warnings in kunwind_next_frame_record_meta() were added recently
as part of commit:

c2c6b27b5aa14fa2 ("arm64: stacktrace: unwind exception boundaries")

The warning when recovering the fgraph return address has changed form
many times, but was originally introduced back in commit:

9f416319f40cd857 ("arm64: fix unwind_frame() for filtered out fn for function graph tracing")

Fixes: c2c6b27b5aa1 ("arm64: stacktrace: unwind exception boundaries")
Fixes: 9f416319f40c ("arm64: fix unwind_frame() for filtered out fn for function graph tracing")
Signed-off-by: Mark Rutland <[email protected]>
Reported-by: Kent Overstreet <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# 32ed1205 11-Dec-2024 Mark Rutland <[email protected]>

arm64: stacktrace: Skip reporting LR at exception boundaries

Aishwarya reports that warnings are sometimes seen when running the
ftrace kselftests, e.g.

| WARNING: CPU: 5 PID: 2066 at arch/arm64/kernel/stacktrace.c:141 arch_stack_walk+0x4a0/0x4c0
| Modules linked in:
| CPU: 5 UID: 0 PID: 2066 Comm: ftracetest Not tainted 6.13.0-rc2 #2
| Hardware name: linux,dummy-virt (DT)
| pstate: 604000c5 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : arch_stack_walk+0x4a0/0x4c0
| lr : arch_stack_walk+0x248/0x4c0
| sp : ffff800083643d20
| x29: ffff800083643dd0 x28: ffff00007b891400 x27: ffff00007b891928
| x26: 0000000000000001 x25: 00000000000000c0 x24: ffff800082f39d80
| x23: ffff80008003ee8c x22: ffff80008004baa8 x21: ffff8000800533e0
| x20: ffff800083643e10 x19: ffff80008003eec8 x18: 0000000000000000
| x17: 0000000000000000 x16: ffff800083640000 x15: 0000000000000000
| x14: 02a37a802bbb8a92 x13: 00000000000001a9 x12: 0000000000000001
| x11: ffff800082ffad60 x10: ffff800083643d20 x9 : ffff80008003eed0
| x8 : ffff80008004baa8 x7 : ffff800086f2be80 x6 : ffff0000057cf000
| x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffff800086f2b690
| x2 : ffff80008004baa8 x1 : ffff80008004baa8 x0 : ffff80008004baa8
| Call trace:
| arch_stack_walk+0x4a0/0x4c0 (P)
| arch_stack_walk+0x248/0x4c0 (L)
| profile_pc+0x44/0x80
| profile_tick+0x50/0x80 (F)
| tick_nohz_handler+0xcc/0x160 (F)
| __hrtimer_run_queues+0x2ac/0x340 (F)
| hrtimer_interrupt+0xf4/0x268 (F)
| arch_timer_handler_virt+0x34/0x60 (F)
| handle_percpu_devid_irq+0x88/0x220 (F)
| generic_handle_domain_irq+0x34/0x60 (F)
| gic_handle_irq+0x54/0x140 (F)
| call_on_irq_stack+0x24/0x58 (F)
| do_interrupt_handler+0x88/0x98
| el1_interrupt+0x34/0x68 (F)
| el1h_64_irq_handler+0x18/0x28
| el1h_64_irq+0x6c/0x70
| queued_spin_lock_slowpath+0x78/0x460 (P)

The warning in question is:

WARN_ON_ONCE(state->common.pc == orig_pc)

... in kunwind_recover_return_address(), which is triggered when
return_to_handler() is encountered in the trace, but
ftrace_graph_ret_addr() cannot find a corresponding original return
address on the fgraph return stack.

This happens because the stacktrace code encounters an exception
boundary where the LR was not live at the time of the exception, but the
LR happens to contain return_to_handler(); either because the task
recently returned there, or due to unfortunate usage of the LR as a
scratch register. In such cases attempts to recover the return address
via ftrace_graph_ret_addr() may fail, triggering the WARN_ON_ONCE()
above and aborting the unwind (hence the stacktrace terminating after
reporting the PC at the time of the exception).

Handling unreliable LR values in these cases is likely to require some
larger rework, so for the moment avoid this problem by restoring the old
behaviour of skipping the LR at exception boundaries, which the
stacktrace code did prior to commit:

c2c6b27b5aa14fa2 ("arm64: stacktrace: unwind exception boundaries")

This commit is effectively a partial revert, keeping the structures and
logic to explicitly identify exception boundaries while still skipping
reporting of the LR. The logic to explicitly identify exception
boundaries is still useful for general robustness and as a building
block for future support for RELIABLE_STACKTRACE.

Fixes: c2c6b27b5aa1 ("arm64: stacktrace: unwind exception boundaries")
Signed-off-by: Mark Rutland <[email protected]>
Reported-by: Aishwarya TCV <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4
# c2c6b27b 17-Oct-2024 Mark Rutland <[email protected]>

arm64: stacktrace: unwind exception boundaries

When arm64's stack unwinder encounters an exception boundary, it uses
the pt_regs::stackframe created by the entry code, which has a copy of
the PC and FP at the time the exception was taken. The unwinder doesn't
know anything about pt_regs, and reports the PC from the stackframe, but
does not report the LR.

The LR is only guaranteed to contain the return address at function call
boundaries, and can be used as a scratch register at other times, so the
LR at an exception boundary may or may not be a legitimate return
address. It would be useful to report the LR value regardless, as it can
be helpful when debugging, and in future it will be helpful for reliable
stacktrace support.

This patch changes the way we unwind across exception boundaries,
allowing both the PC and LR to be reported. The entry code creates a
frame_record_meta structure embedded within pt_regs, which the unwinder
uses to find the pt_regs. The unwinder can then extract pt_regs::pc and
pt_regs::lr as two separate unwind steps before continuing with a
regular walk of frame records.
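
As a rough sketch of the structure involved (field names and the exact
layout are approximate, inferred from the description above):

  /* Embedded in pt_regs by the entry code; lets the unwinder find pt_regs */
  struct frame_record_meta {
          struct frame_record record; /* the {fp, lr} pair */
          u64 type;                   /* e.g. "none" vs "pt_regs boundary" */
  };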

When a PC is unwound from pt_regs::lr, dump_backtrace() will log this
with an "L" marker so that it can be identified easily. For example,
an unwind across an exception boundary will appear as follows:

| el1h_64_irq+0x6c/0x70
| _raw_spin_unlock_irqrestore+0x10/0x60 (P)
| __aarch64_insn_write+0x6c/0x90 (L)
| aarch64_insn_patch_text_nosync+0x28/0x80

... with a (P) entry for pt_regs::pc, and an (L) entry for pt_regs::lr.

Note that the LR may be stale at the point of the exception, for example,
shortly after a return:

| el1h_64_irq+0x6c/0x70
| default_idle_call+0x34/0x180 (P)
| default_idle_call+0x28/0x180 (L)
| do_idle+0x204/0x268

... where the LR points a few instructions before the current PC.

This plays nicely with all the other unwind metadata tracking. With the
ftrace_graph profiler enabled globally, and kretprobes installed on
generic_handle_domain_irq() and do_interrupt_handler(), a backtrace triggered
by magic-sysrq + L reports:

| Call trace:
| show_stack+0x20/0x40 (CF)
| dump_stack_lvl+0x60/0x80 (F)
| dump_stack+0x18/0x28
| nmi_cpu_backtrace+0xfc/0x140
| nmi_trigger_cpumask_backtrace+0x1c8/0x200
| arch_trigger_cpumask_backtrace+0x20/0x40
| sysrq_handle_showallcpus+0x24/0x38 (F)
| __handle_sysrq+0xa8/0x1b0 (F)
| handle_sysrq+0x38/0x50 (F)
| pl011_int+0x460/0x5a8 (F)
| __handle_irq_event_percpu+0x60/0x220 (F)
| handle_irq_event+0x54/0xc0 (F)
| handle_fasteoi_irq+0xa8/0x1d0 (F)
| generic_handle_domain_irq+0x34/0x58 (F)
| gic_handle_irq+0x54/0x140 (FK)
| call_on_irq_stack+0x24/0x58 (F)
| do_interrupt_handler+0x88/0xa0
| el1_interrupt+0x34/0x68 (FK)
| el1h_64_irq_handler+0x18/0x28
| el1h_64_irq+0x6c/0x70
| default_idle_call+0x34/0x180 (P)
| default_idle_call+0x28/0x180 (L)
| do_idle+0x204/0x268
| cpu_startup_entry+0x3c/0x50 (F)
| rest_init+0xe4/0xf0
| start_kernel+0x744/0x750
| __primary_switched+0x88/0x98

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# 8094df1c 17-Oct-2024 Mark Rutland <[email protected]>

arm64: stacktrace: report recovered PCs

When analysing a stacktrace it can be useful to know whether an unwound
PC has been rewritten by fgraph or kretprobes, as in some situations
these may be suspect or be known to be unreliable.

This patch adds flags to track when an unwind entry has recovered the PC
from fgraph and/or kretprobes, and updates dump_backtrace() to log when
this is the case.

The flags recorded are:

"F" - the PC was recovered from fgraph
"K" - the PC was recovered from kretprobes

These flags are recorded and logged in addition to the original source
of the unwound PC.
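
Conceptually, the unwinder carries a small set of per-entry flags next to
the recovered PC (illustrative names only):

  #define KUNWIND_RECOVERED_FGRAPH    BIT(0)  /* logged as "F" */
  #define KUNWIND_RECOVERED_KRETPROBE BIT(1)  /* logged as "K" */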

For example, with the ftrace_graph profiler enabled globally, and
kretprobes installed on generic_handle_domain_irq() and
do_interrupt_handler(), a backtrace triggered by magic-sysrq + L
reports:

| Call trace:
| show_stack+0x20/0x40 (CF)
| dump_stack_lvl+0x60/0x80 (F)
| dump_stack+0x18/0x28
| nmi_cpu_backtrace+0xfc/0x140
| nmi_trigger_cpumask_backtrace+0x1c8/0x200
| arch_trigger_cpumask_backtrace+0x20/0x40
| sysrq_handle_showallcpus+0x24/0x38 (F)
| __handle_sysrq+0xa8/0x1b0 (F)
| handle_sysrq+0x38/0x50 (F)
| pl011_int+0x460/0x5a8 (F)
| __handle_irq_event_percpu+0x60/0x220 (F)
| handle_irq_event+0x54/0xc0 (F)
| handle_fasteoi_irq+0xa8/0x1d0 (F)
| generic_handle_domain_irq+0x34/0x58 (F)
| gic_handle_irq+0x54/0x140 (FK)
| call_on_irq_stack+0x24/0x58 (F)
| do_interrupt_handler+0x88/0xa0
| el1_interrupt+0x34/0x68 (FK)
| el1h_64_irq_handler+0x18/0x28
| el1h_64_irq+0x64/0x68
| default_idle_call+0x34/0x180
| do_idle+0x204/0x268
| cpu_startup_entry+0x40/0x50 (F)
| rest_init+0xe4/0xf0
| start_kernel+0x744/0x750
| __primary_switched+0x80/0x90

Note that as these flags are reported next to the recovered PC value,
they appear on the callers of instrumented functions. For example
gic_handle_irq() has a "K" marker because generic_handle_domain_irq()
was instrumented with kretprobes and had its return address rewritten.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# bdf8eafb 17-Oct-2024 Mark Rutland <[email protected]>

arm64: stacktrace: report source of unwind data

When analysing a stacktrace it can be useful to know where an unwound PC
came from, as in some situations certain sources may be suspect or known
to be unreliable. In future it would also be useful to track this so
that certain unwind steps can be performed in a stateful manner. For
example when unwinding across an exception boundary, we'd ideally unwind
pt_regs::pc, then pt_regs::lr, then the next frame record.

This patch adds an enumerated set of unwind sources, tracks this during
the unwind, and updates dump_backtrace() to log these for interesting
unwind steps.

The interesting sources recorded are:

"C" - the PC came from the caller of an unwind function.
"T" - the PC came from thread_saved_pc() for a blocked task.
"P" - the PC came from a pt_regs::pc.
"U" - the PC came from an unknown source (indicates an unwinder error).

... with nothing recorded when the PC came from a frame_record::pc as
this is the vastly common case and logging this would make it difficult
to spot the more interesting cases.
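
A sketch of the enumerated sources (constant names are approximate):

  enum kunwind_source {
          KUNWIND_SOURCE_FRAME,   /* frame_record::pc; not logged */
          KUNWIND_SOURCE_CALLER,  /* "C" */
          KUNWIND_SOURCE_TASK,    /* "T" */
          KUNWIND_SOURCE_REGS_PC, /* "P" */
          KUNWIND_SOURCE_UNKNOWN, /* "U" */
  };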

For example, when triggering a backtrace via magic-sysrq + L, the CPU
handling the sysrq will have a backtrace whose first element is the
caller (C) of dump_backtrace():

| Call trace:
| show_stack+0x18/0x30 (C)
| dump_stack_lvl+0x60/0x80
| dump_stack+0x18/0x24
| nmi_cpu_backtrace+0xfc/0x140
| ...

... and other CPUs will have a backtrace whose first element is their
pt_regs::pc (P) at the instant the backtrace IPI was taken:

| Call trace:
| _raw_spin_unlock_irqrestore+0x8/0x50 (P)
| wake_up_process+0x18/0x24
| process_timeout+0x14/0x20
| call_timer_fn.isra.0+0x24/0x80
| ...

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# b7794795 17-Oct-2024 Mark Rutland <[email protected]>

arm64: stacktrace: move dump_backtrace() to kunwind_stack_walk()

Currently dump_backtrace() can only print the PC value at each step of
the unwind, as this is all the information that arch_stack_walk()
passes to the dump_backtrace_entry() callback.

In future we'd like to print some additional information, such as the
origin of entries (e.g. PC, LR, FP) and/or the reliability thereof.

In preparation for doing so, this patch moves dump_backtrace() over to
kunwind_stack_walk(), which passes the full kunwind_state to the
callback.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# 886c2b0b 17-Oct-2024 Mark Rutland <[email protected]>

arm64: use a common struct frame_record

Currently the signal handling code has its own struct frame_record,
the definition of struct pt_regs open-codes a frame record as an array,
and the kernel unwinder hard-codes frame record offsets.

Move to a common struct frame_record that can be used throughout the
kernel.
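
An AAPCS64 frame record is a pair of saved frame-pointer and link-register
values, so the common structure is essentially (sketch):

  struct frame_record {
          u64 fp; /* pointer to the caller's frame record */
          u64 lr; /* return address */
  };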

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5
# c060f932 18-Jun-2024 Puranjay Mohan <[email protected]>

arm64: stacktrace: fix the usage of ftrace_graph_ret_addr()

ftrace_graph_ret_addr() takes an 'idx' integer pointer that is used to
optimize the stack unwinding process. arm64 currently passes `NULL` for
this parameter, which stops it from utilizing these optimizations.

Further, the current code for ftrace_graph_ret_addr() will simply return
the passed-in return address when this pointer is NULL, which breaks this
usage.

Pass a valid integer pointer to ftrace_graph_ret_addr() similar to
x86_64's stack unwinder.
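
Schematically, the unwinder keeps a per-unwind index and hands it to
ftrace_graph_ret_addr() instead of NULL (a sketch; the state/field names
here are illustrative):

  /* Illustrative: graph_idx lives in the unwinder's state and starts at 0 */
  state->common.pc = ftrace_graph_ret_addr(state->task, &state->graph_idx,
                                           state->common.pc,
                                           (void *)state->common.fp);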

Signed-off-by: Puranjay Mohan <[email protected]>
Fixes: 29c1c24a2707 ("function_graph: Fix up ftrace_graph_ret_addr()")
Acked-by: Steven Rostedt (Google) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


Revision tags: v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7
# 410e471f 19-Dec-2023 chenqiwu <[email protected]>

arm64: Add USER_STACKTRACE support

Currently, userstacktrace is unsupported for the ftrace and uprobe
tracers on arm64. This patch uses the perf_callchain_user() code as a
blueprint to implement arch_stack_walk_user(), which adds userstacktrace
support on arm64.

Meanwhile, we can use arch_stack_walk_user() to simplify the
implementation of perf_callchain_user().

This patch has been tested with the ftrace, uprobe and perf tracers
profiling userstacktrace cases.
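
The generic hook being implemented walks AAPCS64 frame records in user
memory; a much-simplified sketch (error handling, compat support and PAC
stripping omitted; the user-memory accessor used here is illustrative):

  void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
                            const struct pt_regs *regs)
  {
          unsigned long fp = regs->regs[29];

          if (!consume_entry(cookie, regs->pc))
                  return;

          while (fp) {
                  struct { unsigned long fp, lr; } record;

                  if (copy_from_user_nofault(&record, (void __user *)fp,
                                             sizeof(record)))
                          break;
                  if (!record.lr || !consume_entry(cookie, record.lr))
                          break;
                  fp = record.fp;
          }
  }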

Tested-by: chenqiwu <[email protected]>
Signed-off-by: chenqiwu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


# 2c79bd34 29-Feb-2024 Puranjay Mohan <[email protected]>

arm64: prohibit probing on arch_kunwind_consume_entry()

Mark arch_kunwind_consume_entry() as __always_inline, otherwise the
compiler might not inline it, allowing probes to be attached to it.

Without this, just probing arch_kunwind_consume_entry() via
<tracefs>/kprobe_events will crash the kernel on arm64.

The crash can be reproduced using the following compiler and kernel
combination:
clang version 19.0.0git (https://github.com/llvm/llvm-project.git d68d29516102252f6bf6dc23fb22cef144ca1cb3)
commit 87adedeba51a ("Merge tag 'net-6.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")

[root@localhost ~]# echo 'p arch_kunwind_consume_entry' > /sys/kernel/debug/tracing/kprobe_events
[root@localhost ~]# echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable

Modules linked in: aes_ce_blk aes_ce_cipher ghash_ce sha2_ce virtio_net sha256_arm64 sha1_ce arm_smccc_trng net_failover failover virtio_mmio uio_pdrv_genirq uio sch_fq_codel dm_mod dax configfs
CPU: 3 PID: 1405 Comm: bash Not tainted 6.8.0-rc6+ #14
Hardware name: linux,dummy-virt (DT)
pstate: 604003c5 (nZCv DAIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : kprobe_breakpoint_handler+0x17c/0x258
lr : kprobe_breakpoint_handler+0x17c/0x258
sp : ffff800085d6ab60
x29: ffff800085d6ab60 x28: ffff0000066f0040 x27: ffff0000066f0b20
x26: ffff800081fa7b0c x25: 0000000000000002 x24: ffff00000b29bd18
x23: ffff00007904c590 x22: ffff800081fa6590 x21: ffff800081fa6588
x20: ffff00000b29bd18 x19: ffff800085d6ac40 x18: 0000000000000079
x17: 0000000000000001 x16: ffffffffffffffff x15: 0000000000000004
x14: ffff80008277a940 x13: 0000000000000003 x12: 0000000000000003
x11: 00000000fffeffff x10: c0000000fffeffff x9 : aa95616fdf80cc00
x8 : aa95616fdf80cc00 x7 : 205d343137373231 x6 : ffff800080fb48ec
x5 : 0000000000000000 x4 : 0000000000000001 x3 : 0000000000000000
x2 : 0000000000000000 x1 : ffff800085d6a910 x0 : 0000000000000079
Call trace:
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_kunwind_consume_entry, .offset = 0, .addr = arch_kunwind_consume_entry+0x0/0x40
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_kunwind_consume_entry, .offset = 0, .addr = arch_kunwind_consume_entry+0x0/0x40
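
The fix itself is small; roughly (the function body below is approximate):

  /* Force inlining so no standalone, probe-able symbol is ever emitted */
  static __always_inline bool
  arch_kunwind_consume_entry(const struct kunwind_state *state, void *cookie)
  {
          struct kunwind_consume_entry_data *data = cookie;

          return data->consume_entry(data->cookie, state->common.pc);
  }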

Fixes: 1aba06e7b2b4 ("arm64: stacktrace: factor out kunwind_stack_walk()")
Signed-off-by: Puranjay Mohan <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


# e74cb1b4 01-Feb-2024 Puranjay Mohan <[email protected]>

arm64: stacktrace: Implement arch_bpf_stack_walk() for the BPF JIT

This will be used by bpf_throw() to unwind till the program marked as
exception boundary and run the callback with the stack of the main
program.

This is required for supporting BPF exceptions on ARM64.
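
The hook implemented here is the weak arch helper consumed by the BPF
core; its prototype is roughly:

  /* Walk the current stack, reporting ip/sp/bp for each frame to consume_fn */
  void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp),
                           void *cookie);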

Signed-off-by: Puranjay Mohan <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>


Revision tags: v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3
# 1aba06e7 24-Nov-2023 Mark Rutland <[email protected]>

arm64: stacktrace: factor out kunwind_stack_walk()

Currently arm64 uses the generic arch_stack_walk() interface for all
stack walking code. This only passes a PC value and cookie to the unwind
callback, whereas we'd like to pass some additional information in some
cases. For example, the BPF exception unwinder wants the FP, for
reliable stacktrace we'll want to perform additional checks on other
portions of unwind state, and we'd like to expand the information
printed by dump_backtrace() to include provenance and reliability
information.

As preparation for all of the above, this patch factors the core unwind
logic out of arch_stack_walk() and into a new kunwind_stack_walk()
function which provides all of the unwind state to a callback function.
The existing arch_stack_walk() interface is implemented atop this.

The kunwind_stack_walk() function is intended to be a private
implementation detail of unwinders in stacktrace.c, and not something to
be exported generally to kernel code. It is __always_inline'd into its
caller so that neither it nor its caller appears in stacktraces (which is
the existing/required behavior for arch_stack_walk() and friends) and so
that the compiler can optimize away some of the indirection.
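
A rough sketch of the resulting shape (details approximate):

  typedef bool (*kunwind_consume_fn)(const struct kunwind_state *state,
                                     void *cookie);

  static __always_inline void
  kunwind_stack_walk(kunwind_consume_fn consume_state, void *cookie,
                     struct task_struct *task, struct pt_regs *regs);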

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Puranjay Mohan <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


# 1beef60e 24-Nov-2023 Mark Rutland <[email protected]>

arm64: stacktrace: factor out kernel unwind state

On arm64 we share some unwinding code between the regular kernel
unwinder and the KVM hyp unwinder. Some of this common code only matters
to the regular unwinder, e.g. the `kr_cur` and `task` fields in the
common struct unwind_state.

We're likely to add more state which only matters for regular kernel
unwinding (or only for hyp unwinding). In preparation for such changes,
this patch factors out the kernel-specific state into a new struct
kunwind_state, and updates the kernel unwind code accordingly.
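
Schematically (the exact set of fields depends on configuration):

  struct kunwind_state {
          struct unwind_state common;     /* shared with the KVM hyp unwinder */
          struct task_struct *task;       /* kernel-only */
  #ifdef CONFIG_KRETPROBES
          struct llist_node *kr_cur;      /* kernel-only kretprobe cursor */
  #endif
  };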

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Puranjay Mohan <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Puranjay Mohan <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


Revision tags: v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7
# ca708599 12-Apr-2023 Mark Rutland <[email protected]>

arm64: use XPACLRI to strip PAC

Currently we strip the PAC from pointers using C code, which requires
generating bitmasks, and conditionally clearing/setting bits depending
on bit 55. We can do better by using XPACLRI directly.

When the logic was originally written to strip PACs from user pointers,
contemporary toolchains used for the kernel had assemblers which were
unaware of the PAC instructions. As stripping the PAC from userspace
pointers required unconditional clearing of a fixed set of bits (which
could be performed with a single instruction), it was simpler to
implement the masking in C than it was to make use of XPACI or XPACLRI.

When support for in-kernel pointer authentication was added, the
stripping logic was extended to cover TTBR1 pointers, requiring several
instructions to handle whether to clear/set bits dependent on bit 55 of
the pointer.

This patch simplifies the stripping of PACs by using XPACLRI directly,
as contemporary toolchains do within __builtin_return_address(). This
saves a number of instructions, especially where
__builtin_return_address() does not implicitly strip the PAC but is
heavily used (e.g. with tracepoints). As the kernel might be compiled
with an assembler without knowledge of XPACLRI, it is assembled using
the 'HINT #7' alias, which results in an identical opcode.
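
XPACLRI operates on the LR (x30), and its HINT #7 encoding executes as a
NOP on cores without pointer authentication, so the helper looks roughly
like:

  #define xpaclri(ptr) \
  ({ \
          register unsigned long __xpaclri_ptr asm("x30") = (ptr); \
          asm("hint #7" : "+r" (__xpaclri_ptr)); /* HINT #7 is XPACLRI */ \
          __xpaclri_ptr; \
  })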

At the same time, I've split ptrauth_strip_insn_pac() into
ptrauth_strip_user_insn_pac() and ptrauth_strip_kernel_insn_pac()
helpers so that we can avoid unnecessary PAC stripping when pointer
authentication is not in use in userspace or kernel respectively.

The underlying xpaclri() macro uses inline assembly which clobbers x30.
The clobber causes the compiler to save/restore the original x30 value
in a frame record (protected with PACIASP and AUTIASP when in-kernel
authentication is enabled), so this does not provide a gadget to alter
the return address. Similarly this does not adversely affect unwinding
due to the presence of the frame record.

The ptrauth_user_pac_mask() and ptrauth_kernel_pac_mask() are exported
from the kernel in ptrace and core dumps, so these are retained. A
subsequent patch will move them out of <asm/compiler.h>.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Amit Daniel Kachhap <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: James Morse <[email protected]>
Cc: Kristina Martsenko <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


# b5ecc19e 11-Apr-2023 Mark Rutland <[email protected]>

arm64: stacktrace: always inline core stacktrace functions

The arm64 stacktrace code can be used in kprobe context, and so cannot
be safely probed. Some (but not all) of the unwind functions are
annotated with `NOKPROBE_SYMBOL()` to ensure this, with others marked as
`__always_inline`, relying on the top-level unwind function being marked
as `noinstr`.

This patch has stacktrace.c consistently mark the internal stacktrace
functions as `__always_inline`, removing the need for NOKPROBE_SYMBOL()
as the top-level unwind function (arch_stack_walk()) is marked as
`noinstr`. This is more consistent and is a simpler pattern to follow
for future additions to stacktrace.c.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


# ead6122c 11-Apr-2023 Mark Rutland <[email protected]>

arm64: stacktrace: move dump functions to end of file

For historical reasons, the backtrace dumping functions are placed in
the middle of stacktrace.c, despite using functions defined later. For
clarity, and to make subsequent refactoring easier, move the dumping
functions to the end of stacktrace.c

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


# 9e09d445 11-Apr-2023 Mark Rutland <[email protected]>

arm64: stacktrace: recover return address for first entry

The function which calls the top-level backtracing function may have
been instrumented with ftrace and/or kprobes, and hence the first return
address may have been rewritten.

Factor out the existing fgraph / kretprobes address recovery, and use
this for the first address. As the comment for the fgraph case isn't all
that helpful, I've also dropped that.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Madhavan T. Venkataraman <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>


Revision tags: v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1
# 7ea55715 09-Dec-2022 Ard Biesheuvel <[email protected]>

arm64: efi: Account for the EFI runtime stack in stack unwinder

The EFI runtime services run from a dedicated stack now, and so the
stack unwinder needs to be informed about this.

Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>


Revision tags: v6.1-rc8
# 0fbcd8ab 02-Dec-2022 Masami Hiramatsu (Google) <[email protected]>

arm64: Prohibit instrumentation on arch_stack_walk()

Mark arch_stack_walk() as noinstr instead of notrace, and mark the inline
functions called from arch_stack_walk() as __always_inline, so that users
do not put any instrumentation on them, because this function can be used
from return_address(), which is used by lockdep.

Without this, if the kernel built with CONFIG_LOCKDEP=y, just probing
arch_stack_walk() via <tracefs>/kprobe_events will crash the kernel on
arm64.

# echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
# echo 1 > ${TRACEFS}/events/kprobes/enable
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
PREEMPT SMP
Modules linked in:
CPU: 0 PID: 17 Comm: migration/0 Tainted: G N 6.1.0-rc5+ #6
Hardware name: linux,dummy-virt (DT)
Stopper: 0x0 <- 0x0
pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : kprobe_breakpoint_handler+0x178/0x17c
lr : kprobe_breakpoint_handler+0x178/0x17c
sp : ffff8000080d3090
x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
Call trace:
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!

Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
Cc: [email protected]
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/166994751368.439920.3236636557520824664.stgit@devnote3
Signed-off-by: Will Deacon <[email protected]>


Revision tags: v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4
# 4b5e694e 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: track hyp stacks in unwinder's address space

Currently unwind_next_frame_record() has an optional callback to convert
the address space of the FP. This is necessary for the NVHE unwinder,
which tracks the stacks in the hyp VA space, but accesses the frame
records in the kernel VA space.

This is a bit unfortunate since it clutters unwind_next_frame_record(),
which will get in the way of future rework.

Instead, this patch changes the NVHE unwinder to track the stacks in the
kernel's VA space and translate the FP prior to calling
unwind_next_frame_record(). This removes the need for the translate_fp()
callback, as all unwinders consistently track stacks in the native
address space of the unwinder.

At the same time, this patch consolidates the generation of the stack
addresses behind the stackinfo_get_*() helpers.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# 8df13730 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: track all stack boundaries explicitly

Currently we call an on_accessible_stack() callback for each step of the
unwinder, requiring redundant work to be performed in the core of the
unwind loop (e.g. disabling preemption around accesses to per-cpu
variables containing stack boundaries). To prevent unwind loops which go
through a stack multiple times, we have to track the set of unwound
stacks, requiring a stack_type enum which needs to cater for all the
stacks of all possible callees. To prevent loops within a stack, we must
track the prior FP values.

This patch reworks the unwinder to minimize the work in the core of the
unwinder, and to remove the need for the stack_type enum. The set of
accessible stacks (and their boundaries) are determined at the start of
the unwind, and the current stack is tracked during the unwind, with
completed stacks removed from the set of accessible stacks. This makes
the boundary checks more accurate (e.g. detecting overlapped frame
records), and removes the need for separate tracking of the prior FP and
visited stacks.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# d1f684e4 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: rework stack boundary discovery

In subsequent patches we'll want to acquire the stack boundaries
ahead-of-time, and we'll need to be able to acquire the relevant
stack_info regardless of whether we have an object that happens to be on
the stack.

This patch replaces the on_XXX_stack() helpers with stackinfo_get_XXX()
helpers, with the caller being responsible for checking whether an
object is on a relevant stack. For the moment this is moved into the
on_accessible_stack() functions, making these slightly larger;
subsequent patches will remove the on_accessible_stack() functions and
simplify the logic.
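
For example, the IRQ stack helper ends up looking roughly like this
(illustrative; the per-cpu variable name matches the existing
irq_stack_ptr):

  static inline struct stack_info stackinfo_get_irq(void)
  {
          unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);

          return (struct stack_info) {
                  .low = low,
                  .high = low + IRQ_STACK_SIZE,
          };
  }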

The on_irq_stack() and on_task_stack() helpers are kept as these are
used by IRQ entry sequences and stackleak respectively. As they're only
used as predicates, the stack_info pointer parameter is removed in both
cases.

As the on_accessible_stack() functions are always passed a non-NULL info
pointer, these now update info unconditionally. When updating the type
to STACK_TYPE_UNKNOWN, the low/high bounds are also modified, but as
these will not be consumed this should have no adverse effect.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# 75758d51 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: move SDEI stack helpers to stacktrace code

For clarity and ease of maintenance, it would be helpful for all the
stack helpers to be in the same place.

Move the SDEI stack helpers into the stacktrace code where all the other
stack helpers live.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: James Morse <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# b532ab5f 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()

The unwind_next_common() function unwinds a single frame record. There
are other unwind steps (e.g. unwinding through trampolines) which are
handled in the regular kernel unwinder, and in future there may be other
common unwind helpers.

Clarify the purpose of unwind_next_common() by renaming it to
unwind_next_frame_record(). At the same time, add commentary, and delete
the redundant comment at the top of asm/stacktrace/common.h.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>


# bc8d7521 01-Sep-2022 Mark Rutland <[email protected]>

arm64: stacktrace: simplify unwind_next_common()

Currently unwind_next_common() takes a pointer to a stack_info which is
only ever used within unwind_next_common().

Make it a local variable and simplify callers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kalesh Singh <[email protected]>
Reviewed-by: Madhavan T. Venkataraman <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Cc: Fuad Tabba <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
