History log of /linux-6.15/arch/riscv/kernel/entry.S (Results 1 – 25 of 65)
Revision    Date    Author    Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6
# 5cd900b8 03-Jan-2025 Clément Léger <[email protected]>

riscv: use local label names instead of global ones in assembly

Local labels should be prefixed with '.L' or they'll be exported in the
symbol table. Additionally, this messes up the backtrace by displaying
an incorrect symbol:

...
[ 12.751810] [<ffffffff80441628>] _copy_from_user+0x28/0xc2
[ 12.752035] [<ffffffff800152ca>] handle_misaligned_load+0x1ca/0x2fc
[ 12.752310] [<ffffffff80a033e8>] do_trap_load_misaligned+0x24/0xee
[ 12.752596] [<ffffffff80a0dcae>] _new_vmalloc_restore_context_a0+0xc2/0xce

After:
...
[ 10.243916] [<ffffffff804415e4>] _copy_from_user+0x28/0xc2
[ 10.244026] [<ffffffff800152ca>] handle_misaligned_load+0x1ca/0x2fc
[ 10.244150] [<ffffffff80a033a0>] do_trap_load_misaligned+0x24/0xee
[ 10.244268] [<ffffffff80a0dc66>] handle_exception+0x146/0x152

Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Alexandre Ghiti <[email protected]>
Fixes: 503638e0babf3 ("riscv: Stop emitting preventive sfence.vma for new vmalloc mappings")
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



Revision tags: v6.13-rc5, v6.13-rc4, v6.13-rc3
# 51356ce6 09-Dec-2024 Clément Léger <[email protected]>

riscv: stacktrace: fix backtracing through exceptions

Prior to commit 5d5fc33ce58e ("riscv: Improve exception and system call
latency"), backtrace through exception worked since ra was filled with
ret_from_exception symbol address and the stacktrace code checked 'pc' to
be equal to that symbol. Now that handle_exception uses regular 'call'
instructions, this isn't working anymore and backtrace stops at
handle_exception(). Since there are multiple call site to C code in the
exception handling path, rather than checking multiple potential return
addresses, add a new symbol at the end of exception handling and check pc
to be in that range.

Fixes: 5d5fc33ce58e ("riscv: Improve exception and system call latency")
Signed-off-by: Clément Léger <[email protected]>
Tested-by: Alexandre Ghiti <[email protected]>
Reviewed-by: Alexandre Ghiti <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
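
A minimal C sketch of the range check described above, assuming the patch adds an end-of-range marker symbol; the symbol and helper names here are illustrative, not necessarily the ones used by the patch:

  #include <linux/types.h>

  /* illustrative markers bracketing the exception return path */
  extern char ret_from_exception[], ret_from_exception_end[];

  static bool pc_is_exception_return(unsigned long pc)
  {
          return pc >= (unsigned long)ret_from_exception &&
                 pc <  (unsigned long)ret_from_exception_end;
  }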



Revision tags: v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1
# 8f1534e7 20-Jul-2024 Jisheng Zhang <[email protected]>

riscv: avoid Imbalance in RAS

Inspired by [1], remove the code that modifies ra, to avoid imbalancing
the RAS (return address stack), which may lead to incorrect predictions
on return.

Link: https://lore.kernel.org/linux-riscv/[email protected]/ [1]
Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Cyril Bur <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



# 503638e0 17-Jul-2024 Alexandre Ghiti <[email protected]>

riscv: Stop emitting preventive sfence.vma for new vmalloc mappings

In 6.5, we removed the vmalloc fault path because that can't work (see
[1] [2]). Then in order to make sure that new page table entries were
seen by the page table walker, we had to preventively emit a sfence.vma
on all harts [3], but this solution is very costly since it relies on IPIs.

And even there, we could end up in a loop of vmalloc faults if a vmalloc
allocation is done in the IPI path (for example if it is traced, see
[4]), which could result in a kernel stack overflow.

Those preventive sfence.vma needed to be emitted because:

- if the uarch caches invalid entries, the new mapping may not be
observed by the page table walker and an invalidation may be needed.
- if the uarch does not cache invalid entries, a reordered access
could "miss" the new mapping and traps: in that case, we would actually
only need to retry the access, no sfence.vma is required.

So this patch removes those preventive sfence.vma and actually handles
the possible (and unlikely) exceptions. And since the kernel stack
mappings lie in the vmalloc area, this handling must be done very early
when the trap is taken, at the very beginning of handle_exception: this
also rules out the vmalloc allocations in the fault path.

Link: https://lore.kernel.org/linux-riscv/[email protected]/ [1]
Link: https://lore.kernel.org/linux-riscv/[email protected] [2]
Link: https://lore.kernel.org/linux-riscv/[email protected]/ [3]
Link: https://lore.kernel.org/lkml/[email protected]/ [4]
Signed-off-by: Alexandre Ghiti <[email protected]>
Reviewed-by: Yunhui Cui <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
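
A conceptual C sketch of the handling described above; the real check runs in assembly at the very top of handle_exception, and new_vmalloc_mapping_pending() is a hypothetical stand-in for the per-CPU "new vmalloc mapping" bookkeeping:

  #include <linux/mm.h>
  #include <asm/tlbflush.h>

  bool new_vmalloc_mapping_pending(void);       /* hypothetical helper */

  static bool fixup_vmalloc_fault_early(unsigned long badaddr)
  {
          if (badaddr < VMALLOC_START || badaddr >= VMALLOC_END)
                  return false;                 /* not a vmalloc access */
          if (!new_vmalloc_mapping_pending())
                  return false;                 /* a real fault, handle it normally */
          local_flush_tlb_all();                /* drop stale cached entries locally */
          return true;                          /* retry the faulting access */
  }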



Revision tags: v6.10, v6.10-rc7, v6.10-rc6
# b5db73fb 23-Jun-2024 Jisheng Zhang <[email protected]>

riscv: enable HAVE_ARCH_STACKLEAK

Add support for the stackleak feature. Whenever the kernel returns to user
space the kernel stack is filled with a poison value.

At the same time, disable the plugin for the EFI stub code, because the
EFI stub is out of scope for this protection.

Tested on qemu and milkv duo:
/ # echo STACKLEAK_ERASING > /sys/kernel/debug/provoke-crash/DIRECT
[ 38.675575] lkdtm: Performing direct entry STACKLEAK_ERASING
[ 38.678448] lkdtm: stackleak stack usage:
[ 38.678448] high offset: 288 bytes
[ 38.678448] current: 496 bytes
[ 38.678448] lowest: 1328 bytes
[ 38.678448] tracked: 1328 bytes
[ 38.678448] untracked: 448 bytes
[ 38.678448] poisoned: 14312 bytes
[ 38.678448] low offset: 8 bytes
[ 38.689887] lkdtm: OK: the rest of the thread stack is properly erased

Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Charlie Jenkins <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
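
A conceptual C sketch of what the erasing amounts to; the real implementation lives in kernel/stackleak.c and is driven from the architecture's return-to-user path:

  #include <linux/stackleak.h>

  /* overwrite an unused stack range with the poison value */
  static void erase_stack_range(unsigned long *low, unsigned long *high)
  {
          while (low < high)
                  *low++ = STACKLEAK_POISON;
  }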



Revision tags: v6.10-rc5, v6.10-rc4, v6.10-rc3
# 5d5fc33c 07-Jun-2024 Anton Blanchard <[email protected]>

riscv: Improve exception and system call latency

Many CPUs implement return address branch prediction as a stack. The
RISCV architecture refers to this as a return address stack (RAS). If
this gets corrupted then the CPU will mispredict at least one but
potentially many function returns.

There are two issues with the current RISCV exception code:

- We are using the alternate link register (x5/t0) for the indirect branch
which makes the hardware think this is a function return. This will
corrupt the RAS.

- We modify the return address of handle_exception to point to
ret_from_exception. This will also corrupt the RAS.

Testing the null system call latency before and after the patch:

Visionfive2 (StarFive JH7110 / U74)
baseline: 189.87 ns
patched: 176.76 ns

Lichee pi 4a (T-Head TH1520 / C910)
baseline: 666.58 ns
patched: 636.90 ns

Just over 7% on the U74 and just over 4% on the C910.

Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Cyril Bur <[email protected]>
Tested-by: Jisheng Zhang <[email protected]>
Reviewed-by: Jisheng Zhang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



Revision tags: v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5
# 5014396a 04-Oct-2023 Clément Léger <[email protected]>

riscv: blacklist assembly symbols for kprobe

Adding kprobes on some assembly functions (mainly exception handling)
will result in crashes (either recursive trap or panic). To avoid such
errors, add an ASM_NOKPROBE() macro which allows adding specific symbols
to the __kprobe_blacklist section, and use it to blacklist the following
symbols, which proved to be problematic:
- handle_exception()
- ret_from_exception()
- handle_kernel_stack_overflow()

Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Charlie Jenkins <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
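
For comparison, a hedged illustration of the C-side counterpart: NOKPROBE_SYMBOL() puts a C function on the same blacklist, while the ASM_NOKPROBE() macro above does the equivalent for symbols defined in assembly:

  #include <linux/kprobes.h>

  static void early_trap_helper(void)
  {
          /* runs too early in the trap path to be safely probed */
  }
  NOKPROBE_SYMBOL(early_trap_helper);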



# 2080ff94 15-Jan-2024 Andy Chiu <[email protected]>

riscv: vector: allow kernel-mode Vector with preemption

Add kernel_vstate to keep track of kernel-mode Vector registers when a
trap-introduced context switch happens. Also, provide riscv_v_flags to
let the context save/restore routines track context status. Context
tracking happens whenever the core starts its in-kernel Vector execution.
An active (dirty) kernel task's V context will be saved to memory whenever
a trap-introduced context switch happens, or when a softirq that happens
to nest on top of it uses Vector. Context restoring happens when execution
transfers back to the original kernel context where it first enabled
preempt_v.

Also, provide a config CONFIG_RISCV_ISA_V_PREEMPTIVE to give users an
option to disable preemptible kernel-mode Vector at build time. Users
with constrained memory may want to disable this config, as preemptible
kernel-mode Vector needs extra space to track each thread's kernel-mode
V context. Users might also want to disable it if all kernel-mode Vector
code is time-sensitive and cannot tolerate context switch overhead.

Signed-off-by: Andy Chiu <[email protected]>
Tested-by: Björn Töpel <[email protected]>
Tested-by: Lad Prabhakar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
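
A hedged usage sketch of the kernel-mode Vector API this series builds on; kernel_vector_begin()/kernel_vector_end() are the names assumed here and the body is illustrative:

  #include <linux/types.h>
  #include <asm/vector.h>

  static void vector_copy_demo(void *dst, const void *src, size_t n)
  {
          kernel_vector_begin();  /* V context is now tracked/saved as described above */
          /* ... vector-accelerated copy of n bytes from src to dst ... */
          kernel_vector_end();
  }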



# 4cc0d8a3 24-Oct-2023 Clément Léger <[email protected]>

riscv: kernel: Use correct SYM_DATA_*() macro for data

Some data were incorrectly annotated with SYM_FUNC_*() instead of
SYM_DATA_*() ones. Use the correct ones.

Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



# b18f7296 24-Oct-2023 Clément Léger <[email protected]>

riscv: use ".L" local labels in assembly when applicable

For the sake of coherency, use local labels in assembly when
applicable. This also avoid kprobes being confused when applying a
kprobe since

riscv: use ".L" local labels in assembly when applicable

For the sake of coherency, use local labels in assembly when
applicable. This also avoids confusing kprobes when applying a probe,
since the size of a function is computed by checking where the next
visible symbol is located. A stray global label can make the computed
function size far shorter than expected and thus cause kprobes to fail
to apply at the specified offset.

Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
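
A small C/inline-assembly illustration of the difference (labels only, not code from the patch): the plain label lands in the symbol table and can make the enclosing function look shorter than it is, while the ".L"-prefixed label stays assembler-local:

  void label_visibility_demo(void)
  {
          asm volatile("stray_global_label:\n"        /* emitted into the symbol table */
                       "        nop\n"
                       ".Lassembler_local_label:\n"   /* local: never emitted */
                       "        nop\n");
  }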



Revision tags: v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5
# 87615e95 21-Aug-2023 Nam Cao <[email protected]>

riscv: put interrupt entries into .irqentry.text

The interrupt entries are expected to be in the .irqentry.text section.
For example, for kprobes to work properly, exception code cannot be
probed; this is ensured by blacklisting addresses in the .irqentry.text
section.

Fixes: 7db91e57a0ac ("RISC-V: Task implementation")
Signed-off-by: Nam Cao <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Cc: [email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
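
On the C side, the same placement is achieved with the __irq_entry annotation; a minimal sketch with an illustrative handler name:

  #include <linux/interrupt.h>
  #include <linux/ptrace.h>

  /* __irq_entry expands to __section(".irqentry.text") */
  static void __irq_entry demo_irq_dispatch(struct pt_regs *regs)
  {
          /* ... dispatch the interrupt ... */
  }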



# c40fef85 27-Sep-2023 Sami Tolvanen <[email protected]>

riscv: Use separate IRQ shadow call stacks

When both CONFIG_IRQ_STACKS and SCS are enabled, also use a separate
per-CPU shadow call stack.

Signed-off-by: Sami Tolvanen <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



# d1584d79 27-Sep-2023 Sami Tolvanen <[email protected]>

riscv: Implement Shadow Call Stack

Implement CONFIG_SHADOW_CALL_STACK for RISC-V. When enabled, the
compiler injects instructions to all non-leaf C functions to
store the return address to the shadow stack and unconditionally
load it again before returning, which makes it harder to corrupt
the return address through a stack overflow, for example.

The active shadow call stack pointer is stored in the gp
register, which makes SCS incompatible with gp relaxation. Use
--no-relax-gp to ensure gp relaxation is disabled and disable
global pointer loading. Add SCS pointers to struct thread_info,
implement SCS initialization, and handle task switching.

Signed-off-by: Sami Tolvanen <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



# e609b4f4 27-Sep-2023 Sami Tolvanen <[email protected]>

riscv: Move global pointer loading to a macro

In Clang 17, -fsanitize=shadow-call-stack uses the newly declared
platform register gp for storing shadow call stack pointers. As
this is obviously incompatible with gp relaxation, in preparation
for CONFIG_SHADOW_CALL_STACK support, move global pointer loading
to a single macro, which we can cleanly disable when SCS is used
instead.

Link: https://reviews.llvm.org/rGaa1d2693c256
Link: https://github.com/riscv-non-isa/riscv-elf-psabi-doc/commit/a484e843e6eeb51f0cb7b8819e50da6d2444d769
Signed-off-by: Sami Tolvanen <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



# 82982fdd 27-Sep-2023 Sami Tolvanen <[email protected]>

riscv: Deduplicate IRQ stack switching

With CONFIG_IRQ_STACKS, we switch to a separate per-CPU IRQ stack
before calling handle_riscv_irq or __do_softirq. We currently
have duplicate inline assembly snippets for stack switching in
both code paths. Now that we can access per-CPU variables in
assembly, implement call_on_irq_stack in assembly, and use that
instead of redundant inline assembly.

Signed-off-by: Sami Tolvanen <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Reviewed-by: Guo Ren <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
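
A hedged sketch of how the resulting helper is used; the prototype below reflects what the series appears to add and should be treated as an assumption:

  #include <linux/linkage.h>
  #include <linux/ptrace.h>

  asmlinkage void call_on_irq_stack(struct pt_regs *regs,
                                    void (*func)(struct pt_regs *regs));
  void handle_riscv_irq(struct pt_regs *regs);

  static void dispatch_on_irq_stack(struct pt_regs *regs)
  {
          /* run the handler on the separate per-CPU IRQ stack */
          call_on_irq_stack(regs, handle_riscv_irq);
  }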



# be97d0db 27-Sep-2023 Deepak Gupta <[email protected]>

riscv: VMAP_STACK overflow detection thread-safe

commit 31da94c25aea ("riscv: add VMAP_STACK overflow detection") added
support for CONFIG_VMAP_STACK. If overflow is detected, CPU switches to
`shadow_stack` temporarily before switching finally to per-cpu
`overflow_stack`.

If two CPUs/harts are racing and both end up overflowing the kernel stack,
one or both will end up corrupting each other's state because `shadow_stack`
is not per-cpu. This patch optimizes the overflow stack switch by directly
picking the per-cpu `overflow_stack` and gets rid of `shadow_stack`.

Following are the changes in this patch

- Defines an asm macro to obtain per-cpu symbols in a destination
register.
- In entry.S, when overflow is detected, the per-cpu overflow stack is
located using the per-cpu asm macro. Computing a per-cpu symbol requires
a temporary register; x31 is saved away into CSR_SCRATCH
(CSR_SCRATCH is zero anyway since we're in the kernel).

Please see the Links for additional relevant discussion and an
alternative solution.

Tested by `echo EXHAUST_STACK > /sys/kernel/debug/provoke-crash/DIRECT`
Kernel crash log below

Insufficient stack space to handle exception!/debug/provoke-crash/DIRECT
Task stack: [0xff20000010a98000..0xff20000010a9c000]
Overflow stack: [0xff600001f7d98370..0xff600001f7d99370]
CPU: 1 PID: 205 Comm: bash Not tainted 6.1.0-rc2-00001-g328a1f96f7b9 #34
Hardware name: riscv-virtio,qemu (DT)
epc : __memset+0x60/0xfc
ra : recursive_loop+0x48/0xc6 [lkdtm]
epc : ffffffff808de0e4 ra : ffffffff0163a752 sp : ff20000010a97e80
gp : ffffffff815c0330 tp : ff600000820ea280 t0 : ff20000010a97e88
t1 : 000000000000002e t2 : 3233206874706564 s0 : ff20000010a982b0
s1 : 0000000000000012 a0 : ff20000010a97e88 a1 : 0000000000000000
a2 : 0000000000000400 a3 : ff20000010a98288 a4 : 0000000000000000
a5 : 0000000000000000 a6 : fffffffffffe43f0 a7 : 00007fffffffffff
s2 : ff20000010a97e88 s3 : ffffffff01644680 s4 : ff20000010a9be90
s5 : ff600000842ba6c0 s6 : 00aaaaaac29e42b0 s7 : 00fffffff0aa3684
s8 : 00aaaaaac2978040 s9 : 0000000000000065 s10: 00ffffff8a7cad10
s11: 00ffffff8a76a4e0 t3 : ffffffff815dbaf4 t4 : ffffffff815dbaf4
t5 : ffffffff815dbab8 t6 : ff20000010a9bb48
status: 0000000200000120 badaddr: ff20000010a97e88 cause: 000000000000000f
Kernel panic - not syncing: Kernel stack overflow
CPU: 1 PID: 205 Comm: bash Not tainted 6.1.0-rc2-00001-g328a1f96f7b9 #34
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff80006754>] dump_backtrace+0x30/0x38
[<ffffffff808de798>] show_stack+0x40/0x4c
[<ffffffff808ea2a8>] dump_stack_lvl+0x44/0x5c
[<ffffffff808ea2d8>] dump_stack+0x18/0x20
[<ffffffff808dec06>] panic+0x126/0x2fe
[<ffffffff800065ea>] walk_stackframe+0x0/0xf0
[<ffffffff0163a752>] recursive_loop+0x48/0xc6 [lkdtm]
SMP: stopping secondary CPUs
---[ end Kernel panic - not syncing: Kernel stack overflow ]---

Cc: Guo Ren <[email protected]>
Cc: Jisheng Zhang <[email protected]>
Link: https://lore.kernel.org/linux-riscv/Y347B0x4VUNOd6V7@xhacker/T/#t
Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Deepak Gupta <[email protected]>
Co-developed-by: Sami Tolvanen <[email protected]>
Signed-off-by: Sami Tolvanen <[email protected]>
Acked-by: Guo Ren <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
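
A conceptual C rendering of what the new asm macro computes, expressed with the generic per-CPU API; the declaration mirrors the per-CPU overflow_stack but the details are illustrative:

  #include <linux/percpu.h>
  #include <asm/thread_info.h>

  DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE / sizeof(unsigned long)],
                  overflow_stack);

  static unsigned long overflow_stack_top(int cpu)
  {
          /* symbol address plus this CPU's per-CPU offset, plus the stack size */
          return (unsigned long)per_cpu_ptr(&overflow_stack, cpu) + OVERFLOW_STACK_SIZE;
  }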



Revision tags: v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1
# 4681daca 23-Apr-2023 Fangrui Song <[email protected]>

riscv: replace deprecated scall with ecall

scall is a deprecated alias for ecall. ecall is used in several places,
so there is no assembler compatibility concern.

Signed-off-by: Fangrui Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
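
A minimal user-space sketch of the mnemonic in use: issuing a Linux system call on riscv64 with ecall (syscall number 64 is __NR_write on riscv64; register bindings follow the syscall ABI):

  static long riscv_write(int fd, const void *buf, unsigned long len)
  {
          register long a7 asm("a7") = 64;              /* __NR_write on riscv64 */
          register long a0 asm("a0") = fd;
          register const void *a1 asm("a1") = buf;
          register unsigned long a2 asm("a2") = len;

          asm volatile("ecall"                          /* formerly also spelled "scall" */
                       : "+r"(a0)
                       : "r"(a1), "r"(a2), "r"(a7)
                       : "memory");
          return a0;
  }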



# 74abe5a3 05-Jun-2023 Guo Ren <[email protected]>

riscv: Disable Vector Instructions for kernel itself

Disable vector instruction execution for kernel mode at its entry points.
This helps find illegal uses of vector in kernel space, similar to what is
already done for the FPU.

Signed-off-by: Guo Ren <[email protected]>
Co-developed-by: Vincent Chen <[email protected]>
Signed-off-by: Vincent Chen <[email protected]>
Co-developed-by: Han-Kuan Chen <[email protected]>
Signed-off-by: Han-Kuan Chen <[email protected]>
Co-developed-by: Greentime Hu <[email protected]>
Signed-off-by: Greentime Hu <[email protected]>
Signed-off-by: Vineet Gupta <[email protected]>
Signed-off-by: Andy Chiu <[email protected]>
Reviewed-by: Conor Dooley <[email protected]>
Reviewed-by: Heiko Stuebner <[email protected]>
Tested-by: Heiko Stuebner <[email protected]>
Reviewed-by: Palmer Dabbelt <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
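
A hedged C-level sketch of the effect: with the vector-status field of sstatus cleared, any vector instruction executed in kernel mode traps as an illegal instruction (the patch clears the bits in the assembly entry code; csr_clear() and SR_VS below are the CSR helper and mask as I recall them):

  #include <asm/csr.h>

  static inline void disable_kernel_vector(void)
  {
          csr_clear(CSR_SSTATUS, SR_VS);  /* SR_VS: mask of the sstatus.VS field */
  }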



Revision tags: v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1
# 45b32b94 22-Feb-2023 Jisheng Zhang <[email protected]>

riscv: entry: Consolidate general regs saving/restoring

Consolidate the saving/restoring of GPs (except zero, ra, sp, gp,
tp and t0) into save_from_x6_to_x31/restore_from_x6_to_x31 macros.

No functional change intended.

Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Guo Ren <[email protected]>
Reviewed-by: Björn Töpel <[email protected]>
Tested-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>



# ab9164da 22-Feb-2023 Jisheng Zhang <[email protected]>

riscv: entry: Consolidate ret_from_kernel_thread into ret_from_fork

ret_from_kernel_thread() behaves similarly to ret_from_fork(); the only
difference is whether fn(arg) is called or not, which can be decided by
testing whether fn is NULL, i.e. whether s0 is 0. Many architectures have
done the same thing, and it makes entry.S cleaner.

Signed-off-by: Jisheng Zhang <[email protected]>
Reviewed-by: Björn Töpel <[email protected]>
Reviewed-by: Guo Ren <[email protected]>
Tested-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
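
A conceptual C rendering of the merged path (the assembly keeps fn in s0 and its argument in s1; syscall_exit_to_user_mode() is from the generic entry code, the rest is illustrative):

  #include <linux/entry-common.h>

  static void ret_from_fork_sketch(struct pt_regs *regs,
                                   int (*fn)(void *), void *fn_arg)
  {
          if (fn)                            /* s0 != 0: this is a kernel thread */
                  fn(fn_arg);
          syscall_exit_to_user_mode(regs);   /* both paths leave through here */
  }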



# f0bddf50 22-Feb-2023 Guo Ren <[email protected]>

riscv: entry: Convert to generic entry

This patch converts riscv to use the generic entry infrastructure from
kernel/entry/*. The generic entry code makes maintainers' work easier and
the code more elegant. Here are the changes:

- Clearer entry.S with handle_exception and ret_from_exception
- Get rid of the complex custom signal implementation
- Move the syscall procedure from assembly to C, which is much more
readable.
- Connect ret_from_fork & ret_from_kernel_thread to generic entry.
- Wrap with irqentry_enter/exit and syscall_enter/exit_from_user_mode
- Use the standard preemption code instead of a custom implementation

Suggested-by: Huacai Chen <[email protected]>
Reviewed-by: Björn Töpel <[email protected]>
Tested-by: Yipeng Zou <[email protected]>
Tested-by: Jisheng Zhang <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Cc: Ben Hutchings <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
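
A minimal sketch of the generic-entry pattern this conversion adopts; irqentry_enter()/irqentry_exit() come from kernel/entry, while the wrapper and handler names are illustrative:

  #include <linux/entry-common.h>
  #include <linux/linkage.h>

  void handle_page_fault(struct pt_regs *regs);   /* illustrative arch handler */

  asmlinkage void do_page_fault_wrapped(struct pt_regs *regs)
  {
          irqentry_state_t state = irqentry_enter(regs);

          handle_page_fault(regs);

          irqentry_exit(regs, state);
  }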



Revision tags: v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5
# b0f4c74e 11-Nov-2022 Andrew Bresticker <[email protected]>

RISC-V: Fix unannotated hardirqs-on in return to userspace slow-path

The return to userspace path in entry.S may enable interrupts without the
corresponding lockdep annotation, producing a splat[0] when DEBUG_LOCKDEP
is enabled. Simply calling __trace_hardirqs_on() here gets a bit messy
due to the use of RA to point back to ret_from_exception, so just move
the whole slow-path loop into C. It's more readable and it lets us use
local_irq_{enable,disable}(), avoiding the need for manual annotations
altogether.

[0]:
------------[ cut here ]------------
DEBUG_LOCKS_WARN_ON(!lockdep_hardirqs_enabled())
WARNING: CPU: 2 PID: 1 at kernel/locking/lockdep.c:5512 check_flags+0x10a/0x1e0
Modules linked in:
CPU: 2 PID: 1 Comm: init Not tainted 6.1.0-rc4-00160-gb56b6e2b4f31 #53
Hardware name: riscv-virtio,qemu (DT)
epc : check_flags+0x10a/0x1e0
ra : check_flags+0x10a/0x1e0
<snip>
status: 0000000200000100 badaddr: 0000000000000000 cause: 0000000000000003
[<ffffffff808edb90>] lock_is_held_type+0x78/0x14e
[<ffffffff8003dae2>] __might_resched+0x26/0x22c
[<ffffffff8003dd24>] __might_sleep+0x3c/0x66
[<ffffffff80022c60>] get_signal+0x9e/0xa70
[<ffffffff800054a2>] do_notify_resume+0x6e/0x422
[<ffffffff80003c68>] ret_from_exception+0x0/0x10
irq event stamp: 44512
hardirqs last enabled at (44511): [<ffffffff808f901c>] _raw_spin_unlock_irqrestore+0x54/0x62
hardirqs last disabled at (44512): [<ffffffff80008200>] __trace_hardirqs_off+0xc/0x14
softirqs last enabled at (44472): [<ffffffff808f9fbe>] __do_softirq+0x3de/0x51e
softirqs last disabled at (44467): [<ffffffff80017760>] irq_exit+0xd6/0x104
---[ end trace 0000000000000000 ]---
possible reason: unannotated irqs-on.

Signed-off-by: Andrew Bresticker <[email protected]>
Fixes: 3c4697982982 ("riscv: Enable LOCKDEP_SUPPORT & fixup TRACE_IRQFLAGS_SUPPORT")
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
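
A hedged sketch of the loop after the move to C; the actual kernel function name and work-flag handling differ, this only shows why local_irq_enable()/local_irq_disable() give lockdep the annotations it expects:

  #include <linux/irqflags.h>
  #include <linux/ptrace.h>
  #include <linux/sched.h>
  #include <linux/thread_info.h>

  void handle_signal_work_sketch(struct pt_regs *regs, unsigned long flags); /* hypothetical */

  static void exit_to_user_work_sketch(struct pt_regs *regs, unsigned long flags)
  {
          do {
                  local_irq_enable();               /* properly annotated for lockdep */

                  if (flags & _TIF_NEED_RESCHED)
                          schedule();
                  else
                          handle_signal_work_sketch(regs, flags);

                  local_irq_disable();
                  flags = read_thread_flags();
          } while (flags & _TIF_WORK_MASK);         /* arch mask of pending-work flags */
  }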



# 7ecdadf7 09-Nov-2022 Guo Ren <[email protected]>

riscv: stacktrace: Make walk_stackframe cross pt_regs frame

The current walk_stackframe with FRAME_POINTER would stop unwinding at
ret_from_exception:
BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1518
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 1, name: init
CPU: 0 PID: 1 Comm: init Not tainted 5.10.113-00021-g15c15974895c-dirty #192
Call Trace:
[<ffffffe0002038c8>] walk_stackframe+0x0/0xee
[<ffffffe000aecf48>] show_stack+0x32/0x4a
[<ffffffe000af1618>] dump_stack_lvl+0x72/0x8e
[<ffffffe000af1648>] dump_stack+0x14/0x1c
[<ffffffe000239ad2>] ___might_sleep+0x12e/0x138
[<ffffffe000239aec>] __might_sleep+0x10/0x18
[<ffffffe000afe3fe>] down_read+0x22/0xa4
[<ffffffe000207588>] do_page_fault+0xb0/0x2fe
[<ffffffe000201b80>] ret_from_exception+0x0/0xc

The optimization helps walk_stackframe cross the pt_regs frame and
produce a more complete backtrace for debugging:
BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1518
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 1, name: init
CPU: 0 PID: 1 Comm: init Not tainted 5.10.113-00021-g15c15974895c-dirty #192
Call Trace:
[<ffffffe0002038c8>] walk_stackframe+0x0/0xee
[<ffffffe000aecf48>] show_stack+0x32/0x4a
[<ffffffe000af1618>] dump_stack_lvl+0x72/0x8e
[<ffffffe000af1648>] dump_stack+0x14/0x1c
[<ffffffe000239ad2>] ___might_sleep+0x12e/0x138
[<ffffffe000239aec>] __might_sleep+0x10/0x18
[<ffffffe000afe3fe>] down_read+0x22/0xa4
[<ffffffe000207588>] do_page_fault+0xb0/0x2fe
[<ffffffe000201b80>] ret_from_exception+0x0/0xc
[<ffffffe000613c06>] riscv_intc_irq+0x1a/0x72
[<ffffffe000201b80>] ret_from_exception+0x0/0xc
[<ffffffe00033f44a>] vma_link+0x54/0x160
[<ffffffe000341d7a>] mmap_region+0x2cc/0x4d0
[<ffffffe000342256>] do_mmap+0x2d8/0x3ac
[<ffffffe000326318>] vm_mmap_pgoff+0x70/0xb8
[<ffffffe00032638a>] vm_mmap+0x2a/0x36
[<ffffffe0003cfdde>] elf_map+0x72/0x84
[<ffffffe0003d05f8>] load_elf_binary+0x69a/0xec8
[<ffffffe000376240>] bprm_execve+0x246/0x53a
[<ffffffe00037786c>] kernel_execve+0xe8/0x124
[<ffffffe000aecdf2>] run_init_process+0xfa/0x10c
[<ffffffe000aece16>] try_to_run_init_process+0x12/0x3c
[<ffffffe000afa920>] kernel_init+0xb4/0xf8
[<ffffffe000201b80>] ret_from_exception+0x0/0xc

Here is the error injection test code for the above output:
drivers/irqchip/irq-riscv-intc.c:
static asmlinkage void riscv_intc_irq(struct pt_regs *regs)
{
unsigned long cause = regs->cause & ~CAUSE_IRQ_FLAG;
+ u32 tmp; __get_user(tmp, (u32 *)0);

Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
[Palmer: use SYM_CODE_*]
Signed-off-by: Palmer Dabbelt <[email protected]>
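
A hedged sketch of the unwinder change: when the walker reaches the exception return address, the saved pt_regs sit right there on the stack, so reload pc and the frame pointer from them and keep going (the field usage is illustrative):

  #include <asm/ptrace.h>

  extern char ret_from_exception[];

  static void cross_pt_regs_frame(unsigned long *pc, unsigned long *fp,
                                  unsigned long *sp)
  {
          struct pt_regs *regs;

          if (*pc != (unsigned long)ret_from_exception)
                  return;

          regs = (struct pt_regs *)*sp;   /* the exception frame pushed at this sp */
          *pc  = regs->epc;               /* where the trap was taken */
          *fp  = regs->s0;                /* interrupted frame pointer */
          *sp  = regs->sp;                /* interrupted stack pointer */
  }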



Revision tags: v6.1-rc4, v6.1-rc3
# 7e186433 30-Oct-2022 Jisheng Zhang <[email protected]>

riscv: fix race when vmap stack overflow

Currently, when a vmap stack overflow is detected, riscv first switches
to the so-called shadow stack, then uses this shadow stack to call
get_overflow_stack() to get the overflow stack. However, there's a race
if two or more harts use the same shadow stack at the same time.

To solve this race, introduce the spin_shadow_stack atomic variable,
which is atomically swapped between its own address and 0: when the
variable is set, the shadow_stack is being used; when it is cleared, the
shadow_stack isn't being used.

Fixes: 31da94c25aea ("riscv: add VMAP_STACK overflow detection")
Signed-off-by: Jisheng Zhang <[email protected]>
Suggested-by: Guo Ren <[email protected]>
Reviewed-by: Guo Ren <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
[Palmer: Add AQ to the swap, and also some comments.]
Signed-off-by: Palmer Dabbelt <[email protected]>
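
A conceptual C sketch of the protocol; the kernel takes the guard in assembly with an amoswap.aq, so the helpers below only illustrate the "swap in a non-zero marker to take the stack, store 0 to release it" idea:

  #include <linux/atomic.h>
  #include <asm/processor.h>

  unsigned long spin_shadow_stack;

  static void acquire_shadow_stack(void)
  {
          /* spin until we are the hart that swapped in the non-zero marker */
          while (xchg_acquire(&spin_shadow_stack,
                              (unsigned long)&spin_shadow_stack))
                  cpu_relax();
  }

  static void release_shadow_stack(void)
  {
          smp_store_release(&spin_shadow_stack, 0UL);
  }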



Revision tags: v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2
# 24a9c541 08-Jun-2022 Frederic Weisbecker <[email protected]>

context_tracking: Split user tracking Kconfig

Context tracking is going to be used not only to track user transitions
but also idle/IRQs/NMIs. The user tracking part will then become a
separate feature. Prepare Kconfig for that.

[ frederic: Apply Max Filippov feedback. ]

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Neeraj Upadhyay <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Nicolas Saenz Julienne <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Xiongfeng Wang <[email protected]>
Cc: Yu Liao <[email protected]>
Cc: Phil Auld <[email protected]>
Cc: Paul Gortmaker<[email protected]>
Cc: Alex Belits <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Reviewed-by: Nicolas Saenz Julienne <[email protected]>
Tested-by: Nicolas Saenz Julienne <[email protected]>


