|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6 |
|
| #
2335c9cb |
| 27-Jun-2024 |
Jinjie Ruan <[email protected]> |
ARM: 9407/1: Add support for STACKLEAK gcc plugin
Add the STACKLEAK gcc plugin to arm32 by adding the helper used by the stackleak common code: on_thread_stack(). It initializes the stack with the poison value before returning from system calls, which improves kernel security. Additionally, this disables the plugin in the EFI stub and decompressor code, which are out of scope for the protection.
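As a minimal sketch (assuming the usual THREAD_SIZE and current_stack_pointer definitions; not necessarily the exact arm32 code), the helper stackleak needs only has to answer whether the stack pointer lies within the current task's thread stack:

#include <linux/sched.h>
#include <linux/sched/task_stack.h>

/* True when the current stack pointer is inside the current task's
 * thread stack; the unsigned subtraction also rejects pointers below
 * the stack base. */
static __always_inline bool on_thread_stack(void)
{
        unsigned long delta = current_stack_pointer - (unsigned long)current->stack;

        return delta < THREAD_SIZE;
}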
Before the test on Qemu versatilepb board:
# echo STACKLEAK_ERASING > /sys/kernel/debug/provoke-crash/DIRECT
lkdtm: Performing direct entry STACKLEAK_ERASING
lkdtm: XFAIL: stackleak is not supported on this arch (HAVE_ARCH_STACKLEAK=n)
After:
# echo STACKLEAK_ERASING > /sys/kernel/debug/provoke-crash/DIRECT
lkdtm: Performing direct entry STACKLEAK_ERASING
lkdtm: stackleak stack usage:
  high offset: 80 bytes
  current:     280 bytes
  lowest:      696 bytes
  tracked:     696 bytes
  untracked:   192 bytes
  poisoned:    7220 bytes
  low offset:  4 bytes
lkdtm: OK: the rest of the thread stack is properly erased
Signed-off-by: Jinjie Ruan <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
|
Revision tags: v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6 |
|
| #
cf007647 |
| 10-Aug-2023 |
Kees Cook <[email protected]> |
ARM: ptrace: Restore syscall restart tracing
Since commit 4e57a4ddf6b0 ("ARM: 9107/1: syscall: always store thread_info->abi_syscall"), the seccomp "syscall_restart" selftest has been broken. This was caused by the restart syscall not being stored to "abi_syscall" during restart setup before branching to the "local_restart" label. Tracers would see the wrong syscall, and scno would get overwritten while returning from the TIF_WORK path. Add the missing store.
Cc: Russell King <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Lecopzer Chen <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: [email protected] Fixes: 4e57a4ddf6b0 ("ARM: 9107/1: syscall: always store thread_info->abi_syscall") Reviewed-by: Arnd Bergmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
|
|
Revision tags: v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1 |
|
| #
a9ff6961 |
| 02-Jun-2022 |
Linus Walleij <[email protected]> |
ARM: mm: Make virt_to_pfn() a static inline
Making virt_to_pfn() a static inline taking a strongly typed (const void *) makes the contract of passing a pointer of that type to the function explicit and exposes any misuse of the macro virt_to_pfn() acting polymorphic and accepting many types such as (void *), (uintptr_t) or (unsigned long) as arguments without warnings.
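For illustration only (the mainline helper may differ in detail), the typed inline roughly takes the following shape, assuming the usual ARM definitions of PAGE_OFFSET, PAGE_SHIFT and PHYS_PFN_OFFSET:

/* Translate a lowmem kernel virtual address to its page frame number;
 * callers now must pass a real pointer rather than any integer type. */
static inline unsigned long virt_to_pfn(const void *p)
{
        unsigned long kaddr = (unsigned long)p;

        return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
}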
Doing this is a bit intrusive: virt_to_pfn() requires PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and this is defined in <asm/page.h>, so this must be included *before* <asm/memory.h>.
The use of macros was obscuring the unclear inclusion order here, as the macros would eventually be resolved, but a static inline like this cannot be compiled with unresolved macros.
The naive solution to include <asm/page.h> at the top of <asm/memory.h> does not work, because <asm/memory.h> sometimes includes <asm/page.h> at the end of itself, which would create a confusing inclusion loop. So instead, take the approach to always unconditionally include <asm/page.h> at the end of <asm/memory.h>.
arch/arm uses <asm/memory.h> explicitly in a lot of places, however it turns out that if we just unconditionally include <asm/memory.h> into <asm/page.h> and switch all inclusions of <asm/memory.h> to <asm/page.h> instead, we enforce the right order and <asm/memory.h> will always have access to the definitions.
Put an inclusion guard in place making it impossible to include <asm/memory.h> explicitly.
Link: https://lore.kernel.org/linux-mm/[email protected]/ Signed-off-by: Linus Walleij <[email protected]>
|
| #
b91a69d1 |
| 29-Sep-2022 |
Arnd Bergmann <[email protected]> |
ARM: iop32x: remove the platform
This was marked as unused in 5.19 and can now be removed.
Cc: Lennert Buytenhek <[email protected]> Cc: Martin Michlmayr <[email protected]> Acked-by: Dan Williams <[email protected]> Acked-by: Wolfram Sang <[email protected]> # for I2C Signed-off-by: Arnd Bergmann <[email protected]>
|
| #
29589ca0 |
| 31-May-2022 |
Ard Biesheuvel <[email protected]> |
ARM: 9208/1: entry: add .ltorg directive to keep literals in range
LKP reports a build issue on Clang, related to a literal load of __current issued through the ldr_va macro. This turns out to be due to the fact that group relocations are disabled when CONFIG_COMPILE_TEST=y, which means that the ldr_va macro resolves to a pair of LDR instructions, the first one being a literal load issued too far from its literal pool.
Due to the introduction of a couple of new uses of this macro in commit 508074607c7b95b2 ("ARM: 9195/1: entry: avoid explicit literal loads"), the literal pools end up getting rearranged in a way that causes the literal for __current to go out of range. Let's fix this up by putting a .ltorg directive in a suitable place in the code.
Link: https://lore.kernel.org/all/[email protected]/
Fixes: 508074607c7b95b2 ("ARM: 9195/1: entry: avoid explicit literal loads") Reported-by: kernel test robot <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Tested-by: Nathan Chancellor <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
| #
24a9c541 |
| 08-Jun-2022 |
Frederic Weisbecker <[email protected]> |
context_tracking: Split user tracking Kconfig
Context tracking is going to be used not only to track user transitions but also idle/IRQs/NMIs. The user tracking part will then become a separate feature. Prepare Kconfig for that.
[ frederic: Apply Max Filippov feedback. ]
Signed-off-by: Frederic Weisbecker <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Neeraj Upadhyay <[email protected]> Cc: Uladzislau Rezki <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Nicolas Saenz Julienne <[email protected]> Cc: Marcelo Tosatti <[email protected]> Cc: Xiongfeng Wang <[email protected]> Cc: Yu Liao <[email protected]> Cc: Phil Auld <[email protected]> Cc: Paul Gortmaker<[email protected]> Cc: Alex Belits <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Nicolas Saenz Julienne <[email protected]> Tested-by: Nicolas Saenz Julienne <[email protected]>
|
|
Revision tags: v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4 |
|
| #
892c608a |
| 20-Apr-2022 |
Ard Biesheuvel <[email protected]> |
ARM: 9199/1: spectre-bhb: use local DSB and elide ISB in loop8 sequence
The loop8 mitigation for Spectre-BHB only requires a CPU-local DSB rather than a system-wide one, which is much more costly. By the same reasoning that justifies omitting the ISB after BPIALL, we can also elide the ISB here and rely on the exception return for context synchronization.
Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
| #
50807460 |
| 20-Apr-2022 |
Ard Biesheuvel <[email protected]> |
ARM: 9195/1: entry: avoid explicit literal loads
ARMv7 has MOVW/MOVT instruction pairs to load symbol addresses into registers without having to rely on literal loads that go via the D-cache. For older cores, we now support a similar arrangement, based on PC-relative group relocations.
This means we can elide most literal loads entirely from the entry path, by switching to the ldr_va macro to emit the appropriate sequence depending on the target architecture revision.
While at it, switch to the bl_r macro for invoking the right PABT/DABT helpers instead of setting the LR register explicitly, which does not play well with cores that speculate across function returns.
Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
|
Revision tags: v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4 |
|
| #
b9baf5c8 |
| 10-Feb-2022 |
Russell King (Oracle) <[email protected]> |
ARM: Spectre-BHB workaround
Work around the Spectre BHB issues for Cortex-A15, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15 to be safe, as it is affected by Spectre v2 in the same way as Cortex-A15.
Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
|
Revision tags: v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4 |
|
| #
6f5d248d |
| 30-Nov-2021 |
Arnd Bergmann <[email protected]> |
ARM: iop32x: use GENERIC_IRQ_MULTI_HANDLER
iop32x uses the entry-macro.S file for both the IRQ entry and for hooking into the arch_ret_to_user code path. This is done because the cp6 registers have to be enabled before accessing any of the interrupt controller registers but have to be disabled when running in user space.
There is also a lazy-enable logic in cp6.c, but during a hardirq, we know it has to be enabled.
Both the cp6-enable code and the code to read the IRQ status can be lifted into the normal generic_handle_arch_irq() path, but the cp6-disable code has to remain in the user return code. As nothing other than iop32x uses this hook, just open-code it there with an ifdef for the platform that can eventually be removed when iop32x has reached the end of its life.
The cp6-enable path in the IRQ entry has an extra cp_wait barrier that the trap version does not have, but it is harmless to do it in both cases to simplify the logic here at the cost of a few extra cycles for the trap.
Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Tested-by: Marc Zyngier <[email protected]> Tested-by: Vladimir Murzin <[email protected]> # ARMv7M
|
|
Revision tags: v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2 |
|
| #
50596b75 |
| 18-Sep-2021 |
Ard Biesheuvel <[email protected]> |
ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user space, we can use it to keep the 'current' pointer while running in the kernel. This removes the need to access it via thread_info, which is located at the base of the stack, but will be moved out of there in a subsequent patch.
Use the __builtin_thread_pointer() helper when available - this will help GCC understand that reloading the value within the same function is not necessary, even when using the per-task stack protector (which also generates accesses via the TLS register). For example, the generated code below loads TPIDRURO only once, and uses it to access both the stack canary and the preempt_count fields.
<do_one_initcall>:
       e92d 41f0       stmdb   sp!, {r4, r5, r6, r7, r8, lr}
       ee1d 4f70       mrc     15, 0, r4, cr13, cr0, {3}
       4606            mov     r6, r0
       b094            sub     sp, #80         ; 0x50
       f8d4 34e8       ldr.w   r3, [r4, #1256] ; 0x4e8  <- stack canary
       9313            str     r3, [sp, #76]   ; 0x4c
       f8d4 8004       ldr.w   r8, [r4, #4]             <- preempt count
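A hedged sketch of reading 'current' back out of TPIDRURO in C (the real helper has more configuration-dependent variants than this):

static __always_inline struct task_struct *get_current(void)
{
        /* The builtin lets the compiler schedule and merge the TPIDRURO
         * read; older compilers would instead need an explicit
         *   asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));
         */
        return (struct task_struct *)__builtin_thread_pointer();
}

Because the compiler sees the read, repeated uses of current within one function can be folded into the single mrc visible in the listing above.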
Co-developed-by: Keith Packard <[email protected]> Signed-off-by: Keith Packard <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]>
|
|
Revision tags: v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6 |
|
| #
8ac6f5d7 |
| 11-Aug-2021 |
Arnd Bergmann <[email protected]> |
ARM: 9113/1: uaccess: remove set_fs() implementation
There are no remaining callers of set_fs(), so just remove it along with all associated code that operates on thread_info->addr_limit.
There are still further optimizations that can be done:
- In get_user(), the address check could be moved entirely into the out of line code, rather than passing a constant as an argument,
- I assume the DACR handling can be simplified as we now only change it during user access when CONFIG_CPU_SW_DOMAIN_PAN is set, but not during set_fs().
Acked-by: Christoph Hellwig <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
| #
4e57a4dd |
| 11-Aug-2021 |
Arnd Bergmann <[email protected]> |
ARM: 9107/1: syscall: always store thread_info->abi_syscall
The system call number is used in a couple of places, in particular ptrace, seccomp and /proc/<pid>/syscall.
The last one apparently never worked reliably on ARM for tasks that are not currently getting traced.
Storing the syscall number in the normal entry path makes it work, as well as allowing us to see if the current system call is for OABI compat mode, which is the next thing I want to hook into.
Since the thread_info->syscall field is not just the number any more, it is now renamed to abi_syscall. In kernels that enable both OABI and EABI, the upper bits of this field encode 0x900000 (__NR_OABI_SYSCALL_BASE) for OABI tasks, while normal EABI tasks do not set the upper bits. This makes it possible to implement the in_oabi_syscall() helper later.
All other users of thread_info->syscall go through the syscall_get_nr() helper, which in turn filters out the ABI bits.
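Purely as an illustration of that filtering (the real accessor lives in the arch syscall headers and may differ), stripping the OABI marker bits could look like this; the helper name is made up for the sketch:

/* __NR_OABI_SYSCALL_BASE is the 0x900000 base mentioned above. */
static inline int example_syscall_get_nr(struct thread_info *ti)
{
        if (IS_ENABLED(CONFIG_OABI_COMPAT))
                return ti->abi_syscall & ~__NR_OABI_SYSCALL_BASE;
        return ti->abi_syscall;
}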
Note that the ABI information is lost with PTRACE_SET_SYSCALL, so one cannot set the internal number to a particular version, but this was already the case. We could change it to let gdb encode the ABI type along with the syscall in a CONFIG_OABI_COMPAT-enabled kernel, but that itself would be a (backwards-compatible) ABI change, so I don't do it here.
Acked-by: Christoph Hellwig <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
|
Revision tags: v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2 |
|
| #
0047eb9f |
| 01-Mar-2021 |
Masahiro Yamada <[email protected]> |
ARM: 9068/1: syscalls: switch to generic syscalltbl.sh
Many architectures duplicate similar shell scripts.
This commit converts ARM to use scripts/syscalltbl.sh.
Signed-off-by: Masahiro Yamada <[email protected]> Acked-by: Linus Walleij <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9 |
|
| #
32d59773 |
| 09-Oct-2020 |
Jens Axboe <[email protected]> |
arm: add support for TIF_NOTIFY_SIGNAL
Wire up TIF_NOTIFY_SIGNAL handling for arm.
Cc: [email protected] Acked-by: Russell King <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
|
| #
c12366ba |
| 25-Oct-2020 |
Linus Walleij <[email protected]> |
ARM: 9015/2: Define the virtual space of KASan's shadow region
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB addressable by a 32-bit architecture) out of the virtual address space to use as shadow memory for KASan as follows:
+----+ 0xffffffff
|    |
|    | |-> Static kernel image (vmlinux) BSS and page table
|    |/
+----+ PAGE_OFFSET
|    |
|    | |-> Loadable kernel modules virtual address space area
|    |/
+----+ MODULES_VADDR = KASAN_SHADOW_END
|    |
|    | |-> The shadow area of kernel virtual address.
|    |/
+----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
|    |   shadow address of MODULES_VADDR
|    |
|    |
|    |
|    | |-> The user space area in lowmem. The kernel address
|    | |   sanitizer do not use this space, nor does it map it.
|    | |
|    | |
|    | |
|    | |
|    |/
------ 0
0 .. TASK_SIZE is the memory that can be used by shared userspace/kernelspace. It is used for userspace processes and for passing parameters and memory buffers in system calls etc. We do not need to shadow this area.
KASAN_SHADOW_START: This value is the shadow address of MODULES_VADDR, the start of kernel virtual space. Since we have modules to load, we need to cover that area with shadow memory as well so we can find memory bugs in modules.
KASAN_SHADOW_END: This value is the shadow address of 0x100000000: the mapping that would come right after the end of kernel memory at 0xffffffff. It is the end of the kernel address sanitizer shadow area. It is also the start of the module area.
KASAN_SHADOW_OFFSET: This value is used to map an address to the corresponding shadow address by the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
As you would expect, >> 3 is equal to dividing by 8, meaning each byte in the shadow memory covers 8 bytes of kernel memory, so one bit of shadow memory per byte of kernel memory is used.
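The mapping can be sanity-checked with a few lines of user-space C; the constants below are the default 3G/1G split values worked out later in this message, repeated here only to verify the arithmetic:

#include <stdio.h>

/* shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET */
static unsigned long shadow_addr(unsigned long address, unsigned long offset)
{
        return (address >> 3) + offset; /* one shadow byte per 8 bytes */
}

int main(void)
{
        unsigned long kasan_shadow_offset = 0x9f000000UL; /* default 3G/1G split */
        unsigned long modules_vaddr = 0xbf000000UL;       /* PAGE_OFFSET - 16M   */

        /* Expect 0xb6e00000, i.e. KASAN_SHADOW_START for this split. */
        printf("shadow of MODULES_VADDR = 0x%lx\n",
               shadow_addr(modules_vaddr, kasan_shadow_offset));
        return 0;
}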
The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending on the VMSPLIT layout of the system: the kernel and userspace can split up lowmem in different ways according to needs, so we calculate the shadow offset depending on this.
When kasan is enabled, the definition of TASK_SIZE is not an 8-bit rotated constant, so we need to modify the TASK_SIZE access code in the *.s file.
The kernel and modules may use different amounts of memory, according to the VMSPLIT configuration, which in turn determines the PAGE_OFFSET.
We use the following KASAN_SHADOW_OFFSETs depending on how the virtual memory is split up:
- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
  - The kernel address space is 3G (0xc0000000)
  - PAGE_OFFSET is then set to 0x40000000 so the kernel static image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0x3f000000, so the modules use addresses 0x3f000000 .. 0x3fffffff
  - So the addresses 0x3f000000 .. 0xffffffff need to be covered with shadow memory. That is 0xc1000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x18200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x26e00000, to KASAN_SHADOW_END at 0x3effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0x3f000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
      KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
      KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
  - The kernel space is 2G (0x80000000)
  - PAGE_OFFSET is set to 0x80000000 so the kernel static image uses 0x80000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0x7f000000, so the modules use addresses 0x7f000000 .. 0x7fffffff
  - So the addresses 0x7f000000 .. 0xffffffff need to be covered with shadow memory. That is 0x81000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x10200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x6ee00000, to KASAN_SHADOW_END at 0x7effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0x7f000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
      KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
      KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split, and this is the default split for ARM:
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xc0000000 so the kernel static image uses 0xc0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0xbf000000, so the modules use addresses 0xbf000000 .. 0xbfffffff
  - So the addresses 0xbf000000 .. 0xffffffff need to be covered with shadow memory. That is 0x41000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x08200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xb6e00000, to KASAN_SHADOW_END at 0xbfffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0xbf000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
      KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
      KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with full 1 GB low memory (VMSPLIT_3G_OPT):
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xb0000000 so the kernel static image uses 0xb0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) = 0xaf000000, so the modules use addresses 0xaf000000 .. 0xafffffff
  - So the addresses 0xaf000000 .. 0xffffffff need to be covered with shadow memory. That is 0x51000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x0a200000 bytes of shadow memory is needed. We "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xa4e00000, to KASAN_SHADOW_END at 0xaeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel address as 0xaf000000 needs to map to the first byte of shadow memory and 0xffffffff needs to map to the last byte of shadow memory. Since:
      SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
      0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
      KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
      KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
      KASAN_SHADOW_OFFSET = 0x8f000000
- The default value of 0xffffffff for KASAN_SHADOW_OFFSET is an error value. We should always match one of the above shadow offsets.
When we do this, TASK_SIZE will sometimes be an odd value that does not fit into an immediate mov assembly instruction. To account for this, we need to rewrite some assembly using TASK_SIZE like this:
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
or
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
This is done to avoid an immediate #TASK_SIZE that needs to fit into a limited number of bits.
Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> # QEMU/KVM/mach-virt/LPAE/8G Tested-by: Florian Fainelli <[email protected]> # Brahma SoCs Tested-by: Ahmad Fatoum <[email protected]> # i.MX6Q Reported-by: Ard Biesheuvel <[email protected]> Signed-off-by: Abbott Liu <[email protected]> Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: Linus Walleij <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |
|
| #
d2912cb1 |
| 04-Jun-2019 |
Thomas Gleixner <[email protected]> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Revision tags: v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7 |
|
| #
e44fc388 |
| 17-Feb-2019 |
Stefan Agner <[email protected]> |
ARM: 8844/1: use unified assembler in assembly files
Use unified assembler syntax (UAL) in assembly files. Divided syntax is considered deprecated. This will also allow building the kernel using LLVM's integrated assembler.
Signed-off-by: Stefan Agner <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8 |
|
| #
f18aef74 |
| 08-Oct-2018 |
Timothy E Baldwin <[email protected]> |
ARM: 8802/1: Call syscall_trace_exit even when system call skipped
On at least x86 and ARM64, and as documented in the ptrace man page, a skipped system call will still cause a syscall-exit ptrace stop.
Prior to this commit, 32-bit ARM did not, resulting in strace being confused when seccomp skips system calls.
This change also impacts programs that use ptrace to skip system calls.
Fixes: ad75b51459ae ("ARM: 7579/1: arch/allow a scno of -1 to not cause a SIGILL") Signed-off-by: Timothy E Baldwin <[email protected]> Signed-off-by: Eugene Syromyatnikov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Tested-by: Kees Cook <[email protected]> Tested-by: Eugene Syromyatnikov <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5 |
|
| #
afc9f65e |
| 13-Jul-2018 |
Vincent Whitchurch <[email protected]> |
ARM: 8781/1: Fix Thumb-2 syscall return for binutils 2.29+
When building the kernel as Thumb-2 with binutils 2.29 or newer, if the assembler has seen the .type directive (via ENDPROC()) for a symbol, it automatically handles the setting of the lowest bit when the symbol is used with ADR. The badr macro on the other hand handles this lowest bit manually. This leads to a jump to a wrong address in the wrong state in the syscall return path:
Internal error: Oops - undefined instruction: 0 [#2] SMP THUMB2
Modules linked in:
CPU: 0 PID: 652 Comm: modprobe Tainted: G D 4.18.0-rc3+ #8
PC is at ret_fast_syscall+0x4/0x62
LR is at sys_brk+0x109/0x128
pc : [<80101004>]  lr : [<801c8a35>]  psr: 60000013
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
Control: 50c5387d  Table: 9e82006a  DAC: 00000051
Process modprobe (pid: 652, stack limit = 0x(ptrval))

80101000 <ret_fast_syscall>:
80101000: b672      cpsid   i
80101002: f8d9 2008 ldr.w   r2, [r9, #8]
80101006: f1b2 4ffe cmp.w   r2, #2130706432 ; 0x7f000000

80101184 <local_restart>:
80101184: f8d9 a000 ldr.w   sl, [r9]
80101188: e92d 0030 stmdb   sp!, {r4, r5}
8010118c: f01a 0ff0 tst.w   sl, #240        ; 0xf0
80101190: d117      bne.n   801011c2 <__sys_trace>
80101192: 46ba      mov     sl, r7
80101194: f5ba 7fc8 cmp.w   sl, #400        ; 0x190
80101198: bf28      it      cs
8010119a: f04f 0a00 movcs.w sl, #0
8010119e: f3af 8014 nop.w   {20}
801011a2: f2af 1ea2 subw    lr, pc, #418    ; 0x1a2
To fix this, add a new symbol name which doesn't have ENDPROC used on it and use that with badr. We can't remove the badr usage since that would cause breakage with older binutils.
Signed-off-by: Vincent Whitchurch <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17 |
|
| #
b74406f3 |
| 02-Jun-2018 |
Mathieu Desnoyers <[email protected]> |
arm: Add syscall detection for restartable sequences
Syscalls are not allowed inside restartable sequences, so add a call to rseq_syscall() at the very beginning of the system call exit path for a CONFIG_DEBUG_RSEQ=y kernel. This helps detect whether a syscall is issued inside a restartable sequence.
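A hedged sketch of where the hook sits (the wrapper name is invented for illustration; rseq_syscall() is the actual debug function):

#include <linux/sched.h>

/* Called first thing on the syscall exit path: with CONFIG_DEBUG_RSEQ=y,
 * rseq_syscall() terminates the task if the syscall was issued from
 * inside a restartable-sequence critical section. */
static inline void debug_check_rseq_on_syscall_exit(struct pt_regs *regs)
{
#ifdef CONFIG_DEBUG_RSEQ
        rseq_syscall(regs);
#endif
}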
Signed-off-by: Mathieu Desnoyers <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dave Watson <[email protected]> Cc: Will Deacon <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "H . Peter Anvin" <[email protected]> Cc: Chris Lameter <[email protected]> Cc: Russell King <[email protected]> Cc: Andrew Hunter <[email protected]> Cc: Michael Kerrisk <[email protected]> Cc: "Paul E . McKenney" <[email protected]> Cc: Paul Turner <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Josh Triplett <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ben Maurer <[email protected]> Cc: [email protected] Cc: Andy Lutomirski <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
|
|
Revision tags: v4.17-rc7, v4.17-rc6, v4.17-rc5 |
|
| #
10573ae5 |
| 11-May-2018 |
Russell King <[email protected]> |
ARM: spectre-v1: fix syscall entry
Prevent speculation at the syscall table decoding by clamping the index used to zero on invalid system call numbers, and using the csdb speculative barrier.
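The change itself is in the assembly entry code, but the idea maps onto the generic nospec helpers; a C-level analogue (function and table names here are assumptions for the sketch) would be:

#include <linux/errno.h>
#include <linux/nospec.h>

/* Clamp the syscall number under speculation before it is used to index
 * the table; out-of-range numbers never touch the table at all. */
static long demo_invoke_syscall(unsigned int scno, struct pt_regs *regs)
{
        if (scno >= NR_syscalls)
                return -ENOSYS;

        scno = array_index_nospec(scno, NR_syscalls);
        return sys_call_table[scno](regs);
}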
Signed-off-by: Russell King <[email protected]> Acked-by: Mark Rutland <[email protected]> Boot-tested-by: Tony Lindgren <[email protected]> Reviewed-by: Tony Lindgren <[email protected]>
|
|
Revision tags: v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1 |
|
| #
c6089061 |
| 24-Nov-2017 |
Russell King <[email protected]> |
ARM: probes: avoid adding kprobes to sensitive kernel-entry/exit code
Avoid adding kprobes to any of the kernel entry/exit or startup assembly code, or code in the identity-mapped region. This code does not conform to the standard C conventions, which means that the expectations of the kprobes code are not fulfilled.
Placing kprobes at some of these locations results in the kernel trying to return to userspace addresses while retaining the CPU in kernel mode.
Tested-by: Naresh Kamboju <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1 |
|
| #
6022f80d |
| 04-Sep-2017 |
Vladimir Murzin <[email protected]> |
ARM: 8695/1: entry: Remove dead code in sys_mmap2
We only support a 4K page size, so remove the dead code.
Signed-off-by: Vladimir Murzin <[email protected]> Signed-off-by: Russell King <[email protected]>
|
| #
e33f8d32 |
| 07-Sep-2017 |
Thomas Garnier <[email protected]> |
arm/syscalls: Optimize address limit check
Disable the generic address limit check in favor of an architecture specific optimized implementation. The generic implementation using pending work flags did not work well with ARM and alignment faults.
The address limit is checked on each syscall return path to user mode, as well as in the IRQ user-mode return function. If the address limit was changed, a function is called to report data corruption (stopping the kernel or process based on configuration).
The address limit check has to be done before any pending work, because that work can reset the address limit, and the process is killed using a SIGKILL signal. For example, the lkdtm address limit check does not work because the signal to kill the process will reset the user-mode address limit.
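For orientation only (the ARM change itself is made in the assembly return path, and the helper name below is invented), the check amounts to roughly this in C:

/* If the address limit was left at KERNEL_DS on the way back to user
 * mode, report data corruption and kill the task before pending work
 * can reset the limit. */
static inline void demo_addr_limit_user_check(void)
{
        if (CHECK_DATA_CORRUPTION(segment_eq(get_fs(), KERNEL_DS),
                                  "Invalid address limit on user-mode return"))
                force_sig(SIGKILL);
}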
Signed-off-by: Thomas Garnier <[email protected]> Signed-off-by: Kees Cook <[email protected]> Tested-by: Kees Cook <[email protected]> Tested-by: Leonard Crestez <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Pratyush Anand <[email protected]> Cc: Dave Martin <[email protected]> Cc: Will Drewry <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Russell King <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: David Howells <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Al Viro <[email protected]> Cc: [email protected] Cc: Yonghong Song <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected]
|