Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4 |

# c077711f | 17-Oct-2024 | Suzuki K Poulose <[email protected]>

arm64: Detect if in a realm and set RIPAS RAM
Detect that the VM is a realm guest by the presence of the RSI interface. This is done after PSCI has been initialised so that we can check the SMCCC conduit before making any RSI calls.
If in a realm then iterate over all memory ensuring that it is marked as RIPAS RAM. The loader is required to do this for us, however if some memory is missed this will cause the guest to receive a hard to debug external abort at some random point in the future. So for a belt-and-braces approach set all memory to RIPAS RAM. Any failure here implies that the RAM regions passed to Linux are incorrect so panic() promptly to make the situation clear.
Reviewed-by: Gavin Shan <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Co-developed-by: Steven Price <[email protected]> Signed-off-by: Steven Price <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
Revision tags: v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3 |

# 99e7a8ad | 05-Jun-2024 | Sudeep Holla <[email protected]>

arm64: cpuidle: Move ACPI specific code into drivers/acpi/arm64/
The ACPI cpuidle LPI FFH code can be moved out of arm64 arch code as it just uses SMCCC. Move all the ACPI cpuidle LPI FFH code into drivers/acpi/arm64/cpuidle.c
Signed-off-by: Sudeep Holla <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Hanjun Guo <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
Revision tags: v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2 |

# 443cbaf9 | 24-Jan-2024 | Baoquan He <[email protected]>

crash: split vmcoreinfo exporting code out from crash_core.c
Now move the relevant codes into separate files: kernel/crash_reserve.c, include/linux/crash_reserve.h.
And add config item CRASH_RESERVE to control its enabling.
And also update the old ifdeffery of CONFIG_CRASH_CORE, including of <linux/crash_core.h> and config item dependency on CRASH_CORE accordingly.
And also do renaming as follows: - arch/xxx/kernel/{crash_core.c => vmcore_info.c} because they are only related to vmcoreinfo exporting on x86, arm64, riscv.
And also remove config item CRASH_CORE, and rely on CONFIG_KEXEC_CORE to decide whether to build crash_core.c.
[[email protected]: remove duplicated include in vmcore_info.c] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Baoquan He <[email protected]> Signed-off-by: Yang Li <[email protected]> Acked-by: Hari Bathini <[email protected]> Cc: Al Viro <[email protected]> Cc: Eric W. Biederman <[email protected]> Cc: Pingfan Liu <[email protected]> Cc: Klara Modin <[email protected]> Cc: Michael Kelley <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Yang Li <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
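
On the Makefile side, the split described above amounts to a rename plus new config guards. A minimal Kbuild sketch (CRASH_RESERVE and the vmcore_info.c rename are taken from the message; the VMCORE_INFO guard shown for the renamed object is assumed):

    # arch/arm64/kernel/Makefile (sketch): crash_core.o becomes vmcore_info.o
    # and is no longer guarded by the removed CRASH_CORE symbol
    obj-$(CONFIG_VMCORE_INFO)   += vmcore_info.o

    # kernel/Makefile (sketch): the reservation code becomes its own unit
    obj-$(CONFIG_CRASH_RESERVE) += crash_reserve.o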

# 8a6e40e1 | 14-Feb-2024 | Ard Biesheuvel <[email protected]>

arm64: head: move dynamic shadow call stack patching into early C runtime
Once we update the early kernel mapping code to only map the kernel once with the right permissions, we can no longer perform code patching via this mapping.
So move this code to an earlier stage of the boot, right after applying the relocations.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
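
In Kbuild terms, moving the patching code into the early C runtime means it leaves the main arch/arm64/kernel/ object list and is built in the position-independent pi/ subdirectory instead. A rough sketch (the file and config names here are illustrative, not quoted from the patch):

    # arch/arm64/kernel/pi/Makefile (sketch)
    obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.o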

# e223a449 | 14-Feb-2024 | Ard Biesheuvel <[email protected]>

arm64: idreg-override: Move to early mini C runtime
We will want to parse the ID register overrides even earlier, so that we can take them into account before creating the kernel mapping. So migrate the code and make it work in the context of the early C runtime. We will move the invocation to an earlier stage in a subsequent patch.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>

# 734958ef | 14-Feb-2024 | Ard Biesheuvel <[email protected]>

arm64: head: move relocation handling to C code
Now that we have a mini C runtime before the kernel mapping is up, we can move the non-trivial relocation processing code out of head.S and reimplement it in C.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>

# d104a6fe | 23-Jan-2024 | Ard Biesheuvel <[email protected]>

arm64: scs: Disable LTO for SCS patching code
Full LTO takes the '-mbranch-protection=none' passed to the compiler when generating the dynamic shadow call stack patching code as a hint to stop emitting PAC instructions altogether. (Thin LTO appears unaffected by this)
Work around this by disabling LTO for the compilation unit, which appears to convince the linker that it should still use PAC in the rest of the kernel.
Fixes: 3b619e22c460 ("arm64: implement dynamic shadow call stack for Clang") Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Tested-by: Sami Tolvanen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
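
Kbuild can opt a single object out of LTO by filtering the LTO flags for just that file. A sketch of the mechanism described above (the object name is illustrative):

    # arch/arm64/kernel/Makefile (sketch): build the SCS patching unit without
    # LTO so the linker keeps PAC instructions in the rest of the kernel
    CFLAGS_REMOVE_patching.o += $(CC_FLAGS_LTO)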

# 2fa28abd | 23-Jan-2024 | Ard Biesheuvel <[email protected]>

arm64: Revert "scs: Work around full LTO issue with dynamic SCS"
This reverts commit 8c5a19cb17a71e ("arm64: scs: Work around full LTO issue with dynamic SCS"), which did not quite fix the issue as intended. Apparently, -fno-unwind-tables is ignored for the final full LTO link when it is set on any of the objects, resulting in an early boot crash due to the SCS patching code patching itself, and attempting to pop the return address from the shadow stack while the associated push was still a PACIASP instruction when it executed.
Reported-by: Sami Tolvanen <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Tested-by: Sami Tolvanen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
Revision tags: v6.8-rc1 |

# 8c5a19cb | 10-Jan-2024 | Ard Biesheuvel <[email protected]>

arm64: scs: Work around full LTO issue with dynamic SCS
Full LTO takes the '-mbranch-protection=none' passed to the compiler when generating the dynamic shadow call stack patching code as a hint to stop emitting PAC instructions altogether. (Thin LTO appears unaffected by this)
Work around this by stripping unwind tables from the object in question, which should be sufficient to prevent the patching code from attempting to patch itself.
Fixes: 3b619e22c460 ("arm64: implement dynamic shadow call stack for Clang") Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
Revision tags: v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3 |

# 94946f9e | 19-May-2023 | Lecopzer Chen <[email protected]>

arm64: add hw_nmi_get_sample_period for preparation of lockup detector
Set a safe maximum CPU frequency of 5 GHz in case a particular platform doesn't implement a cpufreq driver. Although the architecture doesn't put any restriction on the maximum frequency, 5 GHz seems a safe maximum given that the Arm CPUs available in the market are clocked well below 5 GHz.
On the other hand, we can't make it much higher, as that would lead to a large hard-lockup detection timeout on parts which run slower (e.g. 1 GHz on Developerbox) and don't have a cpufreq driver.
Link: https://lkml.kernel.org/r/20230519101840.v5.17.Ia9d02578e89c3f44d3cb12eec8b0176603c8ab2f@changeid Co-developed-by: Sumit Garg <[email protected]> Signed-off-by: Sumit Garg <[email protected]> Co-developed-by: Pingfan Liu <[email protected]> Signed-off-by: Pingfan Liu <[email protected]> Signed-off-by: Lecopzer Chen <[email protected]> Signed-off-by: Douglas Anderson <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chen-Yu Tsai <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Colin Cross <[email protected]> Cc: Daniel Thompson <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Guenter Roeck <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Masayoshi Mizuma <[email protected]> Cc: Matthias Kaehlcke <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: "Ravi V. Shankar" <[email protected]> Cc: Ricardo Neri <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Stephen Boyd <[email protected]> Cc: Tzung-Bi Shih <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>

# ea3752ba | 30-May-2023 | Mark Rutland <[email protected]>

arm64: module: mandate MODULE_PLTS
Contemporary kernels and modules can be relatively large, especially when common debug options are enabled. Using GCC 12.1.0, a v6.3-rc7 defconfig kernel is ~38M, and with PROVE_LOCKING + KASAN_INLINE enabled this expands to ~117M. Shanker reports [1] that the NVIDIA GPU driver alone can consume 110M of module space in some configurations.
Both KASLR and ARM64_ERRATUM_843419 select MODULE_PLTS, so anyone wanting a kernel to have KASLR or run on Cortex-A53 will have MODULE_PLTS selected. This is the case in defconfig and distribution kernels (e.g. Debian, Android, etc).
Practically speaking, this means we're very likely to need MODULE_PLTS and while it's almost guaranteed that MODULE_PLTS will be selected, it is possible to disable support, and we have to maintain some awkward special cases for such unusual configurations.
This patch removes the MODULE_PLTS config option, with the support code always enabled if MODULES is selected. This results in a slight simplification, and will allow for further improvement in subsequent patches.
For any config which currently selects MODULE_PLTS, there will be no functional change as a result of this patch.
[1] https://lore.kernel.org/linux-arm-kernel/[email protected]/
Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Cc: Shanker Donthineni <[email protected]> Cc: Will Deacon <[email protected]> Tested-by: Shanker Donthineni <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
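
In the Makefile this boils down to keying the PLT support code off CONFIG_MODULES rather than a dedicated option. A before/after sketch (the old symbol is shown as arm64 spelled it, ARM64_MODULE_PLTS):

    # before (sketch): PLT support was a separate option
    obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o

    # after (sketch): built whenever module support is enabled
    obj-$(CONFIG_MODULES) += module.o module-plts.o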
Revision tags: v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3 |

# 7755cec6 | 17-Mar-2023 | Marc Zyngier <[email protected]>

arm64: perf: Move PMUv3 driver to drivers/perf
Having the ARM PMUv3 driver sitting in arch/arm64/kernel is getting in the way of being able to use perf on ARMv8 cores running a 32bit kernel, such as 32bit KVM guests.
This patch moves it into drivers/perf/arm_pmuv3.c, with an include file in include/linux/perf/arm_pmuv3.h. The only thing left in arch/arm64 is some mundane perf stuff.
Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Zaid Al-Bassam <[email protected]> Tested-by: Florian Fainelli <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
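
As with the other 'move out of arch/arm64/kernel' changes in this log, the visible Makefile effect is that the object disappears here and reappears under drivers/perf/. A sketch (the config symbol is assumed):

    # drivers/perf/Makefile (sketch)
    obj-$(CONFIG_ARM_PMUV3) += arm_pmuv3.o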
Revision tags: v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3 |

# 3b619e22 | 27-Oct-2022 | Ard Biesheuvel <[email protected]>

arm64: implement dynamic shadow call stack for Clang
Implement dynamic shadow call stack support on Clang, by parsing the unwind tables at init time to locate all occurrences of PACIASP/AUTIASP instructions, and replacing them with the shadow call stack push and pop instructions, respectively.
This is useful because the overhead of the shadow call stack is difficult to justify on hardware that implements pointer authentication (PAC), and given that the PAC instructions are executed as NOPs on hardware that doesn't, we can just replace them without breaking anything. As PACIASP/AUTIASP are guaranteed to be paired with respect to manipulations of the return address, replacing them 1:1 with shadow call stack pushes and pops is guaranteed to result in the desired behavior.
Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Sami Tolvanen <[email protected]> Tested-by: Sami Tolvanen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
Revision tags: v6.1-rc2 |

# 4ef80609 | 17-Oct-2022 | Ard Biesheuvel <[email protected]>

arm64: efi: Move efi-entry.S into the libstub source directory
We will be sharing efi-entry.S with the zboot decompressor build, which does not link against vmlinux directly. So move it into the libstub source directory so we can include it in the libstub static library.
Signed-off-by: Ard Biesheuvel <[email protected]> Acked-by: Catalin Marinas <[email protected]>
Revision tags: v6.1-rc1, v6.0, v6.0-rc7 |

# 32164845 | 24-Sep-2022 | Masahiro Yamada <[email protected]>

kbuild: use obj-y instead extra-y for objects placed at the head
The objects placed at the head of vmlinux need special treatments:
- arch/$(SRCARCH)/Makefile adds them to head-y in order to place them before other archives in the linker command line.
- arch/$(SRCARCH)/kernel/Makefile adds them to extra-y instead of obj-y to avoid them going into built-in.a.
This commit gets rid of the latter.
Create vmlinux.a to collect all the objects that are unconditionally linked to vmlinux. The objects listed in head-y are moved to the head of vmlinux.a by using 'ar m'.
With this, arch/$(SRCARCH)/kernel/Makefile can consistently use obj-y for builtin objects.
There is no *.o that is directly linked to vmlinux. Drop unneeded code in scripts/clang-tools/gen_compile_commands.py.
$(AR) mPi needs 'T' to work around the llvm-ar bug. The fix was suggested by Nathan Chancellor [1].
[1]: https://lore.kernel.org/llvm/[email protected]/
Signed-off-by: Masahiro Yamada <[email protected]> Tested-by: Nick Desaulniers <[email protected]> Reviewed-by: Nicolas Schier <[email protected]>
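
A sketch of what this means for an arch like arm64 (not the literal diff): the head object stops being special-cased and becomes an ordinary builtin, and the top-level build moves it to the front of vmlinux.a with 'ar m':

    # arch/arm64/Makefile (before, sketch)
    head-y := arch/arm64/kernel/head.o

    # arch/arm64/kernel/Makefile (before, sketch)
    extra-y += head.o

    # arch/arm64/kernel/Makefile (after, sketch): head.o is a normal builtin
    obj-y += head.o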
Revision tags: v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5 |

# 3fc24ef3 | 01-Jul-2022 | Ard Biesheuvel <[email protected]>

arm64: compat: Implement misalignment fixups for multiword loads
The 32-bit ARM kernel implements fixups on behalf of user space when using LDM/STM or LDRD/STRD instructions on addresses that are not 32-bit aligned. This is not something that is supported by the architecture, but was done anyway to increase compatibility with user space software, which mostly targeted x86 at the time and did not care about aligned accesses.
This feature is one of the remaining impediments to being able to switch to 64-bit kernels on 64-bit capable hardware running 32-bit user space, so let's implement it for the arm64 compat layer as well.
Note that the intent is to implement the exact same handling of misaligned multi-word loads and stores as the 32-bit kernel does, including what appears to be missing support for user space programs that rely on SETEND to switch to a different byte order and back. Also, like the 32-bit ARM version, we rely on the faulting address reported by the CPU to infer the memory address, instead of decoding the instruction fully to obtain this information.
This implementation is taken from the 32-bit ARM tree, with all pieces removed that deal with instructions other than LDRD/STRD and LDM/STM, or that deal with alignment exceptions taken in kernel mode.
Cc: [email protected] Cc: Vagrant Cascadian <[email protected]> Cc: Riku Voipio <[email protected]> Cc: Steve McIntyre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] [[email protected]: change the option to 'default n'] Signed-off-by: Catalin Marinas <[email protected]>
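
The feature lands as a new compilation unit behind a Kconfig option that defaults to off (the bracketed note in the trailers records the switch to 'default n'). A sketch of the wiring, with the file and option names assumed:

    # arch/arm64/kernel/Makefile (sketch)
    obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o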
Revision tags: v5.19-rc4 |

# aacd149b | 24-Jun-2022 | Ard Biesheuvel <[email protected]>

arm64: head: avoid relocating the kernel twice for KASLR
Currently, when KASLR is in effect, we set up the kernel virtual address space twice: the first time, the KASLR seed is looked up in the device tree, and the kernel virtual mapping is torn down and recreated again, after which the relocations are applied a second time. The latter step means that statically initialized global pointer variables will be reset to their initial values, and to ensure that BSS variables are not set to values based on the initial translation, they are cleared again as well.
All of this is needed because we need the command line (taken from the DT) to tell us whether or not to randomize the virtual address space before entering the kernel proper. However, this code has expanded little by little and now creates global state unrelated to the virtual randomization of the kernel before the mapping is torn down and set up again, and the BSS cleared for a second time. This has created some issues in the past, and it would be better to avoid this little dance if possible.
So instead, let's use the temporary mapping of the device tree, and execute the bare minimum of code to decide whether or not KASLR should be enabled, and what the seed is. Only then, create the virtual kernel mapping, clear BSS, etc and proceed as normal. This avoids the issues around inconsistent global state due to BSS being cleared twice, and is generally more maintainable, as it permits us to defer all the remaining DT parsing and KASLR initialization to a later time.
This means the relocation fixup code runs only a single time as well, allowing us to simplify the RELR handling code too, which is not idempotent and was therefore required to keep track of the offset that was applied the first time around.
Note that this means we have to clone a pair of FDT library objects, so that we can control how they are built - we need the stack protector and other instrumentation disabled so that the code can tolerate being called this early. Note that only the kernel page tables and the temporary stack are mapped read-write at this point, which ensures that the early code does not modify any global state inadvertently.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
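
The 'clone a pair of FDT library objects' part is a Makefile exercise: the early code gets its own directory and is compiled with the stack protector, ftrace and other instrumentation switched off so it can run before the normal kernel environment is up. A rough sketch of the kind of flags involved (not the literal list from the patch):

    # arch/arm64/kernel/pi/Makefile (sketch): position independent, uninstrumented
    KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
                     -fpie -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
                     -I$(srctree)/scripts/dtc/libfdt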
Revision tags: v5.19-rc3, v5.19-rc2, v5.19-rc1 |

# 802b9111 | 23-May-2022 | Andrey Konovalov <[email protected]>

arm64: kasan: do not instrument stacktrace.c
Disable KASAN instrumentation of arch/arm64/kernel/stacktrace.c.
This speeds up Generic KASAN by 5-20%.
As a side-effect, KASAN is now unable to detect bugs in the stack trace collection code. This is taken as an acceptable downside.
Also replace READ_ONCE_NOCHECK() with READ_ONCE() in stacktrace.c. As the file is now not instrumented, there is no need to use the NOCHECK version of READ_ONCE().
Suggested-by: Mark Rutland <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Andrey Konovalov <[email protected]> Link: https://lore.kernel.org/r/c4c944a2a905e949760fbeb29258185087171708.1653317461.git.andreyknvl@google.com Signed-off-by: Will Deacon <[email protected]>
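
Disabling instrumentation for a single compilation unit is a one-line Kbuild change; a sketch of the mechanism described above:

    # arch/arm64/kernel/Makefile (sketch)
    KASAN_SANITIZE_stacktrace.o := n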
Revision tags: v5.18, v5.18-rc7 |

# 205f3991 | 10-May-2022 | Joey Gouly <[email protected]>

arm64: vdso: fix makefile dependency on vdso.so
There is currently no dependency for vdso*-wrap.S on vdso*.so, which means that you can get a build that uses a stale vdso*-wrap.o.
In commit a5b8ca97fbf8, the file that includes the vdso.so was moved and renamed from arch/arm64/kernel/vdso/vdso.S to arch/arm64/kernel/vdso-wrap.S; when this happened, the Makefile was not updated to force the dependency on vdso.so.
Fixes: a5b8ca97fbf8 ("arm64: do not descend to vdso directories twice") Signed-off-by: Joey Gouly <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Vincenzo Frascino <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
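
The fix is to state the dependency explicitly so that vdso*-wrap.o is rebuilt whenever the embedded vdso*.so changes. Something along these lines (a sketch, not the verbatim hunk):

    # arch/arm64/kernel/Makefile (sketch)
    $(obj)/vdso-wrap.o: $(obj)/vdso/vdso.so
    $(obj)/vdso32-wrap.o: $(obj)/vdso32/vdso.so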
Revision tags: v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3 |

# 6dd8b1a0 | 31-Jan-2022 | Catalin Marinas <[email protected]>

arm64: mte: Dump the MTE tags in the core file
For each vma mapped with PROT_MTE (the VM_MTE flag set), generate a PT_ARM_MEMTAG_MTE segment in the core file and dump the corresponding tags. The in-file size for such segments is 128 bytes per page.
For pages in a VM_MTE vma which are not present in the user page tables or don't have the PG_mte_tagged flag set (e.g. execute-only), just write zeros in the core file.
An example of program headers for two vmas, one 2-page, the other 4-page long:
  Type         Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
  ...
  LOAD         0x030000 0x0000ffff80034000 0x0000000000000000 0x000000 0x002000 RW  0x1000
  LOAD         0x030000 0x0000ffff80036000 0x0000000000000000 0x004000 0x004000 RW  0x1000
  ...
  LOPROC+0x1   0x05b000 0x0000ffff80034000 0x0000000000000000 0x000100 0x002000     0
  LOPROC+0x1   0x05b100 0x0000ffff80036000 0x0000000000000000 0x000200 0x004000     0
Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Luis Machado <[email protected]> Reviewed-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
Revision tags: v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6 |

# 8212f898 | 13-Oct-2021 | Masahiro Yamada <[email protected]>

kbuild: use more subdir- for visiting subdirectories while cleaning
Documentation/kbuild/makefiles.rst suggests to use "archclean" for cleaning arch/$(SRCARCH)/boot/, but it is not a hard requirement.
Since commit d92cc4d51643 ("kbuild: require all architectures to have arch/$(SRCARCH)/Kbuild"), we can use the "subdir- += boot" trick for all architectures. This can take advantage of the parallel option (-j) for "make clean".
I also cleaned up the comments in arch/$(SRCARCH)/Makefile. The "archdep" target no longer exists.
Signed-off-by: Masahiro Yamada <[email protected]> Reviewed-by: Kees Cook <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Michael Ellerman <[email protected]> (powerpc)
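
The 'subdir- += boot' trick referred to above looks like this in the arch Makefile; with it, 'make clean' descends into boot/ like any other subdirectory and benefits from -j:

    # arch/arm64/Makefile (sketch)
    subdir- += boot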
Revision tags: v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2 |

# e6f85cbe | 15-Jul-2021 | Mark Rutland <[email protected]>

arm64: entry: fix KCOV suppression
We suppress KCOV for entry.o rather than entry-common.o. As entry.o is built from entry.S, this is pointless, and permits instrumentation of entry-common.o, which is built from entry-common.c.
Fix the Makefile to suppress KCOV for entry-common.o, as we had intended to begin with. I've verified with objdump that this is working as expected.
Fixes: bf6fa2c0dda7 ("arm64: entry: don't instrument entry code with KCOV") Signed-off-by: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: James Morse <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
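
The knob involved is per-object, so the bug was simply naming the wrong object. A sketch of the intended state:

    # arch/arm64/kernel/Makefile (sketch): entry-common.c must not be
    # instrumented by KCOV; entry.S was never instrumented to begin with
    KCOV_INSTRUMENT_entry-common.o := n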
Revision tags: v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6 |

# b5df5b83 | 07-Jun-2021 | Mark Rutland <[email protected]>

arm64: idle: don't instrument idle code with KCOV
The low-level idle code in arch_cpu_idle() and its callees runs at a time when portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code.
We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by factoring these functions into a new idle.c so that we can disable KCOV for the entire compilation unit, as is done for the core idle code in kernel/sched/idle.c.
We'd like to keep instrumentation of the rest of process.c, and for the existing code in cpuidle.c, so a new compilation unit is preferable. The arch_cpu_idle_dead() function in process.c is a cpu hotplug function that is safe to instrument, so it is left as-is in process.c.
Signed-off-by: Mark Rutland <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Marc Zyngier <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

# bf6fa2c0 | 07-Jun-2021 | Mark Rutland <[email protected]>

arm64: entry: don't instrument entry code with KCOV
The code in entry-common.c runs at exception entry and return boundaries, where portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code.
We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by disabling KCOV for the entire compilation unit.
Signed-off-by: Mark Rutland <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Marc Zyngier <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
Revision tags: v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2 |

# 72fd7236 | 03-Mar-2021 | Julien Thierry <[email protected]>

arm64: Move instruction encoder/decoder under lib/
Aarch64 instruction set encoding and decoding logic can prove useful for some features/tools both part of the kernel and outside the kernel.
Isolate the function dealing only with encoding/decoding instructions, with minimal dependency on kernel utilities in order to be able to reuse that code.
Code was only moved; no code should have been added, removed or modified.
Signed-off-by: Julien Thierry <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>