Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2 |
# ca0f4fe7 | 06-Feb-2025 | Nathan Chancellor <[email protected]>
arm64: Handle .ARM.attributes section in linker scripts
A recent LLVM commit [1] started generating an .ARM.attributes section similar to the one that exists for 32-bit, which results in orphan section warnings (or errors if CONFIG_WERROR is enabled) from the linker because it is not handled in the arm64 linker scripts.
ld.lld: error: arch/arm64/kernel/vdso/vgettimeofday.o:(.ARM.attributes) is being placed in '.ARM.attributes'
ld.lld: error: arch/arm64/kernel/vdso/vgetrandom.o:(.ARM.attributes) is being placed in '.ARM.attributes'
ld.lld: error: vmlinux.a(lib/vsprintf.o):(.ARM.attributes) is being placed in '.ARM.attributes'
ld.lld: error: vmlinux.a(lib/win_minmax.o):(.ARM.attributes) is being placed in '.ARM.attributes'
ld.lld: error: vmlinux.a(lib/xarray.o):(.ARM.attributes) is being placed in '.ARM.attributes'
Discard the new section in the necessary linker scripts to resolve the warnings, as the kernel and vDSO do not need to retain it, similar to the .note.gnu.property section.
Cc: [email protected] Fixes: b3e5d80d0c48 ("arm64/build: Warn on orphan section placement") Link: https://github.com/llvm/llvm-project/commit/ee99c4d4845db66c4daa2373352133f4b237c942 [1] Signed-off-by: Nathan Chancellor <[email protected]> Link: https://lore.kernel.org/r/20250206-arm64-handle-arm-attributes-in-linker-script-v3-1-d53d169913eb@kernel.org Signed-off-by: Will Deacon <[email protected]>
Revision tags: v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7 |
# 340fd66c | 06-Nov-2024 | Masahiro Yamada <[email protected]>
arm64: fix .data.rel.ro size assertion when CONFIG_LTO_CLANG
Commit be2881824ae9 ("arm64/build: Assert for unwanted sections") introduced an assertion to ensure that the .data.rel.ro section does not exist.
However, this check does not work when CONFIG_LTO_CLANG is enabled, because .data.rel.ro matches the .data.[0-9a-zA-Z_]* pattern in the DATA_MAIN macro.
Move the ASSERT() above the RW_DATA() line.
Fixes: be2881824ae9 ("arm64/build: Assert for unwanted sections")
Signed-off-by: Masahiro Yamada <[email protected]>
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
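A sketch of the resulting ordering; the assertion message and the RW_DATA() arguments are indicative of the arm64 script rather than quoted from it:

    /* Must come before RW_DATA(): with CONFIG_LTO_CLANG, DATA_MAIN's
       .data.[0-9a-zA-Z_]* wildcard would otherwise absorb .data.rel.ro
       and the check could never fire. */
    ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")

    RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)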
Revision tags: v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2 |
# 92a10d38 | 30-Jul-2024 | Jann Horn <[email protected]>
runtime constants: move list of constants to vmlinux.lds.h
Refactor the list of constant variables into a macro. This should make it easier to add more constants in the future.
Signed-off-by: Jann Horn <[email protected]>
Reviewed-by: Alexander Gordeev <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
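A hedged sketch of such a macro in include/asm-generic/vmlinux.lds.h; the exact name and layout upstream may differ:

    /* Emit one output section per runtime-patched constant, so every
       architecture linker script can pull them in with one macro. */
    #define RUNTIME_CONST(t, x)                                         \
        . = ALIGN(8);                                                   \
        runtime_##t##_##x : AT(ADDR(runtime_##t##_##x) - LOAD_OFFSET) { \
            *(runtime_##t##_##x)                                        \
        }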
Revision tags: v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3 |
# 94a2bc0f | 08-Jun-2024 | Linus Torvalds <[email protected]>
arm64: add 'runtime constant' support
This implements the runtime constant infrastructure for arm64, allowing the dcache d_hash() function to be generated using the hash table address as a constant, followed by a shift of the hash index by a constant.
[ Fixed up to deal with the big-endian case as per Mark Rutland ]
Signed-off-by: Linus Torvalds <[email protected]>
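On the linker-script side this reduces to emitting the per-constant sections for the two d_hash() inputs, roughly (a sketch, assuming the RUNTIME_CONST() macro shown earlier):

    /* Location tables for the instructions to patch at boot. */
    RUNTIME_CONST(shift, d_hash_shift)
    RUNTIME_CONST(ptr, dentry_hashtable)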
Revision tags: v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5 |
# 97a6f43b | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: head: Move early kernel mapping routines into C code
The asm version of the kernel mapping code works fine for creating a coarse grained identity map, but for mapping the kernel down to its exact boundaries with the right attributes, it is not suitable. This is why we create a preliminary RWX kernel mapping first, and then rebuild it from scratch later on.
So let's reimplement this in C, in a way that will make it unnecessary to create the kernel page tables yet another time in paging_init().
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
# dcfe969a | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: head: Run feature override detection before mapping the kernel
To permit the feature overrides to be taken into account before the KASLR init code runs and the kernel mapping is created, move the detection code to an earlier stage in the boot.
In a subsequent patch, this will be taken advantage of by merging the preliminary and permanent mappings of the kernel text and data into a single one that gets created and relocated before start_kernel() is called.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
# aa99aad7 | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: head: Clear BSS and the kernel page tables in one go
We will move the CPU feature overrides into BSS in a subsequent patch, and this requires that BSS is zeroed before the feature override detection code runs. So let's map BSS read-write in the ID map, and zero it via this mapping.
Since the kernel page tables are right next to it, and also zeroed via the ID map, let's drop the separate clear_page_tables() function, and just zero everything in one go.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
# 734958ef | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: head: move relocation handling to C code
Now that we have a mini C runtime before the kernel mapping is up, we can move the non-trivial relocation processing code out of head.S and reimplement it in C.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
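The C routine only needs the relocation table bounds, which the linker script already exports; a minimal sketch of that part, with symbol names as conventionally used by the arm64 script:

    .rela.dyn : ALIGN(8) {
        __rela_start = .;
        *(.rela .rela*)
        __rela_end = .;
    }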
Revision tags: v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1 |
# 0fddb79b | 02-May-2023 | Fangrui Song <[email protected]>
arm64: lds: move .got section out of .text
Currently, the .got section is placed within the output section .text. However, when .got is non-empty, the SHF_WRITE flag is set for .text when linked by lld. GNU ld recognizes .text as a special section and ignores the SHF_WRITE flag; if the section were named anything else, GNU ld would set the SHF_WRITE flag as well.
The kernel has performed R_AARCH64_RELATIVE resolving very early, and can then assume that .got is read-only. Let's move .got to the vmlinux_rodata pseudo-segment.
As Ard Biesheuvel notes:
"This matters to consumers of the vmlinux ELF representation of the kernel image, such as syzkaller, which disregards writable PT_LOAD segments when resolving code symbols. The kernel itself does not care about this distinction, but given that the GOT contains data and not code, it does not require executable permissions, and therefore does not belong in .text to begin with."
Reviewed-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Fangrui Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
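A sketch of the move: the GOT becomes its own output section in the read-only area instead of an input section hidden inside .text:

    .got : {
        /* Resolved once via R_AARCH64_RELATIVE at early boot,
           effectively read-only afterwards. */
        *(.got)
    }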
Revision tags: v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6 |
# dc4824fa | 27-Jan-2023 | Mark Rutland <[email protected]>
arm64: avoid executing padding bytes during kexec / hibernation
Currently we rely on the HIBERNATE_TEXT section starting with the entry point to swsusp_arch_suspend_exit, and the KEXEC_TEXT section starting with the entry point to arm64_relocate_new_kernel. In both cases we copy the entire section into a dynamically-allocated page, and then later branch to the start of this page.
SYM_FUNC_START() will align the function entry points to CONFIG_FUNCTION_ALIGNMENT, and when the linker later processes the assembled code it will place padding bytes before the function entry point if the location counter was not already sufficiently aligned. The linker happens to use the value zero for these padding bytes.
This padding may end up being applied whenever CONFIG_FUNCTION_ALIGNMENT is greater than 4, which can be the case with CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B=y or CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS=y.
When such padding is applied, attempting to kexec or resume from hibernate will result in a crash: the kernel will branch to the padding bytes at the start of the dynamically-allocated page, and as those bytes are zero they will decode as UDF #0, which reliably triggers an UNDEFINED exception. For example:
| # ./kexec --reuse-cmdline -f Image
| [ 46.965800] kexec_core: Starting new kernel
| [ 47.143641] psci: CPU1 killed (polled 0 ms)
| [ 47.233653] psci: CPU2 killed (polled 0 ms)
| [ 47.323465] psci: CPU3 killed (polled 0 ms)
| [ 47.324776] Bye!
| [ 47.327072] Internal error: Oops - Undefined instruction: 0000000002000000 [#1] SMP
| [ 47.328510] Modules linked in:
| [ 47.329086] CPU: 0 PID: 259 Comm: kexec Not tainted 6.2.0-rc5+ #3
| [ 47.330223] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
| [ 47.331497] pstate: 604003c5 (nZCv DAIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| [ 47.332782] pc : 0x43a95000
| [ 47.333338] lr : machine_kexec+0x190/0x1e0
| [ 47.334169] sp : ffff80000d293b70
| [ 47.334845] x29: ffff80000d293b70 x28: ffff000002cc0000 x27: 0000000000000000
| [ 47.336292] x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| [ 47.337744] x23: ffff80000a837858 x22: 0000000048ec9000 x21: 0000000000000010
| [ 47.339192] x20: 00000000adc83000 x19: ffff000000827000 x18: 0000000000000006
| [ 47.340638] x17: ffff800075a61000 x16: ffff800008000000 x15: ffff80000d293658
| [ 47.342085] x14: 0000000000000000 x13: ffff80000d2937f7 x12: ffff80000a7ff6e0
| [ 47.343530] x11: 00000000ffffdfff x10: ffff80000a8ef8e0 x9 : ffff80000813ef00
| [ 47.344976] x8 : 000000000002ffe8 x7 : c0000000ffffdfff x6 : 00000000000affa8
| [ 47.346431] x5 : 0000000000001fff x4 : 0000000000000001 x3 : ffff80000a0a3008
| [ 47.347877] x2 : ffff80000a8220f8 x1 : 0000000043a95000 x0 : ffff000000827000
| [ 47.349334] Call trace:
| [ 47.349834] 0x43a95000
| [ 47.350338] kernel_kexec+0x88/0x100
| [ 47.351070] __do_sys_reboot+0x108/0x268
| [ 47.351873] __arm64_sys_reboot+0x2c/0x40
| [ 47.352689] invoke_syscall+0x78/0x108
| [ 47.353458] el0_svc_common.constprop.0+0x4c/0x100
| [ 47.354426] do_el0_svc+0x34/0x50
| [ 47.355102] el0_svc+0x34/0x108
| [ 47.355747] el0t_64_sync_handler+0xf4/0x120
| [ 47.356617] el0t_64_sync+0x194/0x198
| [ 47.357374] Code: bad PC value
| [ 47.357999] ---[ end trace 0000000000000000 ]---
| [ 47.358937] Kernel panic - not syncing: Oops - Undefined instruction: Fatal exception
| [ 47.360515] Kernel Offset: disabled
| [ 47.361230] CPU features: 0x002000,00050108,c8004203
| [ 47.362232] Memory Limit: none
Note: Unfortunately the code dump reports "bad PC value" as it attempts to dump some instructions prior to the UDF (i.e. before the start of the page), and terminates early upon a fault, obscuring the problem.
This patch fixes this issue by aligning the section start markers to CONFIG_FUNCTION_ALIGNMENT using the ALIGN_FUNCTION() helper, which ensures that the linker never needs to place padding bytes within the section. Assertions are added to verify that each section begins with the function we expect, making our implicit requirement explicit.
In future it might be nice to rework the kexec and hibernation code to decouple the section start from the entry point, but that involves much more significant changes that come with a higher risk of error, so I've tried to keep this fix as simple as possible for now.
Fixes: 47a15aa54427 ("arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT")
Reported-by: CKI Project <[email protected]>
Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/
Signed-off-by: Mark Rutland <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
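A sketch of the fix for the hibernate region (the kexec one is analogous); marker and assertion text are indicative:

    #define HIBERNATE_TEXT                   \
        ALIGN_FUNCTION();                    \
        __hibernate_exit_text_start = .;     \
        *(.hibernate_exit.text)              \
        __hibernate_exit_text_end = .;

    ASSERT(__hibernate_exit_text_start == swsusp_arch_suspend_exit,
           "Hibernate exit text does not start with swsusp_arch_suspend_exit")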
Revision tags: v6.2-rc5, v6.2-rc4 |
# af7249b3 | 11-Jan-2023 | Ard Biesheuvel <[email protected]>
arm64: kernel: move identity map out of .text mapping
Reorganize the ID map slightly so that only code that is executed with the MMU off or via the 1:1 mapping remains. This allows us to move the identity map out of the .text segment, as it will no longer need executable permissions via the kernel mapping.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
# 2b5a0e42 | 12-Jan-2023 | Peter Zijlstra <[email protected]>
objtool/idle: Validate __cpuidle code as noinstr
Idle code is very like entry code in that RCU isn't available. As such, add a little validation.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Tony Lindgren <[email protected]>
Tested-by: Ulf Hansson <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
Acked-by: Frederic Weisbecker <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Revision tags: v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3 |
# 68c76ad4 | 27-Oct-2022 | Ard Biesheuvel <[email protected]>
arm64: unwind: add asynchronous unwind tables to kernel and modules
Enable asynchronous unwind table generation for both the core kernel as well as modules, and emit the resulting .eh_frame sections as init code so we can use the unwind directives for code patching at boot or module load time.
This will be used by dynamic shadow call stack support, which will rely on code patching rather than compiler codegen to emit the shadow call stack push and pop instructions.
Signed-off-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Nick Desaulniers <[email protected]>
Reviewed-by: Sami Tolvanen <[email protected]>
Tested-by: Sami Tolvanen <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
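A sketch of emitting the unwind tables with the init data so they can be discarded once boot-time patching is done; symbol names are indicative:

    . = ALIGN(4);
    .eh_frame : {
        __eh_frame_start = .;
        *(.eh_frame)
        __eh_frame_end = .;
    }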
Revision tags: v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4 |
# d7bea550 | 24-Jun-2022 | Ard Biesheuvel <[email protected]>
arm64: head: use relative references to the RELA and RELR tables
Formerly, we had to access the RELA and RELR tables via the kernel mapping that was being relocated, and so deriving the start and end addresses using ADRP/ADD references was not possible, as the relocation code runs from the ID map.
Now that we map the entire kernel image via the ID map, we can simplify this, and just load the entries via the ID map as well.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# c3cee924 | 24-Jun-2022 | Ard Biesheuvel <[email protected]>
arm64: head: cover entire kernel image in initial ID map
As a first step towards avoiding the need to create, tear down and recreate the kernel virtual mapping with MMU and caches disabled, start by expanding the ID map so it covers the page tables as well as all executable code. This will allow us to populate the page tables with the MMU and caches on, and call KASLR init code before setting up the virtual mapping.
Since this ID map is only needed at boot, create it as a temporary set of page tables, and populate the permanent ID map after enabling the MMU and caches. While at it, switch to read-only attributes where possible, as writable permissions are only needed for the initial kernel page tables. Note that on 4k granule configurations, the permanent ID map will now be reduced to a single page rather than a 2M block mapping.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# 1c9a8e87 | 22-Jun-2022 | Ard Biesheuvel <[email protected]>
arm64: entry: simplify trampoline data page
Get rid of some clunky open-coded arithmetic on section addresses, by emitting the trampoline data variables into a separate, dedicated r/o data section, and putting it at the next page boundary. This way, we can access the literals via a single LDR instruction.
While at it, get rid of other, implicit literals, and use ADRP/ADD or MOVZ/MOVK sequences, as appropriate. Note that the latter are only supported for CONFIG_RELOCATABLE=n (which is usually the case if CONFIG_RANDOMIZE_BASE=n), so update the CPP conditionals to reflect this.
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
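A sketch of the layout this describes: padding the trampoline text to the next page boundary and emitting the data right behind it keeps the literals at a fixed, single-LDR-reachable offset (section and symbol names indicative):

    .entry.tramp.text : {
        __entry_tramp_text_start = .;
        *(.entry.tramp.text)
        . = ALIGN(PAGE_SIZE);
        __entry_tramp_text_end = .;
        *(.entry.tramp.data)
    }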
Revision tags: v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5 |
# 6ee3cf6a | 29-Apr-2022 | Ard Biesheuvel <[email protected]>
arm64: lds: move special code sections out of kernel exec segment
There are a few code sections that are emitted into the kernel's executable .text segment simply because they contain code, but are actually never executed via this mapping, so they can happily live in a region that gets mapped without executable permissions, reducing the risk of being gadgetized.
Note that the kexec and hibernate region contents are always copied into a fresh page, and so there is no need to align them as long as the overall size of each is below 4 KiB.
Signed-off-by: Ard Biesheuvel <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
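A sketch of the resulting placement, using the section macros the arm64 script has for these regions (shown inside the read-only segment; exact position indicative):

    /* Never executed via the kernel mapping, so keep these out of
       the executable .text segment. */
    TRAMP_TEXT
    HIBERNATE_TEXT
    KEXEC_TEXT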
Revision tags: v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2 |
# a9c406e6 | 18-Nov-2021 | James Morse <[email protected]>
arm64: entry: Allow the trampoline text to occupy multiple pages
Adding a second set of vectors to .entry.tramp.text will make it larger than a single 4K page.
Allow the trampoline text to occupy up to three pages by adding two more fixmap slots. Previous changes to tramp_valias allowed it to reach beyond a single page.
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: James Morse <[email protected]>
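The link-time guard grows accordingly; a sketch of the assertion under the new three-page limit (message text indicative):

    ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) <= 3*PAGE_SIZE,
           "Entry trampoline text too big")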
Revision tags: v5.16-rc1, v5.15, v5.15-rc7 |
# bf6e667f | 19-Oct-2021 | Mark Rutland <[email protected]>
arm64: vmlinux.lds.S: remove `.fixup` section
We no longer place anything into a `.fixup` section, so we no longer need to place those sections into the `.text` section in the main kernel Image.
Remove the use of `.fixup`.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: James Morse <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# d6e2cc56 | 19-Oct-2021 | Mark Rutland <[email protected]>
arm64: extable: add `type` and `data` fields
Subsequent patches will add specialized handlers for fixups, in addition to the simple PC fixup and BPF handlers we have today. In preparation, this patch adds a new `type` field to struct exception_table_entry, and uses this to distinguish the fixup and BPF cases. A `data` field is also added so that subsequent patches can associate data specific to each exception site (e.g. register numbers).
Handlers are named ex_handler_*() for consistency, following the example of x86. At the same time, get_ex_fixup() is split out into a helper so that it can be used by other ex_handler_*() functions in subsequent patches.
This patch will increase the size of the exception tables, which will be remedied by subsequent patches removing redundant fixup code. There should be no functional change as a result of this patch.
Since each entry is now 12 bytes in size, we must reduce the alignment of each entry from `.align 3` (i.e. 8 bytes) to `.align 2` (i.e. 4 bytes), which is the natural alignment of the `insn` and `fixup` fields. The current 8-byte alignment is a holdover from when the `insn` and `fixup` fields were 8 bytes, and while not harmful it has not been necessary since commit:
6c94f27ac847ff8e ("arm64: switch to relative exception tables")
Similarly, RO_EXCEPTION_TABLE_ALIGN is dropped to 4 bytes.
Concurrently with this patch, x86's exception table entry format is being updated (similarly to a 12-byte format, with 32-bytes of absolute data). Once both have been merged it should be possible to unify the sorttable logic for the two.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andrii Nakryiko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: James Morse <[email protected]>
Cc: Jean-Philippe Brucker <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
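On the linker-script side, the alignment change boils down to the arch override consumed by the generic RO_DATA() machinery; a sketch:

    /* struct exception_table_entry is now 12 bytes (insn, fixup,
       type/data), so 4-byte alignment is its natural alignment. */
    #define RO_EXCEPTION_TABLE_ALIGN    4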
Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4 |
# 19a046f0 | 30-Sep-2021 | Pasha Tatashin <[email protected]>
arm64: kexec: use ld script for relocation function
Currently, relocation code declares start and end variables which are used to compute its size.
A better way to do this is to use the linker script and put the relocation function in its own section.
Signed-off-by: Pasha Tatashin <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
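A sketch of what "its own section" looks like in the script, with the bounds the kexec code can use instead of hand-maintained labels (section and symbol names indicative):

    .kexec_relocate.text : {
        __relocate_new_kernel_start = .;
        *(.kexec_relocate.text)
        __relocate_new_kernel_end = .;
    }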
Revision tags: v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5 |
# eb48d154 | 02-Aug-2021 | Marc Zyngier <[email protected]>
arm64: Move .hyp.rodata outside of the _sdata.._edata range
The HYP rodata section is currently lumped together with the BSS, which isn't exactly what is expected (it gets registered with kmemleak, for example).
Move it away so that it is actually marked RO. As an added benefit, it isn't registered with kmemleak anymore.
Fixes: 380e18ade4a5 ("KVM: arm64: Introduce a BSS section for use at Hyp")
Suggested-by: Catalin Marinas <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: [email protected] #5.13
Acked-by: Catalin Marinas <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
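A sketch of the move within the arm64 script: emit the HYP data sections alongside the other read-only data, ahead of the RW_DATA() region that defines _sdata/_edata (macro placement indicative):

    RO_DATA(PAGE_SIZE)

    HYPERVISOR_DATA_SECTIONS    /* .hyp.rodata: mapped RO, now outside
                                   _sdata.._edata and invisible to kmemleak */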
Revision tags: v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4 |
# b83042f0 | 19-Mar-2021 | Quentin Perret <[email protected]>
KVM: arm64: Page-align the .hyp sections
We will soon unmap the .hyp sections from the host stage 2 in Protected nVHE mode, which obviously works with at least page granularity, so make sure to align them correctly.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Quentin Perret <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
# 380e18ad | 19-Mar-2021 | Quentin Perret <[email protected]>
KVM: arm64: Introduce a BSS section for use at Hyp
Currently, the hyp code cannot make full use of a bss, as the kernel section is mapped read-only.
While this mapping could simply be changed to read-write, it would intermingle the hyp and kernel state even more than they currently are. Instead, introduce a __hyp_bss section that uses reserved pages, and create the appropriate RW hyp mappings during KVM init.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Quentin Perret <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
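A sketch of how the HYP part can be carved out at the start of .bss with its own link-time bounds, which KVM init then maps RW at EL2 (hook name per the generic BSS_SECTION machinery; details indicative):

    /* Place the HYP bss first inside .bss, bracketed by symbols the
       KVM init code uses to create its private RW mapping. */
    #define BSS_FIRST_SECTIONS              \
        __hyp_bss_start = .;                \
        *(HYP_SECTION_NAME(.bss))           \
        __hyp_bss_end = .;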
Revision tags: v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7 |
# 0188a894 | 02-Feb-2021 | Joey Gouly <[email protected]>
arm64: vmlinux.ld.S: add assertion for tramp_pg_dir offset
Add TRAMP_SWAPPER_OFFSET and use that instead of hardcoding the offset between swapper_pg_dir and tramp_pg_dir.
Then use TRAMP_SWAPPER_OFFSET to assert that the offset is correct at link time.
Signed-off-by: Joey Gouly <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
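A sketch of the resulting guard; defining the constant in a header lets C code and the linker script agree on the layout (assertion message indicative):

    ASSERT(swapper_pg_dir - tramp_pg_dir == TRAMP_SWAPPER_OFFSET,
           "TRAMP_SWAPPER_OFFSET is wrong!")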