Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3 |
# 9c46929e | 24-Nov-2021 | Ard Biesheuvel <[email protected]>
ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems
On UP systems, only a single task can be 'current' at the same time, which means we can use a global variable to track it. This means we can also enable THREAD_INFO_IN_TASK for those systems, as in that case, thread_info is accessed via current rather than the other way around, removing the need to store thread_info at the base of the task stack. This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP systems as well.
To partially mitigate the performance overhead of this arrangement, use an ADD/ADD/LDR sequence with the appropriate PC-relative group relocations to load the value of current when needed. This means that accessing current will still only require a single load as before, avoiding the need for a literal to carry the address of the global variable in each function. However, accessing thread_info will now require this load as well.
Acked-by: Linus Walleij <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Tested-by: Marc Zyngier <[email protected]> Tested-by: Vladimir Murzin <[email protected]> # ARMv7M
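A minimal C sketch of the arrangement described above, assuming a single CPU; the global's name and the accessors are illustrative, not the actual arch/arm code:

    /* On UP only one task can be current, so a plain global suffices. */
    struct task_struct;

    static struct task_struct *__current;    /* illustrative name */

    static inline struct task_struct *get_current(void)
    {
            return __current;    /* one PC-relative load, as noted above */
    }

    static inline void set_current(struct task_struct *tsk)
    {
            __current = tsk;     /* updated at every context switch */
    }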
Revision tags: v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2 |
# 50596b75 | 18-Sep-2021 | Ard Biesheuvel <[email protected]>
ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user space, we can use it to keep the 'current' pointer while running in the kernel. This removes the need to access it via thread_info, which is located at the base of the stack, but will be moved out of there in a subsequent patch.
Use the __builtin_thread_pointer() helper when available - this will help GCC understand that reloading the value within the same function is not necessary, even when using the per-task stack protector (which also generates accesses via the TLS register). For example, the generated code below loads TPIDRURO only once, and uses it to access both the stack canary and the preempt_count fields.
    <do_one_initcall>:
            e92d 41f0   stmdb   sp!, {r4, r5, r6, r7, r8, lr}
            ee1d 4f70   mrc     15, 0, r4, cr13, cr0, {3}
            4606        mov     r6, r0
            b094        sub     sp, #80             ; 0x50
            f8d4 34e8   ldr.w   r3, [r4, #1256]     ; 0x4e8  <- stack canary
            9313        str     r3, [sp, #76]       ; 0x4c
            f8d4 8004   ldr.w   r8, [r4, #4]                 <- preempt count
Co-developed-by: Keith Packard <[email protected]> Signed-off-by: Keith Packard <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]>
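A hedged C sketch of reading 'current' back from the TLS register: __builtin_thread_pointer() is the builtin named above, the mrc fallback encoding (CP15 c13, c0, 3) matches the listing, and the feature-test macro is illustrative:

    struct task_struct;

    static inline struct task_struct *get_current(void)
    {
    #ifdef USE_BUILTIN_THREAD_POINTER    /* illustrative "when available" test */
            return (struct task_struct *)__builtin_thread_pointer();
    #else
            struct task_struct *tsk;

            /* TPIDRURO: the same mrc as in the listing above */
            __asm__("mrc p15, 0, %0, c13, c0, 3" : "=r" (tsk));
            return tsk;
    #endif
    }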
Revision tags: v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6 |
# 62c4a2e2 | 14-Sep-2020 | Ard Biesheuvel <[email protected]>
ARM: head-common.S: use PC-relative insn sequence for __proc_info
Replace the open coded PC relative offset calculations with a pair of adr_l invocations. This removes some open coded arithmetic involving virtual addresses, avoids literal pools on v7+, and slightly reduces the footprint of the code.
Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
# 5615f69b | 25-Oct-2020 | Linus Walleij <[email protected]>
ARM: 9016/2: Initialize the mapping of KASan shadow memory
This patch initializes the KASan shadow region's page table and memory. There are two stages to KASan initialization:
1. At the early boot stage the whole shadow region is mapped to just one physical page (kasan_zero_page). This is done by the function kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/head-common.S).
2. After paging_init has been called, we use kasan_zero_page as the zero shadow for memory that KASan does not need to track, and we allocate new shadow space for the memory that KASan does need to track. This is done by the function kasan_init, which is called by setup_arch.
When using KASan we also need to increase THREAD_SIZE_ORDER from 1 to 2, as the extra calls for shadow memory use quite a bit of stack.
As we need to make a temporary copy of the PGD when setting up shadow memory we create a helpful PGD_SIZE definition for both LPAE and non-LPAE setups.
The KASan core code unconditionally calls pud_populate() so this needs to be changed from BUG() to do {} while (0) when building with KASan enabled.
After the initial development by Andrey Ryabinin several modifications have been made to this code:
Abbott Liu <[email protected]>:
- Add support for ARM LPAE: if LPAE is enabled, the KASan shadow region's mapping table needs to be copied in the pgd_alloc() function.
- Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate and kasan_pgd_populate from the .meminit.text section to the .init.text section. Reported by Florian Fainelli <[email protected]>
Linus Walleij <[email protected]>:
- Drop the custom manipulation of TTBR0 and just use cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th level page table folding.
- Rewrite the entire page directory and page entry initialization sequence to be recursive, based on arm64's kasan_init.c.
Ard Biesheuvel <[email protected]>:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.
Co-developed-by: Andrey Ryabinin <[email protected]> Co-developed-by: Abbott Liu <[email protected]> Co-developed-by: Ard Biesheuvel <[email protected]>
Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Acked-by: Mike Rapoport <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> # QEMU/KVM/mach-virt/LPAE/8G Tested-by: Florian Fainelli <[email protected]> # Brahma SoCs Tested-by: Ahmad Fatoum <[email protected]> # i.MX6Q Reported-by: Russell King - ARM Linux <[email protected]> Reported-by: Florian Fainelli <[email protected]> Signed-off-by: Andrey Ryabinin <[email protected]> Signed-off-by: Abbott Liu <[email protected]> Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Linus Walleij <[email protected]> Signed-off-by: Russell King <[email protected]>
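For background on what the shadow mapping above has to back, a hedged sketch of the generic KASan address translation; the 1:8 scale is generic KASan behaviour, while the offset value here is a placeholder, not ARM's actual KASAN_SHADOW_OFFSET:

    #define KASAN_SHADOW_SCALE_SHIFT  3              /* 1 shadow byte per 8 bytes */
    #define KASAN_SHADOW_OFFSET       0xb6e00000UL   /* placeholder value */

    /* Every 8-byte granule of memory is described by one shadow byte. */
    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                            + KASAN_SHADOW_OFFSET);
    }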
# d6d51a96 | 25-Oct-2020 | Linus Walleij <[email protected]>
ARM: 9014/2: Replace string mem* functions for KASan
Functions like memset()/memmove()/memcpy() do a lot of memory accesses.
If a bad pointer is passed to one of these functions it is important to catch this. Compiler instrumentation cannot do this since these functions are written in assembly.
KASan replaces these memory functions with instrumented variants.
The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them.
The original functions have aliases with a '__' prefix in their name, so we can call the non-instrumented variant if needed.
We must use __memcpy()/__memset() in place of memcpy()/memset() when we copy .data to RAM and when we clear .bss, because kasan_early_init cannot be called before the initialization of .data and .bss.
For the kernel compression code and the EFI libstub's custom string libraries we need a special quirk: even if these are built without KASan enabled, they rely on the global headers for their custom string libraries, which means that e.g. memcpy() will be defined to __memcpy() and we get link failures. Since these implementations are written in C rather than assembly we use e.g. __alias(memcpy) to redirect any users back to the local implementation.
Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: [email protected] Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> # QEMU/KVM/mach-virt/LPAE/8G Tested-by: Florian Fainelli <[email protected]> # Brahma SoCs Tested-by: Ahmad Fatoum <[email protected]> # i.MX6Q Reported-by: Russell King - ARM Linux <[email protected]> Signed-off-by: Ahmad Fatoum <[email protected]> Signed-off-by: Abbott Liu <[email protected]> Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: Linus Walleij <[email protected]> Signed-off-by: Russell King <[email protected]>
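A hedged C sketch of the symbol scheme described above; the real __memcpy is ARM assembly, and the byte-wise body here is only a stand-in to make the weak/alias pattern concrete:

    #include <stddef.h>

    /* Raw, uninstrumented copy, always reachable under its '__' name. */
    void *__memcpy(void *dst, const void *src, size_t n)
    {
            char *d = dst;
            const char *s = src;

            while (n--)
                    *d++ = *s++;
            return dst;
    }

    /* Weak default: KASan's instrumented memcpy() overrides it at link time. */
    void *memcpy(void *dst, const void *src, size_t n)
            __attribute__((weak, alias("__memcpy")));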
Revision tags: v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3 |
# 4c0742f6 | 10-Oct-2019 | Vladimir Murzin <[email protected]>
ARM: 8914/1: NOMMU: Fix exc_ret for XIP
It was reported that 72cd4064fcca "NOMMU: Toggle only bits in EXC_RETURN we are really care of" breaks the NOMMU+XIP combination. It happens because the saved EXC_RETURN gets overwritten when the data section is relocated.
The fix is to propagate EXC_RETURN via a register and let the relocation code commit that value to memory.
Fixes: 72cd4064fcca ("ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care of") Reported-by: afzal mohammed <[email protected]> Tested-by: afzal mohammed <[email protected]> Signed-off-by: Vladimir Murzin <[email protected]> Signed-off-by: Russell King <[email protected]>
Revision tags: v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |
# d2912cb1 | 04-Jun-2019 | Thomas Gleixner <[email protected]>
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
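Concretely, each converted file trades the quoted boilerplate paragraph for a single tag; shown here with C++-style comment syntax, one of the comment forms such treewide conversions use:

    // SPDX-License-Identifier: GPL-2.0-only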
Revision tags: v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6 |
# 899a42f8 | 19-Jul-2018 | Russell King <[email protected]>
ARM: make lookup_processor_type() non-__init
Move lookup_processor_type() out of the __init section so it is callable from (eg) the secondary startup code during hotplug.
Reviewed-by: Julien Thierry <[email protected]> Signed-off-by: Russell King <[email protected]>
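A hedged C-level sketch of what the move amounts to; the simplified __init definition and the prototype are illustrative (the function itself is assembly in head-common.S):

    /* Simplified: the kernel's real __init carries extra attributes. */
    #define __init  __attribute__((__section__(".init.text")))

    struct proc_info_list;

    /* Before: in .init.text, which is freed after boot, so unusable from
     * secondary-CPU bring-up.  After: plain .text, callable at any time. */
    struct proc_info_list *lookup_processor_type(unsigned int cpuid);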
Revision tags: v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9 |
# ff5fdafc | 19-Jan-2018 | Nicolas Pitre <[email protected]>
ARM: 8745/1: get rid of __memzero()
The __memzero assembly code is almost identical to memset's except for two orr instructions. The runtime performance of __memzero(p, n) and memset(p, 0, n) is accordingly almost identical.
However, the memset() macro that guards against a zero length and calls __memzero at compile time when the fill value is a constant zero interferes with compiler optimizations.
Arnd found that the test against a zero length brings up some new warnings with gcc v8:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82103
Successively removing the test against a zero length and the call to __memzero produces the following kernel sizes for defconfig with gcc 6:
        text    data    bss      dec     hex  filename
    12248142 6278960 413588 18940690 1210312  vmlinux.orig
    12244474 6278960 413588 18937022 120f4be  vmlinux.no_zero_test
    12239160 6278960 413588 18931708 120dffc  vmlinux.no_memzero
So it is probably not worth keeping __memzero around given that the compiler can do a better job at inlining trivial memset(p,0,n) on its own. And the memset code already handles a zero length just fine.
Suggested-by: Arnd Bergmann <[email protected]> Signed-off-by: Nicolas Pitre <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Signed-off-by: Russell King <[email protected]>
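A hedged reconstruction of the kind of guarding macro being removed, built from the description above rather than copied from the old arch/arm header:

    /* Divert constant-zero fills to __memzero and skip the call entirely
     * for n == 0 -- the two tests the optimizer could not see through.
     * (The inner memset does not recurse: a function-like macro is not
     * re-expanded within its own expansion.) */
    #define memset(p, v, n)                                          \
    ({                                                               \
            void *__p = (p); size_t __n = (n);                       \
            if (__n != 0) {                                          \
                    if (__builtin_constant_p(v) && (v) == 0)         \
                            __memzero(__p, __n);                     \
                    else                                             \
                            memset(__p, (v), __n);                   \
            }                                                        \
            __p;                                                     \
    })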
Revision tags: v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4 |
# 59b6359d | 03-Oct-2017 | Geert Uytterhoeven <[email protected]>
ARM: 8702/1: head-common.S: Clear lr before jumping to start_kernel()
If CONFIG_DEBUG_LOCK_ALLOC=y, the kernel log is spammed with a few hundred identical messages:
    unwind: Unknown symbol address c0800300
    unwind: Index not found c0800300
c0800300 is the return address from the last subroutine call (to __memzero()) in __mmap_switched(). Apparently having this address in the link register confuses the unwinder.
To fix this, reset the link register to zero before jumping to start_kernel().
Fixes: 9520b1a1b5f7a348 ("ARM: head-common.S: speed up startup code") Suggested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Geert Uytterhoeven <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
Revision tags: v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7 |
# ca8b5d97 | 25-Aug-2017 | Nicolas Pitre <[email protected]>
ARM: XIP kernel: store .data compressed in ROM
The .data segment stored in ROM is only copied to RAM once at boot time and never referenced afterwards. This is arguably a suboptimal usage of ROM resources.
This patch allows for compressing the .data segment before storing it into ROM and decompressing it to RAM rather than simply copying it, saving on precious ROM space.
Because global data is not available yet (obviously) we must allocate decompressor workspace memory on the stack. The .bss area is used as a stack area for that purpose before it is cleared. The required stack frame is 9568 bytes for __inflate_kernel_data() alone, so make sure the .bss is large enough to cope with that plus extra room for called functions or fail the build.
Those numbers were picked arbitrarily based on the above 9568 byte stack frame:
- 10240 (2.5 * PAGE_SIZE): used to override -Wframe-larger-than, whose default value is 1024.
- 12288 (3 * PAGE_SIZE): minimum .bss size to contain the stack.
Signed-off-by: Nicolas Pitre <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Chris Brandt <[email protected]>
# 9520b1a1 | 24-Aug-2017 | Nicolas Pitre <[email protected]>
ARM: head-common.S: speed up startup code
Let's use optimized routines such as memcpy to copy .data and memzero to clear .bss in the startup code instead of doing it one word at a time. Those routines don't use any global data so they're safe to use even if .data and .bss segments are not initialized.
In the .data copy case a temporary stack is installed in the .bss area, as the actual kernel stack is located within the copied data area. The XIP kernel linker script ensures an 8 byte alignment for that purpose.
Finally, surround the .data copy and related pointers with CONFIG_XIP_KERNEL to make it obvious what it is all about. This will allow for further cleanups in the non-XIP linker script.
Signed-off-by: Nicolas Pitre <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Chris Brandt <[email protected]>
Revision tags: v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6, v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8, v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1, v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7, v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7, v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3, v4.7-rc2, v4.7-rc1, v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5, v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1, v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5, v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6, v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2, v4.1-rc1, v4.0, v4.0-rc7, v4.0-rc6, v4.0-rc5, v4.0-rc4, v4.0-rc3, v4.0-rc2, v4.0-rc1, v3.19, v3.19-rc7, v3.19-rc6, v3.19-rc5, v3.19-rc4, v3.19-rc3, v3.19-rc2, v3.19-rc1, v3.18, v3.18-rc7, v3.18-rc6, v3.18-rc5, v3.18-rc4, v3.18-rc3, v3.18-rc2, v3.18-rc1, v3.17, v3.17-rc7, v3.17-rc6, v3.17-rc5, v3.17-rc4, v3.17-rc3, v3.17-rc2, v3.17-rc1, v3.16, v3.16-rc7, v3.16-rc6, v3.16-rc5, v3.16-rc4 |
# 6ebbf2ce | 30-Jun-2014 | Russell King <[email protected]>
ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and the ARM architecture manual (section A.4.1.1) strongly recommends using this sequence.
We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction.
Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection.
Reported-by: Will Deacon <[email protected]> Tested-by: Stephen Warren <[email protected]> # Tegra Jetson TK1 Tested-by: Robert Jarzmik <[email protected]> # mioa701_bootresume.S Tested-by: Andrew Lunn <[email protected]> # Kirkwood Tested-by: Shawn Guo <[email protected]> Tested-by: Tony Lindgren <[email protected]> # OMAPs Tested-by: Gregory CLEMENT <[email protected]> # Armada XP, 375, 385 Acked-by: Sekhar Nori <[email protected]> # DaVinci Acked-by: Christoffer Dall <[email protected]> # kvm/hyp Acked-by: Haojian Zhuang <[email protected]> # PXA3xx Acked-by: Stefano Stabellini <[email protected]> # Xen Tested-by: Uwe Kleine-König <[email protected]> # ARMv7M Tested-by: Simon Horman <[email protected]> # Shmobile Signed-off-by: Russell King <[email protected]>
Revision tags: v3.16-rc3, v3.16-rc2, v3.16-rc1, v3.15, v3.15-rc8, v3.15-rc7, v3.15-rc6, v3.15-rc5, v3.15-rc4, v3.15-rc3, v3.15-rc2, v3.15-rc1 |
# 0aeb3408 | 13-Apr-2014 | Russell King <[email protected]>
ARM: remove global cr_no_alignment
cr_no_alignment is really only used by the alignment code. Since we no longer change the setting of cr_alignment after boot, we can localise this to alignment.c
Signed-off-by: Russell King <[email protected]>
Revision tags: v3.14, v3.14-rc8, v3.14-rc7, v3.14-rc6, v3.14-rc5, v3.14-rc4 |
# b3634575 | 18-Feb-2014 | Thomas Petazzoni <[email protected]>
ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU
Currently, when the kernel is configured with LPAE support, but the CPU doesn't support it, the error message is fairly cryptic:
Error: unrecognized/unsupported processor variant (0x561f5811).
This message is normally shown when there is an issue when comparing the processor ID (CP15 0, c0, c0) with the values/masks described in proc-v7.S. However, the same message is displayed when LPAE support is enabled in the kernel configuration but not available in the CPU, after looking at ID_MMFR0 (CP15 0, c0, c1, 4). Having the same error message is highly misleading.
This commit improves this by showing a different error message when this situation occurs:
Error: Kernel with LPAE support, but CPU does not support LPAE.
Signed-off-by: Thomas Petazzoni <[email protected]> Signed-off-by: Russell King <[email protected]>
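A hedged C rendering of the CPU-side test behind the new message; the ID_MMFR0 encoding is the one quoted above, while the "VMSA field >= 5 means LPAE" threshold is an assumption drawn from the ARMv7 ARM (the real check lives in assembly):

    static inline int cpu_supports_lpae(void)
    {
            unsigned int mmfr0;

            /* ID_MMFR0: CP15 0, c0, c1, 4, as quoted in the text above */
            __asm__("mrc p15, 0, %0, c0, c1, 4" : "=r" (mmfr0));

            /* VMSA field, bits [3:0]; 5 = VMSAv7 with long descriptors */
            return (mmfr0 & 0xf) >= 5;
    }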
Revision tags: v3.14-rc3, v3.14-rc2, v3.14-rc1, v3.13, v3.13-rc8, v3.13-rc7, v3.13-rc6, v3.13-rc5, v3.13-rc4, v3.13-rc3, v3.13-rc2, v3.13-rc1, v3.12, v3.12-rc7, v3.12-rc6, v3.12-rc5, v3.12-rc4, v3.12-rc3, v3.12-rc2, v3.12-rc1, v3.11, v3.11-rc7, v3.11-rc6, v3.11-rc5, v3.11-rc4, v3.11-rc3, v3.11-rc2, v3.11-rc1, v3.10, v3.10-rc7 |
# 8bd26e3a | 17-Jun-2013 | Paul Gortmaker <[email protected]>
arm: delete __cpuinit/__CPUINIT usage from all ARM users
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. There were also two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text"); these are removed here as well.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Signed-off-by: Paul Gortmaker <[email protected]>
# 8c69d7af | 05-Jul-2013 | Stephen Warren <[email protected]>
ARM: 7780/1: add missing linker section markup to head-common.S
Macro __INIT is used to place various code in head-common.S into the init section. This should be matched by a closing __FINIT. Also, add an explicit ".text" to ensure subsequent code is placed into the correct section; __FINIT is simply a closing marker to match __INIT and doesn't guarantee to revert to .text.
This historically caused no problem, because macro __CPUINIT was used at the exact location where __FINIT was missing, which then placed following code into the cpuinit section. However, with commit 22f0a2736 "init.h: remove __cpuinit sections from the kernel" applied, __CPUINIT becomes a no-op, thus leaving all this code in the init section, rather than the regular text section. This caused issues such as secondary CPU boot failures or crashes.
Signed-off-by: Stephen Warren <[email protected]> Acked-by: Paul Gortmaker <[email protected]> Signed-off-by: Russell King <[email protected]>
Revision tags: v3.10-rc6, v3.10-rc5, v3.10-rc4, v3.10-rc3, v3.10-rc2, v3.10-rc1, v3.9, v3.9-rc8, v3.9-rc7, v3.9-rc6, v3.9-rc5, v3.9-rc4, v3.9-rc3, v3.9-rc2, v3.9-rc1, v3.8, v3.8-rc7, v3.8-rc6, v3.8-rc5, v3.8-rc4, v3.8-rc3, v3.8-rc2, v3.8-rc1, v3.7, v3.7-rc8, v3.7-rc7, v3.7-rc6, v3.7-rc5, v3.7-rc4, v3.7-rc3, v3.7-rc2, v3.7-rc1, v3.6, v3.6-rc7, v3.6-rc6, v3.6-rc5, v3.6-rc4, v3.6-rc3, v3.6-rc2, v3.6-rc1, v3.5, v3.5-rc7, v3.5-rc6, v3.5-rc5, v3.5-rc4, v3.5-rc3, v3.5-rc2, v3.5-rc1, v3.4, v3.4-rc7, v3.4-rc6, v3.4-rc5, v3.4-rc4, v3.4-rc3, v3.4-rc2, v3.4-rc1, v3.3, v3.3-rc7, v3.3-rc6, v3.3-rc5, v3.3-rc4, v3.3-rc3, v3.3-rc2, v3.3-rc1 |
# b849a60e | 16-Jan-2012 | Uwe Kleine-König <[email protected]>
ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15
This makes cr_alignment a constant 0 to break code that tries to modify the value, as such code is likely built on wrong assumptions when CONFIG_CPU_CP15 isn't defined. For code that only reads the value, 0 is a more or less fine value to report.
Signed-off-by: Uwe Kleine-König <[email protected]> Message-Id: [email protected] (v8)
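A hedged sketch of the resulting arrangement; the CONFIG-guarded shape follows the commit's description, though the exact kernel definition differs:

    #ifdef CONFIG_CPU_CP15
    extern unsigned long cr_alignment;    /* mirrors the CP15 c1 setting */
    #else
    /* No CP15: readers see 0, and writers no longer have an lvalue, so
     * code that tries to modify the value breaks at compile time. */
    #define cr_alignment 0UL
    #endif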
Revision tags: v3.2, v3.2-rc7, v3.2-rc6, v3.2-rc5, v3.2-rc4, v3.2-rc3, v3.2-rc2, v3.2-rc1, v3.1, v3.1-rc10, v3.1-rc9, v3.1-rc8, v3.1-rc7, v3.1-rc6, v3.1-rc5, v3.1-rc4, v3.1-rc3, v3.1-rc2, v3.1-rc1, v3.0, v3.0-rc7, v3.0-rc6, v3.0-rc5, v3.0-rc4, v3.0-rc3, v3.0-rc2, v3.0-rc1, v2.6.39, v2.6.39-rc7, v2.6.39-rc6 |
# 4c2896e8 | 28-Apr-2011 | Grant Likely <[email protected]>
arm/dt: Make __vet_atags also accept a dtb image
The dtb is passed to the kernel via register r2, which is the same method that is used to pass an atags pointer. This patch modifies __vet_atags to not clear r2 when it encounters a dtb image.
v2: fixed bugs pointed out by Nicolas Pitre
Tested-by: Tony Lindgren <[email protected]> Signed-off-by: Grant Likely <[email protected]>
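A hedged C rendering of the vetting logic described above, assuming a little-endian CPU; the constants are the well-known ATAG_CORE and FDT magic values rather than ones copied from the patch, and the real __vet_atags is assembly:

    #include <stdint.h>

    #define ATAG_CORE  0x54410001u    /* first tag of a valid ATAG list */
    #define DTB_MAGIC  0xd00dfeedu    /* FDT magic, stored big-endian */

    static int vet_boot_data(const uint32_t *r2)
    {
            if (!r2 || ((uintptr_t)r2 & 3))
                    return 0;    /* NULL or unaligned: reject */
            if (__builtin_bswap32(r2[0]) == DTB_MAGIC)
                    return 1;    /* r2 points at a dtb image */
            if (r2[1] == ATAG_CORE)
                    return 1;    /* ATAG header: size word, then tag id */
            return 0;
    }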
Revision tags: v2.6.39-rc5, v2.6.39-rc4, v2.6.39-rc3, v2.6.39-rc2, v2.6.39-rc1, v2.6.38, v2.6.38-rc8, v2.6.38-rc7, v2.6.38-rc6, v2.6.38-rc5, v2.6.38-rc4, v2.6.38-rc3, v2.6.38-rc2, v2.6.38-rc1 |
# 6fc31d54 | 12-Jan-2011 | Russell King <[email protected]>
ARM: Defer lookup of machine_type to setup.c
Since the debug macros no longer depend on the machine type information, the machine type lookup can be deferred to setup_arch() in setup.c which simplifies the code somewhat.
We also move the __error_a functionality into setup.c, displaying a message via the LL debug code when a bad machine ID is passed to the kernel. This is also logged to the kernel ring buffer, which makes it possible to retrieve the message via a debugger.
Original idea from Grant Likely.
Acked-by: Grant Likely <[email protected]> Tested-by: Tony Lindgren <[email protected]> Signed-off-by: Russell King <[email protected]>
# cb4d3eae | 15-Jan-2011 | Russell King <[email protected]>
ARM: fix missing branch in __error_a
When DEBUG_LL is not set, we don't want __error_a re-entering __lookup_machine_type - we want it to go to the error function. This used to be the case before we reorganized the layout for hotplug cpu, as we used to fall through to __error. With the changed layout, we need an explicit branch here instead.
Signed-off-by: Russell King <[email protected]>
Revision tags: v2.6.37, v2.6.37-rc8, v2.6.37-rc7, v2.6.37-rc6, v2.6.37-rc5, v2.6.37-rc4, v2.6.37-rc3, v2.6.37-rc2, v2.6.37-rc1, v2.6.36, v2.6.36-rc8, v2.6.36-rc7 |
# 80924ac5 | 04-Oct-2010 | Russell King <[email protected]>
ARM: cleanup lookup_machine_type data and ensure these are placed in __HEAD
Signed-off-by: Russell King <[email protected]>
# c083c660 | 04-Oct-2010 | Russell King <[email protected]>
ARM: hotplug cpu: move __error and __error_p to cpuinit section
__error and __error_p may be used by secondary CPUs, so these need to be in the cpuinit section.
Signed-off-by: Russell King <[email protected]>
# 17bb5e2c | 04-Oct-2010 | Russell King <[email protected]>
ARM: move __mmap_switched, C-API functions to init section
Move these functions, which are only ever used during boot CPU initialization, to the init section.
Signed-off-by: Russell King <[email protected]>
# a4ae4134 | 04-Oct-2010 | Russell King <[email protected]>
ARM: cleanup boot cpu calling __mmap_switched
This allows us to relocate __mmap_switched and associated data away from the head section.
Signed-off-by: Russell King <[email protected]>