|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1 |
|
| #
9e9b0cf9 |
| 18-Sep-2024 |
Harith G <[email protected]> |
ARM: 9420/1: smp: Fix SMP for xip kernels
Fix the physical address calculation of the following to get SMP working on XIP kernels:
- secondary_data, needed for secondary CPU bootup
- the secondary_startup address passed through PSCI
- the identity-mapped code region needed for enabling the MMU on secondary CPUs
Signed-off-by: Harith George <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
| #
ed6cbe6e |
| 18-Sep-2024 |
Harith G <[email protected]> |
ARM: 9419/1: mm: Fix kernel memory mapping for xip kernels
The patchset introducing the kernel_sec_start/end variables to separate the kernel/lowmem memory mappings broke the mapping of the kernel memory for XIP kernels.
The kernel_sec_start/end variables are in the RO area before the MMU is switched on for XIP kernels, so they cannot be set early in boot in head.S. Fix this by setting them after the MMU is switched on. XIP kernels need two different mappings for kernel text (starting at CONFIG_XIP_PHYS_ADDR) and data (starting at CONFIG_PHYS_OFFSET). Also, move the kernel code mapping from devicemaps_init() to map_kernel().
Fixes: a91da5457085 ("ARM: 9089/1: Define kernel physical section start and end") Signed-off-by: Harith George <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
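A minimal C sketch of the dual mapping this describes, with illustrative addresses standing in for CONFIG_XIP_PHYS_ADDR and CONFIG_PHYS_OFFSET (the struct is hypothetical, not the kernel's real map descriptor):

    #include <stdio.h>

    #define XIP_PHYS_ADDR 0x08000000UL  /* assumption: NOR flash base */
    #define PHYS_OFFSET   0x20000000UL  /* assumption: DRAM base */

    /* hypothetical descriptor, for illustration only */
    struct kmap { unsigned long phys, virt, length; };

    int main(void)
    {
        /* Text executes in place from flash while data lives in RAM,
         * so an XIP kernel needs two disjoint mappings instead of one. */
        struct kmap text = { XIP_PHYS_ADDR, 0xc0000000UL, 0x00400000UL };
        struct kmap data = { PHYS_OFFSET,   0xc0800000UL, 0x00400000UL };

        printf("text: PA 0x%08lx -> VA 0x%08lx\n", text.phys, text.virt);
        printf("data: PA 0x%08lx -> VA 0x%08lx\n", data.phys, data.virt);
        return 0;
    }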
|
|
Revision tags: v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1 |
|
| #
a9ff6961 |
| 02-Jun-2022 |
Linus Walleij <[email protected]> |
ARM: mm: Make virt_to_pfn() a static inline
Making virt_to_pfn() a static inline taking a strongly typed (const void *) makes the contract of passing a pointer of that type to the function explicit, and exposes any misuse of the macro virt_to_pfn() acting polymorphically and accepting many types such as (void *), (uintptr_t) or (unsigned long) as arguments without warnings.
Doing this is a bit intrusive: virt_to_pfn() requires PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and these are defined in <asm/page.h>, so it must be included *before* <asm/memory.h>.
The use of macros was obscuring the unclear inclusion order here, as the macros would eventually be resolved, but a static inline like this cannot be compiled with unresolved macros.
The naive solution of including <asm/page.h> at the top of <asm/memory.h> does not work, because <asm/memory.h> sometimes includes <asm/page.h> at the end of itself, which would create a confusing inclusion loop. So instead, take the approach of always unconditionally including <asm/page.h> at the end of <asm/memory.h>.
arch/arm uses <asm/memory.h> explicitly in a lot of places; however, it turns out that if we just unconditionally include <asm/memory.h> into <asm/page.h> and switch all inclusions of <asm/memory.h> to <asm/page.h> instead, we enforce the right order and <asm/memory.h> will always have access to the definitions.
Put an inclusion guard in place making it impossible to include <asm/memory.h> explicitly.
Link: https://lore.kernel.org/linux-mm/[email protected]/ Signed-off-by: Linus Walleij <[email protected]>
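A minimal standalone C sketch of the macro-to-inline conversion (placeholder values; not the kernel's verbatim code):

    #include <stdio.h>

    #define PAGE_OFFSET     0xc0000000UL /* assumption: 3G/1G split */
    #define PAGE_SHIFT      12
    #define PHYS_PFN_OFFSET 0x60000UL    /* assumption: DRAM at 0x60000000 */

    /* Old style: the macro is polymorphic and silently accepts
     * unsigned long, uintptr_t, void *, ... */
    #define virt_to_pfn_macro(kaddr) \
        ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET)

    /* New style: the static inline pins the argument to (const void *),
     * so passing an integer type now triggers a compiler warning. */
    static inline unsigned long virt_to_pfn(const void *p)
    {
        unsigned long kaddr = (unsigned long)p;

        return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
    }

    int main(void)
    {
        void *vaddr = (void *)0xc0100000UL;

        printf("pfn = 0x%lx\n", virt_to_pfn(vaddr));
        return 0;
    }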
|
| #
50f6f34e |
| 29-Sep-2022 |
Arnd Bergmann <[email protected]> |
ARM: footbridge: remove CATS
Nobody seems to have a CATS machine any more, so remove it now, leaving only NetWinder and EBSA285.
Cc: Russell King <[email protected]> Acked-by: Linus Walleij <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
|
| #
39114538 |
| 05-Jul-2022 |
Mike Rapoport <[email protected]> |
ARM: head.S: rename PMD_ORDER to PMD_ENTRY_ORDER
PMD_ORDER denotes order of magnitude for a PMD entry, i.e. PMD entry size is 2 ^ PMD_ORDER.
Rename PMD_ORDER to PMD_ENTRY_ORDER to allow a generic definition of PMD_ORDER as order of a PMD allocation: (PMD_SHIFT - PAGE_SHIFT).
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Russell King (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
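A short sketch of the two distinct meanings, with assumed illustrative values (not the real arch definitions):

    #define PAGE_SHIFT      12
    #define PMD_SHIFT       21   /* assumption: 2 MiB PMDs, LPAE-style */

    /* Per-entry order: one PMD table entry occupies 2 ^ PMD_ENTRY_ORDER
     * bytes. This is what head.S actually needs. */
    #define PMD_ENTRY_ORDER 3    /* assumption: 8-byte entries */

    /* Freed-up generic meaning: order of a PMD allocation in pages. */
    #define PMD_ORDER       (PMD_SHIFT - PAGE_SHIFT)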
|
|
Revision tags: v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2 |
|
| #
8b806b82 |
| 24-Jan-2022 |
Ard Biesheuvel <[email protected]> |
ARM: mm: switch to swapper_pg_dir early for vmap'ed stack
When onlining a CPU, switch to swapper_pg_dir as soon as possible so that it is guaranteed that the vmap'ed stack is mapped before it is used.
Signed-off-by: Ard Biesheuvel <[email protected]>
|
|
Revision tags: v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7 |
|
| #
00568b8a |
| 21-Oct-2021 |
LABBE Corentin <[email protected]> |
ARM: 9148/1: handle CONFIG_CPU_ENDIAN_BE32 in arch/arm/kernel/head.S
My intel-ixp42x-welltech-epbx100 no longer boots since 4.14. This is due to commit 463dbba4d189 ("ARM: 9104/2: Fix Keystone 2 kernel mapping regression"), which forgot to handle CONFIG_CPU_ENDIAN_BE32 as a possible BE config.
Suggested-by: Krzysztof Hałasa <[email protected]> Fixes: 463dbba4d189 ("ARM: 9104/2: Fix Keystone 2 kernel mapping regression") Signed-off-by: Corentin Labbe <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
|
|
Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2 |
|
| #
19f29aeb |
| 18-Sep-2021 |
Keith Packard <[email protected]> |
ARM: smp: Pass task to secondary_start_kernel
This avoids needing to compute the task pointer in this function, which will no longer be possible once we move thread_info off the stack.
Signed-off-by: Keith Packard <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]>
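A minimal sketch of the resulting shape, assuming simplified types (not the verbatim kernel code):

    struct task_struct;

    static struct task_struct *current_task; /* stand-in for 'current' */

    /* Before, this was secondary_start_kernel(void) and the task was
     * derived from the stack pointer. Now the boot path hands the idle
     * task in directly. */
    void secondary_start_kernel(struct task_struct *task)
    {
        /* No SP-based thread_info lookup, which becomes impossible
         * once thread_info moves off the stack. */
        current_task = task;
        /* ... per-CPU bring-up continues ... */
    }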
|
|
Revision tags: v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6 |
|
| #
463dbba4 |
| 09-Aug-2021 |
Linus Walleij <[email protected]> |
ARM: 9104/2: Fix Keystone 2 kernel mapping regression
This fixes a Keystone 2 regression discovered as a side effect of defining and passing the physical start/end sections of the kernel to the MMU remapping code.
As the Keystone applies an offset to all physical addresses, including those identified and patched by phys2virt, we fail to account for this offset in the kernel_sec_start and kernel_sec_end variables.
Further, these offsets can extend into the 64-bit range on LPAE systems such as the Keystone 2.
Fix it like this:
- Extend kernel_sec_start and kernel_sec_end to be 64-bit
- Add the offset also to kernel_sec_start and kernel_sec_end
As passing kernel_sec_start and kernel_sec_end as 64-bit invariably incurs BE8 endianness issues, I have attempted to dry-code around these.
Tested on the Vexpress QEMU model both with and without LPAE enabled.
Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately") Reported-by: Nishanth Menon <[email protected]> Suggested-by: Russell King <[email protected]> Tested-by: Grygorii Strashko <[email protected]> Tested-by: Nishanth Menon <[email protected]> Signed-off-by: Linus Walleij <[email protected]> Signed-off-by: Russell King (Oracle) <[email protected]>
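A C-level sketch of the two fixes (the real variables live in head.S; the helper and parameter names here are illustrative):

    #include <stdint.h>

    /* Extended to 64-bit so LPAE physical addresses fit. */
    uint64_t kernel_sec_start;
    uint64_t kernel_sec_end;

    void record_kernel_section(uint64_t start, uint64_t end,
                               uint64_t phys_offset)
    {
        /* Keystone 2 offsets all physical addresses, so the offset
         * must be applied to the section bounds as well. */
        kernel_sec_start = start + phys_offset;
        kernel_sec_end   = end + phys_offset;
    }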
|
|
Revision tags: v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5 |
|
| #
a91da545 |
| 03-Jun-2021 |
Linus Walleij <[email protected]> |
ARM: 9089/1: Define kernel physical section start and end
When we are mapping the initial sections in head.S we know very well where the start and end of the kernel image in physical memory are placed. Later on it gets hard to determine this.
Save the information into two variables named kernel_sec_start and kernel_sec_end for convenience for later work involving the physical start and end of the kernel. These variables are section-aligned corresponding to the early section mappings set up in head.S.
Signed-off-by: Linus Walleij <[email protected]> Signed-off-by: Russell King <[email protected]>
|
| #
b78f63f4 |
| 03-Jun-2021 |
Linus Walleij <[email protected]> |
ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET
We want to be able to compile the kernel into an address different from PAGE_OFFSET (start of lowmem) + TEXT_OFFSET, so start to pry apart the address of where the kernel is located from the address where the lowmem is located by defining and using KERNEL_OFFSET in a few key places.
Signed-off-by: Linus Walleij <[email protected]> Signed-off-by: Russell King <[email protected]>
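A sketch of the split, with an assumed lowmem base (illustrative, not the header's exact contents):

    /* The kernel's link address gets its own symbol instead of being
     * hardwired to the start of lowmem. */
    #define PAGE_OFFSET    0xc0000000UL   /* assumption: start of lowmem */
    #define KERNEL_OFFSET  (PAGE_OFFSET)  /* may diverge from lowmem later */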
|
|
Revision tags: v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5 |
|
| #
10fce53c |
| 17-Nov-2020 |
Ard Biesheuvel <[email protected]> |
ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section
The early ATAGS/DT mapping code uses SECTION_SHIFT to mask low order bits of R2, and decides that no ATAGS/DTB were provided if the resulting value is 0x0.
This means that on systems where DRAM starts at 0x0 (such as Raspberry Pi), no explicit mapping of the DT will be created if R2 points into the first 1 MB section of memory. This was not a problem before, because the decompressed kernel is loaded at the base of DRAM and mapped using sections as well, and so as long as the DT is referenced via a virtual address that uses the same translation (the linear map, in this case), things work fine.
However, commit 7a1be318f579 ("9012/1: move device tree mapping out of linear region") changes this, and now the DT is referenced via a virtual address that is disjoint from the linear mapping of DRAM, and so we need the early code to create the DT mapping unconditionally.
So let's create the early DT mapping for any value of R2 != 0x0.
Reported-by: "kernelci.org bot" <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6 |
|
| #
3bcf906b |
| 14-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET
Replace the open coded arithmetic with a simple adr_l/sub pair. This removes some open coded arithmetic involving virtual addresses, avoids literal pools on v7+, and slightly reduces the footprint of the code.
Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
|
| #
59d2f282 |
| 14-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: head: use PC-relative insn sequence for __smp_alt
Now that calling __do_fixup_smp_on_up() can be done without passing the physical-to-virtual offset in r3, we can replace the open coded PC relative offset calculations with a pair of adr_l invocations. This removes some open coded arithmetic involving virtual addresses, avoids literal pools on v7+, and slightly reduces the footprint of the code.
Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
|
| #
450abd38 |
| 14-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: kernel: use relative references for UP/SMP alternatives
Currently, the .alt.smp.init section contains the virtual addresses of the patch sites. Since patching may occur both before and after switching into virtual mode, this requires some manual handling of the address when applying the UP alternative.
Let's simplify this by using relative offsets in the table entries: this allows us to simply add each entry's address to its contents, regardless of whether we are running in virtual mode or not.
Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
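A sketch of the relative-entry scheme in C (illustrative structures, not the kernel's exact ones):

    #include <stdint.h>

    struct alt_entry {
        int32_t offset; /* patch site address minus this entry's address */
    };

    static inline uint32_t *alt_site(struct alt_entry *e)
    {
        /* Adding the entry's own address yields the patch site whether
         * we run with the MMU off (physical) or on (virtual), so no
         * manual address fixup is needed. */
        return (uint32_t *)((uintptr_t)e + e->offset);
    }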
|
| #
91580f0d |
| 14-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: head.S: use PC-relative insn sequence for secondary_data
Replace the open coded PC relative offset calculations with adr_l and ldr_l invocations. This removes some open coded arithmetic involving virtual addresses, avoids literal pools on v7+, and slightly reduces the footprint of the code.
Note that it also removes a stale comment about the contents of r6.
Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
|
| #
172c34c9 |
| 14-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: head-common.S: use PC-relative insn sequence for idmap creation
Replace the open coded PC relative offset calculations involving __turn_mmu_on and __turn_mmu_on_end with a pair of adr_l invocations. This removes some open coded arithmetic involving virtual addresses, avoids literal pools on v7+, and slightly reduces the footprint of the code.
Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
|
| #
eae78e1a |
| 20-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: p2v: move patching code to separate assembler source file
Move the phys2virt patching code into a separate .S file before doing some work on it.
Suggested-by: Nicolas Pitre <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
|
| #
4e79f021 |
| 20-Sep-2020 |
Ard Biesheuvel <[email protected]> |
ARM: p2v: fix handling of LPAE translation in BE mode
When running in BE mode on LPAE hardware with a PA-to-VA translation that exceeds 4 GB, we patch bits 39:32 of the offset into the wrong byte of the opcode. So fix that by rotating the offset in r0 to the right by 8 bits, which will put the 8-bit immediate in bits 31:24.
Note that this will also move bit #22 in its correct place when applying the rotation to the constant #0x400000.
Fixes: d9a790df8e984 ("ARM: 7883/1: fix mov to mvn conversion in case of 64 bit phys_addr_t and BE") Acked-by: Nicolas Pitre <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
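The bit manipulation, sketched in C (the actual fix rotates r0 in the patching assembly):

    #include <stdint.h>

    static inline uint32_t ror32(uint32_t v, unsigned int n)
    {
        return (v >> n) | (v << ((32 - n) & 31));
    }

    /* Offset bits 39:32 arrive in bits 7:0 of the high word; rotating
     * right by 8 moves them into bits 31:24, where the 8-bit immediate
     * sits in the BE8 view of the opcode. The same rotation also puts
     * bit 22 of the constant 0x400000 in its proper place.
     * e.g. ror32(0x00000012, 8) == 0x12000000 */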
|
| #
7a1be318 |
| 11-Oct-2020 |
Ard Biesheuvel <[email protected]> |
ARM: 9012/1: move device tree mapping out of linear region
On ARM, setting up the linear region is tricky, given the constraints around placement and alignment of the memblocks, and how the kernel itself as well as the DT are placed in physical memory.
Let's simplify matters a bit, by moving the device tree mapping to the top of the address space, right between the end of the vmalloc region and the start of the fixmap region, and create a read-only mapping for it that is independent of the size of the linear region, and how it is organized.
Since this region was formerly used as a guard region, which will now be populated fully on LPAE builds by this read-only mapping (which will still be able to function as a guard region for stray writes), bump the start of the [underutilized] fixmap region by 512 KB as well, to ensure that there is always a proper guard region here. Doing so still leaves ample room for the fixmap space, even with NR_CPUS set to its maximum value of 32.
Tested-by: Linus Walleij <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1 |
|
| #
65fddcfc |
| 09-Jun-2020 |
Mike Rapoport <[email protected]> |
mm: reorder includes after introduction of linux/pgtable.h
The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include of the latter in the middle of the asm includes. Fix this up with the aid of the below script and manual adjustments here and there.
import sys
import re

if len(sys.argv) != 3:
    print "USAGE: %s <file> <header>" % (sys.argv[0])
    sys.exit(1)

hdr_to_move = "#include <linux/%s>" % sys.argv[2]
moved = False
in_hdrs = False

with open(sys.argv[1], "r") as f:
    lines = f.readlines()
    for _line in lines:
        line = _line.rstrip('\n')
        if line == hdr_to_move:
            continue
        if line.startswith("#include <linux/"):
            in_hdrs = True
        elif not moved and in_hdrs:
            moved = True
            print hdr_to_move
        print line
Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
|
| #
ca5999fd |
| 09-Jun-2020 |
Mike Rapoport <[email protected]> |
mm: introduce include/linux/pgtable.h
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions.
Start with moving asm-generic/pgtable.h to include/linux/pgta
mm: introduce include/linux/pgtable.h
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions.
Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h.
Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
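The resulting shape, sketched:

    /* include/linux/pgtable.h -- new home for the generic page table
     * manipulation functions. */
    #include <asm/pgtable.h>

    /* ... generic page table helpers follow here ... */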
|
|
Revision tags: v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |
|
| #
d2912cb1 |
| 04-Jun-2019 |
Thomas Gleixner <[email protected]> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
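What the replacement looks like at the top of an affected C file:

    // SPDX-License-Identifier: GPL-2.0-only
    /*
     * The one-line tag above stands in for the multi-line "This program
     * is free software; you can redistribute it and/or modify..." GPLv2
     * boilerplate that previously opened each of the 4122 files.
     */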
|
|
Revision tags: v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2 |
|
| #
bc2eca9a |
| 09-Nov-2018 |
Nicolas Pitre <[email protected]> |
ARM: 8811/1: always list both ldrd/strd registers explicitly
The ldrd and strd instructions work on a pair of consecutive registers. It is possible to specify either the first register in the pair, or both registers explicitly. Let's always do the latter to make things clearer.
Signed-off-by: Nicolas Pitre <[email protected]> Suggested-by: Robin Murphy <[email protected]> Signed-off-by: Russell King <[email protected]>
|
|
Revision tags: v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3 |
|
| #
1abd3502 |
| 26-Jul-2017 |
Russell King <[email protected]> |
ARM: align .data section
Robert Jarzmik reports that his PXA25x system fails to boot with 4.12, failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:
    0xc0019e20 <+0>:  ldr  r1, [pc, #788]
    0xc0019e24 <+4>:  ldr  r0, [r1]          <== here
with r1 containing 0xc06f82cd, which is the address of "clean_addr". Examination of the System.map shows:
    c06f22c8 D user_pmd_table
    c06f22cc d __warned.19178
    c06f22cd d clean_addr
indicating that a .data.unlikely section has appeared just before the .data section from proc-xscale.S. According to objdump -h, it appears that our assembly files default their .data alignment to 2**0, which is bad news if the preceding .data section size is not power-of-2 aligned at link time.
Add the appropriate .align directives to all assembly files in arch/arm that are missing them where we require an appropriate alignment.
Reported-by: Robert Jarzmik <[email protected]> Tested-by: Robert Jarzmik <[email protected]> Signed-off-by: Russell King <[email protected]>
|