Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7

# 51ecb29f | 11-Mar-2025 | Anshuman Khandual <[email protected]>
arm64/mm: Define PTDESC_ORDER
The address bytes shifted by a single 64-bit page table entry (at any page table level) has always been hard coded as 3 (i.e. 2^3 = 8 bytes per entry). Although intuitive, it is not very readable or easy to reason about. Besides, it is going to change with D128, where each 128-bit page table entry will shift address bytes by 4 (i.e. 2^4 = 16) instead.
Let's formalise this address bytes shift value into a new macro called PTDESC_ORDER, establishing a logical abstraction and thus improving readability as well. While here, re-organize the EARLY_LEVEL macro along with its dependents for better clarity. This does not cause any functional change. Also replace all (PAGE_SHIFT - PTDESC_ORDER) instances with PTDESC_TABLE_SHIFT.
Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Acked-by: Ard Biesheuvel <[email protected]> Reviewed-by: Ryan Roberts <[email protected]> Signed-off-by: Anshuman Khandual <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
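
A reader's sketch of the abstraction this commit describes (macro names are from the message above; the bodies are inferred from it, not copied from the patch):

    /* Each 64-bit descriptor occupies 2^3 = 8 bytes, so the per-entry
     * address-bytes shift is 3; a 128-bit D128 descriptor would use
     * 4 (2^4 = 16 bytes) instead.
     */
    #define PTDESC_ORDER		3

    /* Bits of VA resolved per table level: one table holds
     * PAGE_SIZE >> PTDESC_ORDER entries.
     */
    #define PTDESC_TABLE_SHIFT	(PAGE_SHIFT - PTDESC_ORDER)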

Revision tags: v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4

# 0f612c6e | 14-Oct-2024 | Gavin Shan <[email protected]>
arm64: head: Drop SWAPPER_TABLE_SHIFT
There are no users of SWAPPER_TABLE_SHIFT after commit 84b04d3e6bdb ("arm64: kernel: Create initial ID map from C code"). Just drop it.
Signed-off-by: Gavin Shan <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>

Revision tags: v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5

# 84b04d3e | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: kernel: Create initial ID map from C code
The asm code that creates the initial ID map is rather intricate and hard to follow. This is problematic because it makes adding support for things like LPA2 or WXN more difficult than necessary. Also, it is parameterized like the rest of the MM code to run with a configurable number of levels, which is rather pointless: all AArch64 CPUs implement support for 48-bit virtual addressing, and many systems exist with DRAM located outside of the 39-bit addressable range, which is the only smaller VA size that is widely used, so additional tricks are needed to make things work in that combination.
So let's bite the bullet, and rip out all the asm macros, and fiddly code, and replace it with a C implementation based on the newly added routines for creating the early kernel VA mappings. And while at it, create the initial ID map based on 48-bit virtual addressing as well, regardless of the number of configured levels for the kernel proper.
Note that this code may execute with the MMU and caches disabled, and is therefore not permitted to make unaligned accesses. This shouldn't generally happen in any case for the algorithm as implemented, but to be sure, let's pass -mstrict-align to the compiler just in case.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
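
A worked example (not from the patch) of why a fixed 48-bit VA works for every granule: with 64-bit descriptors each level resolves PAGE_SHIFT - 3 bits of VA, so the level count follows directly from the page size.

    #include <stdio.h>

    int main(void)
    {
            int va_bits = 48;

            for (int page_shift = 12; page_shift <= 16; page_shift += 2) {
                    int bits_per_level = page_shift - 3;    /* 512/2048/8192 entries */
                    int levels = (va_bits - page_shift + bits_per_level - 1) /
                                 bits_per_level;

                    printf("%2dK pages: %d translation levels for %d-bit VA\n",
                           1 << (page_shift - 10), levels, va_bits);
            }
            return 0;
    }

This prints 4 levels for 4K and 16K pages and 3 levels for 64K pages.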

# 34b98e55 | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels
The mapping from PGD/PUD/PMD to levels and shifts is very confusing, given that, due to folding, the shifts may be equal for different levels, if the macros are even #define'd to begin with.
In a subsequent patch, we will modify the ID mapping code to decouple the number of levels from the kernel's view of how these types are folded, so prepare for this by reformulating the macros without the use of these types.
Instead, use SWAPPER_BLOCK_SHIFT as the base quantity, and derive it from either PAGE_SHIFT or PMD_SHIFT, which (if defined at all) are defined unambiguously for a given page size, regardless of the number of configured levels.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
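
A minimal sketch of the reformulation, assuming (as the 38e4b660 entry further down this log notes) that PMD block mappings are only used with 4K base pages; the exact conditionals in the patch may differ:

    #ifdef CONFIG_ARM64_4K_PAGES
    #define SWAPPER_BLOCK_SHIFT	PMD_SHIFT	/* 2M block mappings */
    #else
    #define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT	/* 16K/64K: page mappings */
    #endif
    #define SWAPPER_BLOCK_SIZE	(UL(1) << SWAPPER_BLOCK_SHIFT)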

# e6128a8e | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: mm: Use 48-bit virtual addressing for the permanent ID map
Even though we support loading kernels anywhere in 48-bit addressable physical memory, we create the ID maps based on the number of levels that we happened to configure for the kernel VA and user VA spaces.
The reason for this is that the PGD/PUD/PMD based classification of translation levels, along with the associated folding when the number of levels is less than 5, does not permit creating a page table hierarchy of a set number of levels. This means that, for instance, on 39-bit VA kernels we need to configure an additional level above PGD level on the fly, and 36-bit VA kernels still only support 47-bit virtual addressing with this trick applied.
Now that we have a separate helper to populate page table hierarchies that does not define the levels in terms of PUDs/PMDs/etc at all, let's reuse it to create the permanent ID map with a fixed VA size of 48 bits.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>

# 8d47b8e5 | 14-Feb-2024 | Ard Biesheuvel <[email protected]>
arm64: head: allocate more pages for the kernel mapping
In preparation for switching to an early kernel mapping routine that maps each segment according to its precise boundaries, and with the correct attributes, let's allocate some extra pages for page tables for the 4k page size configuration. This is necessary because the start and end of each segment may not be aligned to the block size, and so we'll need an extra page table at each segment boundary.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
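
An upper-bound sketch of the extra tables implied (the helper and addresses below are hypothetical, not from the patch): every segment start or end that is not 2M-aligned needs a page-level table for the partial block at that boundary.

    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_SHIFT	21	/* 2M blocks with 4K pages */
    #define BLOCK_SIZE	(UINT64_C(1) << BLOCK_SHIFT)

    /* Upper bound: a segment contained in a single block shares one table. */
    static int extra_pte_tables(uint64_t start, uint64_t end)
    {
            return !!(start & (BLOCK_SIZE - 1)) + !!(end & (BLOCK_SIZE - 1));
    }

    int main(void)
    {
            /* Hypothetical segment: 2M-aligned start, 4K-aligned end. */
            printf("%d extra table(s)\n",
                   extra_pte_tables(0x40200000, 0x40415000));
            return 0;
    }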

Revision tags: v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4

# a22fc8e1 | 29-Nov-2023 | Ard Biesheuvel <[email protected]>
arm64: mm: Take potential load offset into account when KASLR is off
We enable CONFIG_RELOCATABLE even when CONFIG_RANDOMIZE_BASE is disabled, and this permits the loader (i.e., EFI) to place the kernel anywhere in physical memory as long as the base address is 64k aligned.
This means that the 'KASLR' case described in the header that defines the size of the statically allocated page tables could take effect even when CONFIG_RANDOMIZE_BASE=n. So check for CONFIG_RELOCATABLE instead.
Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
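
The shape of the fix, as a sketch: EARLY_KASLR is assumed here to be the kernel-pgtable.h helper that accounts for the possible extra page per level; the guard changes from randomization to relocatability.

    /* was: #ifdef CONFIG_RANDOMIZE_BASE */
    #ifdef CONFIG_RELOCATABLE
    #define EARLY_KASLR	(1)	/* loader may place the image anywhere */
    #else
    #define EARLY_KASLR	(0)
    #endif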

Revision tags: v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5

# 4e0bacd6 | 04-Aug-2023 | Zhang Jianhua <[email protected]>
arm64: fix build warning for ARM64_MEMSTART_SHIFT
When building with W=1, the following warning occurs.
arch/arm64/include/asm/kernel-pgtable.h:129:41: error: "PUD_SHIFT" is not defined, evaluates to 0 [-Werror=undef]
  129 | #define ARM64_MEMSTART_SHIFT PUD_SHIFT
      |                              ^~~~~~~~~
arch/arm64/include/asm/kernel-pgtable.h:142:5: note: in expansion of macro ‘ARM64_MEMSTART_SHIFT’
  142 | #if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
      |     ^~~~~~~~~~~~~~~~~~~~
The generic PUD_SHIFT is defined in include/asm-generic/pgtable-nopud.h, but the #ifndef __ASSEMBLY__ guard in that header makes it unavailable to assembly files, so the warning occurs whenever a .S file includes <asm/kernel-pgtable.h>. Move the ARM64_MEMSTART_SHIFT and ARM64_MEMSTART_ALIGN macros to arch/arm64/mm/init.c, the only place they are used, to avoid this issue.
Signed-off-by: Zhang Jianhua <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
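
A sketch of the definitions after the move, reconstructed from this and the neighbouring entries (the 16K case uses CONT_PMD_SHIFT per the ca6ece6a commit below; the alignment fallback matches the #if shown in the diagnostic), not copied from the patch:

    /* arch/arm64/mm/init.c: C only, so PUD_SHIFT from pgtable-nopud.h
     * (hidden behind #ifndef __ASSEMBLY__) is always visible here.
     */
    #if defined(CONFIG_ARM64_4K_PAGES)
    #define ARM64_MEMSTART_SHIFT	PUD_SHIFT
    #elif defined(CONFIG_ARM64_16K_PAGES)
    #define ARM64_MEMSTART_SHIFT	CONT_PMD_SHIFT
    #else
    #define ARM64_MEMSTART_SHIFT	PMD_SHIFT
    #endif

    #if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
    #define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
    #else
    #define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_SHIFT)
    #endif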

Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6

# f0af339f | 06-Jun-2023 | Joey Gouly <[email protected]>
arm64: add PTE_UXN/PTE_WRITE to SWAPPER_*_FLAGS
With PIE enabled, the swapper PTEs would have a Permission Indirection Index (PIIndex) of 0. A PIIndex of 0 is not currently used by any other PTEs.
To avoid using index 0 specifically for the swapper PTEs, mark them as PTE_UXN and PTE_WRITE, so that they map to a PAGE_KERNEL_EXEC equivalent.
This also adds PTE_WRITE to KPTI_NG_PTE_FLAGS, which was tested by booting with kpti=on.
Signed-off-by: Joey Gouly <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
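
Illustratively (the flag names are real arm64 PTE bits; the exact macro body here is an assumption, not the diff): folding PTE_UXN and PTE_WRITE into the early flags makes the swapper PTEs resolve to a PAGE_KERNEL_EXEC-equivalent permission, so they no longer select Permission Indirection index 0.

    #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED | \
    				 PTE_UXN | PTE_WRITE)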

Revision tags: v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6

# 414c109b | 06-Apr-2023 | Mark Rutland <[email protected]>
arm64: mm: always map fixmap at page granularity
Today the fixmap code largely maps elements at PAGE_SIZE granularity, but we special-case the FDT mapping such that it can be mapped with 2M block mappings when 4K pages are in use. The original rationale for this was simplicity, but it has some unfortunate side-effects, and complicates portions of the fixmap code (i.e. is not so simple after all).
The FDT can be up to 2M in size but is only required to have 8-byte alignment, and so it may straddle a 2M boundary. Thus when using 2M block mappings we may map up to 4M of memory surrounding the FDT. This is unfortunate as most of that memory will be unrelated to the FDT, and any pages which happen to share a 2M block with the FDT will be mapped with Normal Write-Back Cacheable attributes, which might not be what we want elsewhere (e.g. for carve-outs using Non-Cacheable attributes).
The logic to handle mapping the FDT with 2M blocks requires some special cases in the fixmap code, and ties it to the early page table configuration by virtue of the SWAPPER_TABLE_SHIFT and SWAPPER_BLOCK_SIZE constants used to determine the granularity used to map the FDT.
This patch simplifies the FDT logic and removes the unnecessary mappings of surrounding pages by always mapping the FDT at page granularity as with all other fixmap mappings. To do so we statically reserve multiple PTE tables to cover the fixmap VA range. Since the FDT can be at most 2M, for 4K pages we only need to allocate a single additional PTE table, and for 16K and 64K pages the existing single PTE table is sufficient.
The PTE table allocation scales with the number of slots reserved in the fixmap, and so this also makes it easier to add more fixmap entries if we require those in future.
Our VA layout means that the fixmap will always fall within a single PMD table (and consequently, within a single PUD/P4D/PGD entry), which we can verify at compile time with a static_assert(). With that assert a number of runtime warnings become impossible, and are removed.
I've boot-tested this patch with both 4K and 64K pages.
Signed-off-by: Mark Rutland <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ryan Roberts <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
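
The arithmetic behind the single extra table (a worked example, not from the patch): a PTE table has PAGE_SIZE / 8 entries, each mapping one page.

    #include <stdio.h>

    int main(void)
    {
            for (int page_shift = 12; page_shift <= 16; page_shift += 2) {
                    long long span = (1LL << (page_shift - 3)) << page_shift;

                    printf("%2dK pages: one PTE table covers %lld MiB\n",
                           1 << (page_shift - 10), span >> 20);
            }
            return 0;
    }

This prints 2 MiB, 32 MiB and 512 MiB for 4K, 16K and 64K pages. A 2M FDT can straddle a 2 MiB table boundary, hence exactly one additional table with 4K pages and none for the larger granules.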

Revision tags: v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5

# 38e4b660 | 08-Nov-2022 | Anshuman Khandual <[email protected]>
arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel mapping at PMD (aka huge page aka block) level is only applicable with 4K base pages, which makes it 2MB aligned, a necessary requirement for the linear mapping and the physical memory start address. This can be achieved by directly checking against the base page size itself. Drop the macro ARM64_KERNEL_USES_PMD_MAPS, which is now redundant.
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

Revision tags: v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3

# 5fbc49ce | 26-Aug-2022 | Ard Biesheuvel <[email protected]>
arm64: mm: Reserve enough pages for the initial ID map
The logic that conditionally allocates one additional page at each swapper page table level if KASLR is enabled is also applied to the initial ID map, now that we have started using the same set of macros to allocate the space for it.
However, the placement of the kernel in physical memory might result in additional pages being needed at any level, even if KASLR is disabled in the build. So account for this in the computation.
Fixes: c3cee924bd85 ("arm64: head: cover entire kernel image in initial ID map") Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
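
A worked example (hypothetical addresses) of why placement alone can cost an extra page per level: the number of next-level tables a range touches depends on where it starts, not just on its size.

    #include <stdint.h>
    #include <stdio.h>

    /* Tables touched at a level: inclusive index of the last byte minus
     * inclusive index of the first byte, plus one.
     */
    static uint64_t tables_at_level(uint64_t pa, uint64_t size, int shift)
    {
            return ((pa + size - 1) >> shift) - (pa >> shift) + 1;
    }

    int main(void)
    {
            uint64_t size = UINT64_C(16) << 20;     /* 16M image */
            int pud_shift = 30;                     /* 1G per entry, 4K pages */

            /* Comfortably inside one 1G region... */
            printf("%llu\n", (unsigned long long)
                   tables_at_level(0x40000000, size, pud_shift));
            /* ...or straddling a 1G boundary: one more table needed. */
            printf("%llu\n", (unsigned long long)
                   tables_at_level(0x7ff00000, size, pud_shift));
            return 0;
    }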

Revision tags: v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4

# f70b3a23 | 24-Jun-2022 | Ard Biesheuvel <[email protected]>
arm64: head: create a temporary FDT mapping in the initial ID map
We need to access the DT very early to get at the command line and the KASLR seed, which currently means we rely on some hacks to call into the kernel before really calling into the kernel, which is undesirable.
So instead, let's create a mapping for the FDT in the initial ID map, which is feasible now that it has been extended to cover more than a single page or block, and can be updated in place to remap other output addresses.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

# c3cee924 | 24-Jun-2022 | Ard Biesheuvel <[email protected]>
arm64: head: cover entire kernel image in initial ID map
As a first step towards avoiding the need to create, tear down and recreate the kernel virtual mapping with MMU and caches disabled, start by expanding the ID map so it covers the page tables as well as all executable code. This will allow us to populate the page tables with the MMU and caches on, and call KASLR init code before setting up the virtual mapping.
Since this ID map is only needed at boot, create it as a temporary set of page tables, and populate the permanent ID map after enabling the MMU and caches. While at it, switch to read-only attributes where possible, as writable permissions are only needed for the initial kernel page tables. Note that on 4k granule configurations, the permanent ID map will now be reduced to a single page rather than a 2M block mapping.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

Revision tags: v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14

# 90268574 | 23-Aug-2021 | Mark Rutland <[email protected]>
arm64: head: avoid over-mapping in map_memory
The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds.
We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then:
* In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping.
* In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map.
As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to.
The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this.
Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses.
Fixes: 0370b31e4845 ("arm64: Extend early page table code to allow for larger kernels") Cc: <[email protected]> # 4.16.x Signed-off-by: Mark Rutland <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Steve Capper <[email protected]> Cc: Will Deacon <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
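
A compact demonstration of the off-by-one (assuming a 2M SWAPPER_BLOCK_SIZE, i.e. 4K pages; the addresses are hypothetical): applying inclusive-bound index math to an exclusive end maps one extra block whenever that end is block-aligned.

    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_SHIFT	21	/* 2M swapper blocks with 4K pages */

    int main(void)
    {
            uint64_t vstart = 0x40000000;
            uint64_t vend   = 0x40400000;   /* exclusive end: exactly 2 blocks */

            /* Buggy: exclusive bound fed into inclusive-bound math. */
            uint64_t buggy = (vend >> BLOCK_SHIFT) -
                             (vstart >> BLOCK_SHIFT) + 1;
            /* Fixed: convert to an inclusive end first, as the patch does. */
            uint64_t fixed = ((vend - 1) >> BLOCK_SHIFT) -
                             (vstart >> BLOCK_SHIFT) + 1;

            printf("entries mapped: buggy=%llu fixed=%llu\n",
                   (unsigned long long)buggy, (unsigned long long)fixed);
            return 0;
    }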

Revision tags: v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7

# 2062d44d | 18-Jun-2021 | Anshuman Khandual <[email protected]>
arm64/mm: Rename ARM64_SWAPPER_USES_SECTION_MAPS
ARM64_SWAPPER_USES_SECTION_MAPS implies that PMD level huge page mappings are used for the swapper, idmap and vmemmap. Let's make it PMD explicit, removing any possible confusion with generic memory sections, and also a bit more generic, as it applies to the idmap and vmemmap mappings as well. Hence rename it to ARM64_KERNEL_USES_PMD_MAPS instead.
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Acked-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

# 4aaa87ab | 14-Jun-2021 | Anshuman Khandual <[email protected]>
arm64/mm: Drop SECTION_[SHIFT|SIZE|MASK]
SECTION_[SHIFT|SIZE|MASK] are essentially PMD_[SHIFT|SIZE|MASK], but they create confusion by resembling the generic sparsemem memory sections, which are derived from SECTION_SIZE_BITS. Section references here have always implied PMD level block mapping. Instead, just use the PMD level macros, which makes this explicit and removes the confusion with sparsemem memory sections.
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Acked-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

# ca6ece6a | 14-Jun-2021 | Anshuman Khandual <[email protected]>
arm64/mm: Use CONT_PMD_SHIFT for ARM64_MEMSTART_SHIFT
ARM64_MEMSTART_SIZE needs to be aligned with CONT_PMD_SIZE on 16K page size config. Hence just directly use CONT_PMD_SHIFT.
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Acked-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
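
The numbers behind the change (a worked example, assuming the architectural grouping of 32 contiguous PMDs on the 16K granule):

    #include <stdio.h>

    int main(void)
    {
            int page_shift = 14;                            /* 16K pages */
            int pmd_shift = page_shift + (page_shift - 3);  /* = 25, 32M */
            int cont_pmd_shift = pmd_shift + 5;             /* 32 PMDs = 1G */

            printf("PMD_SHIFT=%d (%d MiB), CONT_PMD_SHIFT=%d (%d MiB)\n",
                   pmd_shift, 1 << (pmd_shift - 20),
                   cont_pmd_shift, 1 << (cont_pmd_shift - 20));
            return 0;
    }

So on 16K pages ARM64_MEMSTART_ALIGN moves from a 32 MiB to a 1 GiB (1024 MiB) alignment, matching CONT_PMD_SIZE.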

# 0f473ac7 | 14-Jun-2021 | Anshuman Khandual <[email protected]>
arm64/mm: Drop SWAPPER_INIT_MAP_SIZE
Commit cdef5f6e9e0e ("arm64: mm: allocate pagetables anywhere") dropped the last reference to SWAPPER_INIT_MAP_SIZE. Just clean it up.
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

Revision tags: v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12

# 782276b4 | 20-Apr-2021 | Catalin Marinas <[email protected]>
arm64: Force SPARSEMEM_VMEMMAP as the only memory management model
Currently arm64 allows a choice of FLATMEM, SPARSEMEM and SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM does not seem to boot in certain configurations (guest under KVM with QEMU as a VMM). Since the reduction of SECTION_SIZE_BITS to 27 (4K pages) or 29 (64K pages), there's little argument against the memory wasted by the mem_map array with SPARSEMEM.
Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and remove the corresponding #ifdefs under arch/arm64/.
Signed-off-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Acked-by: Will Deacon <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Acked-by: Marc Zyngier <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Acked-by: Mike Rapoport <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>

Revision tags: v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3

# 833be850 | 03-Nov-2020 | Mark Rutland <[email protected]>
arm64: consistently use reserved_pg_dir
Depending on configuration options and specific code paths, we either use the empty_zero_page or the configuration-dependent reserved_ttbr0 as a reserved value for TTBR{0,1}_EL1.
To simplify this code, let's always allocate and use the same reserved_pg_dir, replacing reserved_ttbr0. Note that this is allocated (and hence pre-zeroed), and is also marked as read-only in the kernel Image mapping.
Keeping this separate from the empty_zero_page potentially helps with robustness as the empty_zero_page is used in a number of cases where a failure to map it read-only could allow it to become corrupted.
The (presently unused) swapper_pg_end symbol is also removed, and comments are added wherever we rely on the offsets between the pre-allocated pg_dirs to keep these cases easily identifiable.
Signed-off-by: Mark Rutland <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>

Revision tags: v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3

# 120dc60d | 25-Aug-2020 | Ard Biesheuvel <[email protected]>
arm64: get rid of TEXT_OFFSET
TEXT_OFFSET serves no purpose, and for this reason, it was redefined as 0x0 in the v5.8 timeframe. Since this does not appear to have caused any issues that require us to revisit that decision, let's get rid of the macro entirely, along with any references to it.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>

Revision tags: v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4

# 5f1f7f6c | 30-Jun-2020 | Will Deacon <[email protected]>
arm64: Reduce the number of header files pulled into vmlinux.lds.S
Although vmlinux.lds.S smells like an assembly file and is compiled with __ASSEMBLY__ defined, it's actually just fed to the preprocessor to create our linker script. This means that any assembly macros defined by headers that it includes will result in a helpful link error:
| aarch64-linux-gnu-ld:./arch/arm64/kernel/vmlinux.lds:1: syntax error
In preparation for an arm64-private asm/rwonce.h implementation, which will end up pulling assembly macros into linux/compiler.h, reduce the number of headers we include directly and transitively in vmlinux.lds.S
Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Will Deacon <[email protected]>

Revision tags: v5.8-rc3, v5.8-rc2, v5.8-rc1

# ca5999fd | 09-Jun-2020 | Mike Rapoport <[email protected]>
mm: introduce include/linux/pgtable.h
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions.
Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h.
Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>

Revision tags: v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4

# caab277b | 03-Jun-2019 | Thomas Gleixner <[email protected]>
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Alexios Zavras <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>