Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1
# 06668257 | 24-May-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove page_mapping()
All callers are now converted, so delete this compatibility wrapper. Also fix up some comments that referred to page_mapping().
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: Sidhartha Kumar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
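For anyone converting a remaining out-of-tree caller, the replacement is the folio API; a minimal sketch, where example_get_mapping() is a hypothetical caller and page_folio()/folio_mapping() are the real helpers:

    #include <linux/pagemap.h>

    /* Hypothetical caller showing the conversion; page_mapping(),
     * the wrapper this commit deletes, used to do exactly this. */
    static struct address_space *example_get_mapping(struct page *page)
    {
            /* Before: return page_mapping(page); */
            return folio_mapping(page_folio(page));
    }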
Revision tags: v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7
# 6766a8ef | 16-Oct-2023 | Mark Rutland <[email protected]>
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC
In icache_inval_all_pou() we use cpus_have_const_cap() to check for ARM64_HAS_CACHE_DIC, but this is not necessary and alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.
The cpus_have_const_cap() check in icache_inval_all_pou() is an optimization to skip a redundant (but benign) IC IALLUIS + DSB ISH sequence when all CPUs in the system have DIC. In the window between detecting the ARM64_HAS_CACHE_DIC cpucap and patching alternative branches there is only a single potential call to icache_inval_all_pou() (in the alternatives patching itself), which there's no need to optimize for at the expense of other callers.
This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. This also aligns better with the way we patch the assembly cache maintenance sequences in arch/arm64/mm/cache.S.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
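A sketch of the resulting helper, reconstructed from the description above rather than quoted from the diff:

    #include <asm/alternative-macros.h>
    #include <asm/barrier.h>

    static __always_inline void icache_inval_all_pou(void)
    {
            /* Patched to a branch or NOP when alternatives are applied;
             * no load/test of the system_cpucaps bitmap is generated. */
            if (alternative_has_cap_unlikely(ARM64_HAS_CACHE_DIC))
                    return;

            asm("ic ialluis");
            dsb(ish);
    }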
Revision tags: v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5
# 4a169d61 | 02-Aug-2023 | Matthew Wilcox (Oracle) <[email protected]>
arm64: implement the new page table range API
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Mike Rapoport (IBM) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
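With the flag per-folio, the deferred-flush helper becomes a single bit test; a sketch of the arm64 flush_dcache_folio(), assuming the mainline PG_dcache_clean definition:

    #include <linux/pagemap.h>
    #include <asm/cacheflush.h>

    void flush_dcache_folio(struct folio *folio)
    {
            /* One flag now covers every page of the folio. */
            if (test_bit(PG_dcache_clean, &folio->flags))
                    clear_bit(PG_dcache_clean, &folio->flags);
    }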
Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2
# 7eacf185 | 10-Jun-2022 | Will Deacon <[email protected]>
arm64: mm: Remove assembly DMA cache maintenance wrappers
Remove the __dma_{flush,map,unmap}_area assembly wrappers and call the appropriate cache maintenance functions directly from the DMA mapping callbacks.
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
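With the wrappers gone, the DMA mapping callbacks invoke the renamed maintenance helpers directly; a sketch of the resulting shape (dcache_clean_poc()/dcache_inval_poc() are the real helpers, the bodies are reconstructed from the description):

    #include <linux/dma-direction.h>
    #include <asm/cacheflush.h>

    void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                                  enum dma_data_direction dir)
    {
            unsigned long start = (unsigned long)phys_to_virt(paddr);

            /* Clean to the PoC so the device observes the CPU's writes. */
            dcache_clean_poc(start, start + size);
    }

    void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
                               enum dma_data_direction dir)
    {
            unsigned long start = (unsigned long)phys_to_virt(paddr);

            /* Discard stale lines so the CPU sees the device's writes. */
            if (dir != DMA_TO_DEVICE)
                    dcache_inval_poc(start, start + size);
    }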
Revision tags: v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1
# 6d47c23b | 08-Jul-2021 | Mike Rapoport <[email protected]>
set_memory: allow querying whether set_direct_map_*() is actually enabled
On arm64, set_direct_map_*() functions may return 0 without actually changing the linear map. This behaviour can be controlled using kernel parameters, so we need a way to determine at runtime whether calls to set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have any effect.
Extend the set_memory API with a can_set_direct_map() function that allows checking whether calling set_direct_map_*() will actually change the page table, replace several occurrences of open-coded checks in arm64 with the new function, and provide a generic stub for architectures that always modify page tables upon calls to the set_direct_map APIs.
[[email protected]: arm64: kfence: fix header inclusion]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Rapoport <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Acked-by: James Bottomley <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Christopher Lameter <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Elena Reshetova <[email protected]>
Cc: Hagen Paul Pfeifer <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: James Bottomley <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tycho Andersen <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: kernel test robot <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
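A sketch of the arm64 flavour and a hypothetical caller using the new query, assuming the rodata_full/DEBUG_PAGEALLOC conditions described in the series:

    #include <linux/mm.h>
    #include <linux/set_memory.h>

    /* The linear map can only be changed at page granularity when it was
     * built that way: rodata=full or DEBUG_PAGEALLOC. */
    bool can_set_direct_map(void)
    {
            return rodata_full || debug_pagealloc_enabled();
    }

    /* Hypothetical caller: decide at runtime whether the call matters. */
    static int example_unmap_from_direct_map(struct page *page)
    {
            if (!can_set_direct_map())
                    return 0;       /* the call would be a benign no-op */

            return set_direct_map_invalid_noflush(page);
    }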
Revision tags: v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4
# fade9c2c | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: Rename arm64-internal cache maintenance functions
Although naming across the codebase isn't that consistent, it tends to follow certain patterns. Moreover, the term "flush" isn't defined in the Arm Architecture reference manual, and might be interpreted to mean clean, invalidate, or both for a cache.
Rename arm64-internal functions to make the naming internally consistent, as well as consistent with the Arm ARM, by specifying whether the operation applies to the instruction cache, the data cache, or both, and whether it is a clean, an invalidate, or both. Also specify the point to which the operation applies, i.e., the point of unification (PoU), coherency (PoC), or persistence (PoP).
This commit applies the following sed transformation to all files under arch/arm64:
"s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\ "s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\ "s/\binvalidate_icache_range\b/icache_inval_pou/g;"\ "s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\ "s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\ "s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\ "s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\ "s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\ "s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\ "s/\b__flush_icache_all\b/icache_inval_all_pou/g;"
Note that __clean_dcache_area_poc is deliberately missing a word boundary check at the beginning in order to match the efistub symbols in image-vars.h.
Also note that, despite its name, __flush_icache_range operates on both instruction and data caches. The name change here reflects that.
No functional change intended.
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# 393239be | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: Fix cache maintenance function comments
Fix and expand comments for the cache maintenance functions in cacheflush.h. Add comments to functions that weren't described before, explaining what the functions do using Arm Architecture Reference Manual terminology.
No functional change intended.
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# 8c28d52c | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: sync_icache_aliases to take end parameter instead of size
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.
No functional change intended.
Reported-by: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
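The call-site conversion is mechanical; a sketch, with example_sync() as a hypothetical caller and the pre-change (kaddr, len) signature assumed. The same pattern applies to the other size-to-end conversions below:

    #include <asm/cacheflush.h>

    static void example_sync(unsigned long start, unsigned long len)
    {
            /* Before: sync_icache_aliases((void *)start, len); */
            sync_icache_aliases(start, start + len);
    }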
# 406d7d4e | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: __clean_dcache_area_pou to take end parameter instead of size
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.
No functional change intended.
Reported-by: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# f749448e | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: __clean_dcache_area_pop to take end parameter instead of size
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.
No functional change intended.
Reported-by: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# 1f42faf1 | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: __clean_dcache_area_poc to take end parameter instead of size
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.
Because the code is shared with __dma_clean_area, it changes the parameters for that as well. However, __dma_clean_area is local to cache.S, so no other users are affected.
No functional change intended.
Reported-by: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# 814b1860 | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: __flush_dcache_area to take end parameter instead of size
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.
No functional change intended.
Reported-by: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# e3974adb | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: __inval_dcache_area to take end parameter instead of size
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.
Because the code is shared with __dma_inv_area, it changes the parameters for that as well. However, __dma_inv_area is local to cache.S, so no other users are affected.
No functional change intended.
Reported-by: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
# 7908072d | 24-May-2021 | Fuad Tabba <[email protected]>
arm64: Do not enable uaccess for invalidate_icache_range
invalidate_icache_range() works on kernel addresses, and doesn't need uaccess. Remove the code that toggles uaccess_ttbr0_enable, as well as the code that emits an entry into the exception table (via the macro invalidate_icache_by_line).
The return type of invalidate_icache_range() changes from int (which used to indicate a fault) to void, since it doesn't need uaccess and won't fault. Note that the return value was never checked by any of the callers.
No functional change intended. Possible performance impact due to the reduced number of instructions.
Reported-by: Catalin Marinas <[email protected]>
Reported-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Fuad Tabba <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
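The visible interface change is only the return type; a sketch of the prototype per the description above:

    /* Before (toggled uaccess, could fault; return value never checked):
     *   int invalidate_icache_range(unsigned long start, unsigned long end);
     */
    void invalidate_icache_range(unsigned long start, unsigned long end);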
Revision tags: v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6
# 1e193c70 | 25-Jan-2021 | Shaokun Zhang <[email protected]>
arm64: cacheflush: Remove stale comment
Remove a stale comment, obsolete since commit a7ba121215fa ("arm64: use asm-generic/cacheflush.h").
Cc: Christoph Hellwig <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Shaokun Zhang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
Revision tags: v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1
# 32a0de88 | 15-Dec-2020 | Mike Rapoport <[email protected]>
arch, mm: make kernel_page_present() always available
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation.
Add a RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be part of the set_memory API, and remove the ugly ifdefery in include/linux/mm.h around the current declarations of kernel_page_present().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: "Edgecombe, Rick P" <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
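A sketch of the generic stub this adds for architectures that never remove pages from the direct map (the exact config guard in the tree may differ):

    /* Architectures that can unmap pages provide a real implementation;
     * everyone else trivially reports every page as present. */
    #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
    static inline bool kernel_page_present(struct page *page)
    {
            return true;
    }
    #endif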
Revision tags: v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1
# a7ba1212 | 08-Jun-2020 | Christoph Hellwig <[email protected]>
arm64: use asm-generic/cacheflush.h
ARM64 needs almost no cache flushing routines of its own. Rely on asm-generic/cacheflush.h for the defaults.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
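The pattern, sketched in outline rather than quoted from the diff: declare only the hooks arm64 really implements, then let asm-generic supply no-op defaults for the rest:

    /* arch/arm64/include/asm/cacheflush.h, in outline */
    extern void flush_dcache_page(struct page *page);
    #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1

    /* Anything not defined above becomes a no-op default. */
    #include <asm-generic/cacheflush.h>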
Revision tags: v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5
# ab8ad279 | 04-May-2020 | Daniel Thompson <[email protected]>
arm64: cacheflush: Fix KGDB trap detection
flush_icache_range() contains a bodge to avoid issuing IPIs when the kgdb trap handler is running, because issuing IPIs is unsafe (and not needed) in this execution context. However, the current test, based on kgdb_connected, is flawed: it both over-matches and under-matches.
The over-match occurs because kgdb_connected is set when gdb attaches to the stub and remains set during normal running. This is relatively harmless because in almost all cases irqs_disabled() will be false.
The under-match is more serious. When kdb is used instead of kgdb to access the debugger, kgdb_connected is not set in all the places where the debug core updates sw breakpoints (and hence flushes the icache). This can lead to deadlock.
Fix by replacing the ad-hoc check with the proper kgdb macro. This also allows us to drop the #ifdef wrapper.
Fixes: 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings")
Signed-off-by: Daniel Thompson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
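A sketch of the fixed check, assuming in_dbg_master() is the "proper kgdb macro" referred to above:

    #include <linux/kgdb.h>
    #include <linux/smp.h>

    static inline void flush_icache_range(unsigned long start, unsigned long end)
    {
            __flush_icache_range(start, end);

            /* KGDB runs cache maintenance with IRQs disabled, so the IPI
             * below would deadlock; KGDB already rounds up the secondary
             * CPUs itself, so the kick is safe to skip here. */
            if (in_dbg_master())
                    return;

            kick_all_cpus_sync();
    }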
Revision tags: v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3
# e43f1331 | 20-Feb-2020 | James Morse <[email protected]>
arm64: Ask the compiler to __always_inline functions used by KVM at HYP
KVM uses some of the static-inline helpers like icache_is_vipt() from its HYP code. This assumes the function is inlined so that the code is mapped to EL2. The compiler may decide not to inline these, and the out-of-line version may not be in the __hyp_text section.
Add the additional __always_ hint to these static-inlines that are used by KVM.
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
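The change itself is one attribute per helper; a sketch with a hypothetical helper standing in for the real cacheflush.h static-inlines:

    /* Before: 'static inline' lets the compiler emit an out-of-line copy,
     * which would not land in the __hyp_text section mapped at EL2. */
    static __always_inline bool example_cache_helper(void)
    {
            return true;    /* body irrelevant; the attribute is the fix */
    }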
Revision tags: v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2
# 4739d53f | 23-May-2019 | Ard Biesheuvel <[email protected]>
arm64/mm: wire up CONFIG_ARCH_HAS_SET_DIRECT_MAP
Wire up the special helper functions to manipulate aliases of vmalloc regions in the linear map.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
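The hooks selected by CONFIG_ARCH_HAS_SET_DIRECT_MAP, shown as prototypes only (the arm64 bodies walk the linear-map page tables):

    int set_direct_map_default_noflush(struct page *page);
    int set_direct_map_invalid_noflush(struct page *page);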
# caab277b | 03-Jun-2019 | Thomas Gleixner <[email protected]>
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Alexios Zavras <[email protected]>
Reviewed-by: Allison Randal <[email protected]>
Reviewed-by: Enrico Weigelt <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Revision tags: v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1
# 3b8c9f1c | 11-Jun-2018 | Will Deacon <[email protected]>
arm64: IPI each CPU after invalidating the I-cache for kernel mappings
When invalidating the instruction cache for a kernel mapping via flush_icache_range(), it is also necessary to flush the pipeline for other CPUs so that instructions fetched into the pipeline before the I-cache invalidation are discarded. For example, if module 'foo' is unloaded and then module 'bar' is loaded into the same area of memory, a CPU could end up executing instructions from 'foo' when branching into 'bar' if these instructions were fetched into the pipeline before 'foo' was unloaded.
Whilst this is highly unlikely to occur in practice, particularly as any exception acts as a context-synchronizing operation, following the letter of the architecture requires us to execute an ISB on each CPU in order for the new instruction stream to be visible.
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
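The fix amounts to one cross-CPU kick after the maintenance; a sketch reconstructed from the description, where kick_all_cpus_sync() is the real helper that IPIs every online CPU:

    #include <linux/smp.h>

    static inline void flush_icache_range(unsigned long start, unsigned long end)
    {
            __flush_icache_range(start, end);

            /* IPI all online CPUs so they undergo a context
             * synchronization event and refetch the new instructions. */
            kick_all_cpus_sync();
    }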
Revision tags: v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5
# 5fb94e9c | 08-May-2018 | Mauro Carvalho Chehab <[email protected]>
docs: Fix some broken references
As we move stuff around, some doc references are broken. Fix some of them via this script: ./scripts/documentation-file-ref-check --fix
Manually checked if the produced result is valid, removing a few false-positives.
Acked-by: Takashi Iwai <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Acked-by: Stephen Boyd <[email protected]>
Acked-by: Charles Keepax <[email protected]>
Acked-by: Mathieu Poirier <[email protected]>
Reviewed-by: Coly Li <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Acked-by: Jonathan Corbet <[email protected]>
Revision tags: v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1
# 427c896f | 10-Apr-2018 | Matthew Wilcox <[email protected]>
arm64: turn flush_dcache_mmap_lock into a no-op
ARM64 doesn't walk the VMA tree in its flush_dcache_page() implementation, so has no need to take the tree_lock.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Cc: Darrick J. Wong <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Jeff Layton <[email protected]>
Cc: Ryusuke Konishi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
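A sketch of the resulting definitions (previously these took and released the mapping's tree_lock):

    #define flush_dcache_mmap_lock(mapping)         do { } while (0)
    #define flush_dcache_mmap_unlock(mapping)       do { } while (0)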
Revision tags: v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5
# 6ae4b6e0 | 07-Mar-2018 | Shanker Donthineni <[email protected]>
arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC
The DCache clean & ICache invalidation requirements for instructions to be data coherence are discoverable through new fields in CTR_EL0. The following two control bits DIC and IDC were defined for this purpose. No need to perform point of unification cache maintenance operations from software on systems where CPU caches are transparent.
This patch optimize the three functions __flush_cache_user_range(), clean_dcache_area_pou() and invalidate_icache_range() if the hardware reports CTR_EL0.IDC and/or CTR_EL0.IDC. Basically it skips the two instructions 'DC CVAU' and 'IC IVAU', and the associated loop logic in order to avoid the unnecessary overhead.
CTR_EL0.DIC: Instruction cache invalidation requirements for instruction to data coherence. The meaning of this bit[29]. 0: Instruction cache invalidation to the point of unification is required for instruction to data coherence. 1: Instruction cache cleaning to the point of unification is not required for instruction to data coherence.
CTR_EL0.IDC: Data cache clean requirements for instruction to data coherence. The meaning of this bit[28]. 0: Data cache clean to the point of unification is required for instruction to data coherence, unless CLIDR_EL1.LoC == 0b000 or (CLIDR_EL1.LoUIS == 0b000 && CLIDR_EL1.LoUU == 0b000). 1: Data cache clean to the point of unification is not required for instruction to data coherence.
Co-authored-by: Philip Elcan <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Signed-off-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
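A sketch of the discovery logic only; the actual patch wires these bits up through the cpufeature machinery so the maintenance loops can be patched out at boot rather than tested at runtime:

    #include <linux/bits.h>
    #include <asm/cpufeature.h>
    #include <asm/sysreg.h>

    #define CTR_IDC_SHIFT   28      /* D-cache clean not required for I/D coherence */
    #define CTR_DIC_SHIFT   29      /* I-cache invalidate not required for I/D coherence */

    static bool pou_maintenance_not_required(void)
    {
            u64 ctr = read_sanitised_ftr_reg(SYS_CTR_EL0);

            return (ctr & BIT(CTR_IDC_SHIFT)) && (ctr & BIT(CTR_DIC_SHIFT));
    }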