|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6 |
|
| #
b69b47a6 |
| 03-Mar-2025 |
Thomas Weißschuh <[email protected]> |
arm64: Make asm/cache.h compatible with vDSO
asm/cache.h can be used during the vDSO build through vdso/cache.h. Not all definitions in it are compatible with the vDSO, especially the compat vDSO.
Hide the more complex definitions from the vDSO build.
Signed-off-by: Thomas Weißschuh <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/[email protected]
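A minimal sketch of the approach: simple constants stay visible to the vDSO while kernel-only material is fenced off. The BUILD_VDSO guard name here is an assumption for illustration, not taken from the patch:

  #define L1_CACHE_SHIFT		(6)
  #define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

  #ifndef BUILD_VDSO			/* hypothetical guard name */
  /* cpucap checks, runtime-probed helpers, etc.: kernel-only */
  #endif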
|
|
Revision tags: v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5 |
|
| #
d8e12a0d |
| 04-Dec-2023 |
Marc Zyngier <[email protected]> |
arm64: Kill detection of VPIPT i-cache policy
Since the kernel will never run on a system with the VPIPT i-cache policy, drop the detection code altogether.
Reviewed-by: Zenghui Yu <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7 |
|
| #
9382bc44 |
| 12-Jun-2023 |
Catalin Marinas <[email protected]> |
arm64: allow kmalloc() caches aligned to the smaller cache_line_size()
On arm64, ARCH_DMA_MINALIGN is 128, larger than the cache line size on most of the current platforms (typically 64). Define ARCH_KMALLOC_MINALIGN to 8 (the default for architectures without their own ARCH_DMA_MINALIGN) and override dma_get_cache_alignment() to return cache_line_size(), probed at run-time. The kmalloc() caches will be limited to the cache line size. This will allow the additional kmalloc-{64,192} caches on most arm64 platforms.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]> Tested-by: Isaac J. Manjarres <[email protected]> Cc: Will Deacon <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Jerry Snitselaar <[email protected]> Cc: Joerg Roedel <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Lars-Peter Clausen <[email protected]> Cc: Logan Gunthorpe <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Mark Brown <[email protected]> Cc: Mike Snitzer <[email protected]> Cc: "Rafael J. Wysocki" <[email protected]> Cc: Robin Murphy <[email protected]> Cc: Saravana Kannan <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
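A sketch of the resulting arrangement (bodies illustrative; dma_get_cache_alignment() and cache_line_size() are the kernel APIs named in the message above):

  #define ARCH_DMA_MINALIGN	(128)	/* worst-case CWG, kept for DMA */
  #define ARCH_KMALLOC_MINALIGN	(8)	/* generic default; enables kmalloc-{64,192} */

  int dma_get_cache_alignment(void)
  {
  	return cache_line_size();	/* probed at run time, typically 64 */
  }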
|
|
Revision tags: v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4 |
|
| #
7af0c253 |
| 12-Jan-2023 |
Akihiko Odaki <[email protected]> |
KVM: arm64: Normalize cache configuration
Before this change, the cache configuration of the physical CPU was exposed to vcpus. This is problematic because the cache configuration a vcpu sees varies when it migrates between physical CPUs with different cache configurations.
Fabricate cache configuration from the sanitized value, which holds the CTR_EL0 value the userspace sees regardless of which physical CPU it resides on.
CLIDR_EL1 and CCSIDR_EL1 are now writable from the userspace so that the VMM can restore the values saved with the old kernel.
Suggested-by: Marc Zyngier <[email protected]> Signed-off-by: Akihiko Odaki <[email protected]> Link: https://lore.kernel.org/r/[email protected] [ Oliver: Squash Marc's fix for CCSIDR_EL1.LineSize when set from userspace ] Signed-off-by: Oliver Upton <[email protected]>
|
| #
805e6ec1 |
| 12-Jan-2023 |
Akihiko Odaki <[email protected]> |
arm64/cache: Move CLIDR macro definitions
The macros are useful for KVM, which needs to manage how CLIDR is exposed to vcpus, so move them to include/asm/cache.h, which KVM can refer to.
Signed-off-by: Akihiko Odaki <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Oliver Upton <[email protected]>
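For reference, CLIDR_EL1 field accessors of the kind being moved look like this (abridged sketch; each cache level has a 3-bit cache-type field):

  #define CLIDR_CTYPE_SHIFT(level)	(3 * ((level) - 1))
  #define CLIDR_CTYPE_MASK(level)	(7 << CLIDR_CTYPE_SHIFT(level))
  #define CLIDR_CTYPE(clidr, level)	\
  	(((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))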
|
|
Revision tags: v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5 |
|
| #
d9b230f6 |
| 05-Sep-2022 |
Kristina Martsenko <[email protected]> |
arm64: cache: Remove unused CTR_CACHE_MINLINE_MASK
A recent change renamed CTR_DMINLINE_SHIFT to CTR_EL0_DminLine_SHIFT but didn't fully update CTR_CACHE_MINLINE_MASK. As CTR_CACHE_MINLINE_MASK is not used anywhere anyway, just remove it.
Signed-off-by: Kristina Martsenko <[email protected]> Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
|
Revision tags: v6.0-rc4, v6.0-rc3, v6.0-rc2 |
|
| #
53d2d84a |
| 18-Aug-2022 |
Mark Brown <[email protected]> |
arm64/cache: Fix cache_type_cwg() for register generation
Ard noticed that since we converted CTR_EL0 to automatic generation we have been seeing errors on some systems when handling the value of cache_type_cwg(), such as
CPU features: No Cache Writeback Granule information, assuming 128
This is because the manual definition of CTR_EL0_CWG_MASK was done without a shift, while our convention is to define the mask after shifting. This meant that the user in cache_type_cwg() was broken, since it was written for the manually defined shift-then-mask pattern. Fix this by converting it to use SYS_FIELD_GET().
The only other field where the _MASK for this register is used is IminLine which is at offset 0 so unaffected.
Fixes: 9a3634d02301 ("arm64/sysreg: Convert CTR_EL0 to automatic generation") Reported-by: Ard Biesheuvel <[email protected]> Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
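A sketch of the fixed helper: with the generated, pre-shifted CTR_EL0_CWG_MASK, extraction must mask first and then shift, which is what SYS_FIELD_GET() does. Treat the exact body as illustrative:

  static inline u32 cache_type_cwg(void)
  {
  	return SYS_FIELD_GET(CTR_EL0, CWG, read_cpuid_cachetype());
  }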
|
|
Revision tags: v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6 |
|
| #
5b345e39 |
| 04-Jul-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Standardise naming for CTR_EL0 fields
cache.h contains some defines which are used to represent fields and enumeration values and which do not follow the standard naming convention used when we automatically generate defines for system registers. Update the names of the constants to reflect the standardised naming and move them to sysreg.h.
There is also a helper CTR_L1IP() which was open coded and has been converted to use SYS_FIELD_GET().
Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
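After the conversion, the helper plausibly reduces to a one-liner of this shape (assumed, for illustration):

  #define CTR_L1IP(ctr)		SYS_FIELD_GET(CTR_EL0, L1Ip, ctr)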
|
| #
971f4592 |
| 04-Jul-2022 |
Mark Brown <[email protected]> |
arm64/cache: Restrict which headers are included in __ASSEMBLY__
Future changes to generate register definitions automatically will cause this header to be included in a linker script. This means that headers it includes in turn that are not safe for use in such a context (e.g., due to the use of assembler macros) would cause build problems. Avoid these issues by moving the affected includes and associated defines to the section of the file already guarded by ifndef __ASSEMBLY__.
Suggested-by: Will Deacon <[email protected]> Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
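The shape of the change, illustratively: bare constants remain visible to assembly and linker scripts, while includes that drag in C-only constructs move under the existing guard:

  #define L1_CACHE_SHIFT		(6)
  #define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

  #ifndef __ASSEMBLY__
  #include <linux/bitops.h>	/* safe only in C */
  /* static inline helpers using the constants above ... */
  #endif	/* __ASSEMBLY__ */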
|
| #
dabb128d |
| 04-Jul-2022 |
Mark Brown <[email protected]> |
arm64/cpuinfo: Remove references to reserved cache type
In 155433cb365ee466 ("arm64: cache: Remove support for ASID-tagged VIVT I-caches") we removed all the support for AIVIVT cache types and renamed all references to the field to say "unknown", since support for AIVIVT caches was removed from the architecture. Some confusion has resulted because the corresponding change to the architecture left the value named AIVIVT but documented it as reserved in v8; refactor the code so we simply don't define the constant instead. This will help with automatic generation of this register field, since it means we care less about the correspondence with the ARM.
No functional change, the value displayed to userspace is unchanged.
Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7 |
|
| #
d949a815 |
| 10-May-2022 |
Peter Collingbourne <[email protected]> |
mm: make minimum slab alignment a runtime property
When CONFIG_KASAN_HW_TAGS is enabled we currently increase the minimum slab alignment to 16. This happens even if MTE is not supported in hardware or disabled via kasan=off, which creates an unnecessary memory overhead in those cases. Eliminate this overhead by making the minimum slab alignment a runtime property and only aligning to 16 if KASAN is enabled at runtime.
On a DragonBoard 845c (non-MTE hardware) with a kernel built with CONFIG_KASAN_HW_TAGS, waiting for quiescence after a full Android boot I see the following Slab measurements in /proc/meminfo (median of 3 reboots):
Before: 169020 kB
After:  167304 kB
[[email protected]: make slab alignment type `unsigned int' to avoid casting] Link: https://linux-review.googlesource.com/id/I752e725179b43b144153f4b6f584ceb646473ead Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peter Collingbourne <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Reviewed-by: Hyeonggon Yoo <[email protected]> Tested-by: Hyeonggon Yoo <[email protected]> Acked-by: David Rientjes <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Eric W. Biederman <[email protected]> Cc: Kees Cook <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
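A sketch of the run-time hook this introduces (the kernel's arch_slab_minalign(); treat the exact body as illustrative):

  static inline unsigned int arch_slab_minalign(void)
  {
  	/* 16-byte MTE granules only when tag-based KASAN is really on */
  	return kasan_hw_tags_enabled() ? MTE_GRANULE_SIZE :
  					 __alignof__(unsigned long long);
  }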
|
|
Revision tags: v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2 |
|
| #
c1132702 |
| 12-Jul-2021 |
Will Deacon <[email protected]> |
Revert "arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)"
This reverts commit 65688d2a05deb9f0671a7e2301eadbfe7e27c9e9.
Unfortunately, the original Qualcomm Kryo cores integrated into the MSM8996 SoC feature an L2 cache with 128-byte lines which sits above the Point of Coherency. Consequently, we must restore ARCH_DMA_MINALIGN to its former ugly self so that non-coherent DMA can be performed safely on devices built using this SoC.
Thanks to Jeffrey Hugo for confirming this with a hardware designer.
Link: https://lore.kernel.org/r/CAOCk7NqdpUZFMSXfGjw0_1NaSK5gyTLgpS9kSdZn1jmBy-QkfA@mail.gmail.com/ Reported-by: Yassine Oudjana <[email protected]> Link: https://lore.kernel.org/r/uHgsRacR8hJ7nW-I-pIcehzg-lNIn7NJvaL7bP9tfAftFsBjsgaY2qTjG9zyBgxHkjNL1WPNrD7YVv2JVD2_Wy-a5VTbcq-1xEi8ZnwrXBo=@protonmail.com Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4 |
|
| #
65688d2a |
| 27-May-2021 |
Will Deacon <[email protected]> |
arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)
Back in 97303480753e ("arm64: Increase the max granular size"), ARCH_DMA_MINALIGN was effectively increased to 128 bytes thanks to an increase in L1_CACHE_BYTES due to an unsubstantiated performance claim on the now obsolete ThunderX-1. Although this was reverted in d93277b9839b, ARCH_DMA_MINALIGN was kept at 128 bytes by ebc7e21e0fa2 ("arm64: Increase ARCH_DMA_MINALIGN to 128").
During discussion of the original patch, it was reported that the change also prevented a warning during boot on (again, now obsolete) Qualcomm server hardware where the cache writeback granule was larger than 64 bytes. The reason for this warning was that non-coherent DMA could lead to data corruption due to unexpected writeback from the CPU where a cacheline is shared with other allocations.
Since then, systems have appeared with larger cachelines still, and so commit 8f5c9037a55b ("arm64/mm: Correct the cache line size warning with non coherent device") reworked the warning so that it only appears on systems where non-coherent DMA is actually required and taints the kernel with TAINT_CPU_OUT_OF_SPEC. We are not aware of any systems, even including the aforementioned obsolete machines, which have a CWG larger than 64 bytes and require non-coherent DMA.
More recently, it has been reported that an ARCH_DMA_MINALIGN of 128 bytes wastes considerable memory (~6% immediately after boot on one system).
Reduce ARCH_DMA_MINALIGN to 64 bytes and allow the warning/taint to indicate if there are machines that unknowingly rely on this.
Cc: Catalin Marinas <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Vincent Whitchurch <[email protected]> Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/ Link: https://lore.kernel.org/linux-arm-kernel/CAOZdJXUiRMAguDV+HEJqPg57MyBNqEcTyaH+ya=U93NHb-pdJA@mail.gmail.com/ Link: https://lore.kernel.org/linux-arm-kernel/[email protected]/ Link: https://lore.kernel.org/r/[email protected] Acked-by: Catalin Marinas <[email protected]> Acked-by: Mark Rutland <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Acked-by: Mark Rutland <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse |
|
| #
2cb34276 |
| 26-Feb-2021 |
Andrey Konovalov <[email protected]> |
arm64: kasan: simplify and inline MTE functions
This change provides a simpler implementation of mte_get_mem_tag(), mte_get_random_tag(), and mte_set_mem_tag_range().
Simplifications include removing system_supports_mte() checks, as these functions are only called from the KASAN runtime, which has already checked system_supports_mte(). Besides that, size and address alignment checks are removed from mte_set_mem_tag_range(), as KASAN now does those.
This change also moves these functions into the asm/mte-kasan.h header and implements mte_set_mem_tag_range() via inline assembly to avoid unnecessary function calls.
[[email protected]: fix warning in mte_get_random_tag()] Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/a26121b294fdf76e369cb7a74351d1c03a908930.1612546384.git.andreyknvl@google.com Co-developed-by: Vincenzo Frascino <[email protected]> Signed-off-by: Vincenzo Frascino <[email protected]> Signed-off-by: Andrey Konovalov <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Branislav Rankov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Evgenii Stepanov <[email protected]> Cc: Kevin Brodsky <[email protected]> Cc: Marco Elver <[email protected]> Cc: Peter Collingbourne <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
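A heavily simplified sketch of the idea for one of the helpers; names and the tag conversion are abridged from memory and should be treated as illustrative, not the literal kernel code:

  static __always_inline u8 mte_get_mem_tag(void *addr)
  {
  	/* LDG loads the allocation tag for addr into bits 59:56 of the
  	 * pointer register: no out-of-line call, no redundant checks. */
  	asm(".arch_extension memtag\n"
  	    "ldg %0, [%0]" : "+r" (addr));

  	return 0xF0 | ((u8)((u64)addr >> 56) & 0xF);	/* to KASAN tag form */
  }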
|
|
Revision tags: v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1 |
|
| #
dc09b29f |
| 22-Dec-2020 |
Andrey Konovalov <[email protected]> |
arm64: kasan: align allocations for HW_TAGS
Hardware tag-based KASAN uses the memory tagging approach, which requires all allocations to be aligned to the memory granule size. Align the allocations to MTE_GRANULE_SIZE via ARCH_SLAB_MINALIGN when CONFIG_KASAN_HW_TAGS is enabled.
Link: https://lkml.kernel.org/r/fe64131606b1c2aabfd34ae99554c0d9df18eb19.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <[email protected]> Signed-off-by: Vincenzo Frascino <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Tested-by: Vincenzo Frascino <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Branislav Rankov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Evgenii Stepanov <[email protected]> Cc: Kevin Brodsky <[email protected]> Cc: Marco Elver <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
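Per the message, the change amounts to (illustrative):

  #ifdef CONFIG_KASAN_HW_TAGS
  #define ARCH_SLAB_MINALIGN	MTE_GRANULE_SIZE	/* 16-byte tag granule */
  #endif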
|
|
Revision tags: v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2 |
|
| #
332576e6 |
| 26-Oct-2020 |
Arnd Bergmann <[email protected]> |
arm64: avoid -Woverride-init warning
The icache_policy_str[] definition causes a warning when extra warning flags are enabled:
arch/arm64/kernel/cpuinfo.c:38:26: warning: initialized field overwritten [-Woverride-init]
   38 |  [ICACHE_POLICY_VIPT]  = "VIPT",
      |                          ^~~~~~
arch/arm64/kernel/cpuinfo.c:38:26: note: (near initialization for 'icache_policy_str[2]')
arch/arm64/kernel/cpuinfo.c:39:26: warning: initialized field overwritten [-Woverride-init]
   39 |  [ICACHE_POLICY_PIPT]  = "PIPT",
      |                          ^~~~~~
arch/arm64/kernel/cpuinfo.c:39:26: note: (near initialization for 'icache_policy_str[3]')
arch/arm64/kernel/cpuinfo.c:40:27: warning: initialized field overwritten [-Woverride-init]
   40 |  [ICACHE_POLICY_VPIPT] = "VPIPT",
      |                          ^~~~~~~
arch/arm64/kernel/cpuinfo.c:40:27: note: (near initialization for 'icache_policy_str[0]')
There is no real need for the default initializer here, as printing a NULL string is harmless. Rewrite the logic to have an explicit reserved value for the only one that uses the default value.
This partially reverts the commit that removed ICACHE_POLICY_AIVIVT.
Fixes: 155433cb365e ("arm64: cache: Remove support for ASID-tagged VIVT I-caches") Signed-off-by: Arnd Bergmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
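The resulting table plausibly looks like this (sketch): every slot gets exactly one designated initializer, so nothing is overwritten:

  static const char *const icache_policy_str[] = {
  	[ICACHE_POLICY_VPIPT]		= "VPIPT",
  	[ICACHE_POLICY_RESERVED]	= "RESERVED/UNKNOWN",
  	[ICACHE_POLICY_VIPT]		= "VIPT",
  	[ICACHE_POLICY_PIPT]		= "PIPT",
  };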
|
|
Revision tags: v5.10-rc1 |
|
| #
33def849 |
| 22-Oct-2020 |
Joe Perches <[email protected]> |
treewide: Convert macro and uses of __section(foo) to __section("foo")
Use a more generic form for __section that requires quotes to avoid complications with clang and gcc differences.
Remove the quote operator # from compiler_attributes.h __section macro.
Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo") even if the __attribute__ has multiple list entry forms.
Conversion done using the script at:
https://lore.kernel.org/lkml/[email protected]/2-convert_section.pl
Signed-off-by: Joe Perches <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Miguel Ojeda <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
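Shape of the treewide conversion, with a hypothetical variable for illustration:

  /* before: the macro's # operator supplied the quotes */
  static int probes[4] __section(.data.probes);

  /* after: quotes are written at the use site, # removed from the macro */
  static int probes[4] __section(".data.probes");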
|
|
Revision tags: v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3 |
|
| #
e43f1331 |
| 20-Feb-2020 |
James Morse <[email protected]> |
arm64: Ask the compiler to __always_inline functions used by KVM at HYP
KVM uses some of the static-inline helpers like icache_is_vipt() from its HYP code. This assumes the function is inlined so that the code is mapped to EL2. The compiler may decide not to inline these, and the out-of-line version may not be in the __hyp_text section.
Add the additional __always_ hint to these static-inlines that are used by KVM.
Signed-off-by: James Morse <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
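Shape of the change for one affected helper (illustrative; body abridged):

  -static inline int icache_is_aliasing(void)
  +static __always_inline int icache_is_aliasing(void)
   {
   	return test_bit(ICACHEF_ALIASING, &__icache_flags);
   }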
|
|
Revision tags: v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4 |
|
| #
ee9d90be |
| 17-Oct-2019 |
James Morse <[email protected]> |
arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
Systems affected by Neoverse-N1 #1542419 support DIC so do not need to perform icache maintenance once new instructions are cleaned to the PoU. For the errata workaround, the kernel hides DIC from user-space, so that the unnecessary cache maintenance can be trapped by firmware.
To reduce the number of traps, produce a fake IminLine value based on PAGE_SIZE.
Signed-off-by: James Morse <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
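IminLine encodes log2 of the minimum instruction cache line size in 4-byte words, so a PAGE_SIZE-based fake is PAGE_SHIFT - 2. A sketch of the trap-handler change (abridged; names approximate):

  val &= ~BIT(CTR_DIC_SHIFT);		/* hide DIC from user-space */
  val &= ~CTR_IMINLINE_MASK;
  val |= (PAGE_SHIFT - 2) & CTR_IMINLINE_MASK;	/* one "line" per page */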
|
|
Revision tags: v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5 |
|
| #
80d83812 |
| 12-Aug-2019 |
Nick Desaulniers <[email protected]> |
arm64: prefer __section from compiler_attributes.h
GCC unescapes escaped string section names while Clang does not. Because __section uses the `#` stringification operator for the section name, it doesn't need to be escaped.
This antipattern was found with: $ grep -e __section\(\" -e __section__\(\" -r
Reported-by: Sedat Dilek <[email protected]> Suggested-by: Josh Poimboeuf <[email protected]> Signed-off-by: Nick Desaulniers <[email protected]> Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4 |
|
| #
caab277b |
| 03-Jun-2019 |
Thomas Gleixner <[email protected]> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Alexios Zavras <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
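Each file's boilerplate collapses to a single tag, e.g.:

  // SPDX-License-Identifier: GPL-2.0-only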
|
| #
8f5c9037 |
| 14-Jun-2019 |
Masayoshi Mizuma <[email protected]> |
arm64/mm: Correct the cache line size warning with non coherent device
If the cache line size is greater than ARCH_DMA_MINALIGN (128), the warning is shown and the kernel is tainted as TAINT_CPU_OUT_OF_SPEC.
However, this is not right because, as discussed in the thread [1], the CPU cache line size is a problem only for non-coherent devices.
Since the coherent flag has already been introduced to struct device, show the warning only if the device is a non-coherent device and ARCH_DMA_MINALIGN is smaller than the CPU cache size.
[1] https://lore.kernel.org/linux-arm-kernel/20180514145703.celnlobzn3uh5tc2@localhost/
Signed-off-by: Masayoshi Mizuma <[email protected]> Reviewed-by: Hidetoshi Seto <[email protected]> Tested-by: Zhang Lei <[email protected]> [[email protected]: removed 'if' block for WARN_TAINT] Signed-off-by: Catalin Marinas <[email protected]>
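A sketch of the reworked check in arch_setup_dma_ops() (condition per the description above; message text abridged):

  WARN_TAINT(!coherent && cache_line_size() > ARCH_DMA_MINALIGN,
  	   TAINT_CPU_OUT_OF_SPEC,
  	   "ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)",
  	   ARCH_DMA_MINALIGN, cache_line_size());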
|
|
Revision tags: v5.2-rc3 |
|
| #
7b8c87b2 |
| 28-May-2019 |
Shaokun Zhang <[email protected]> |
arm64: cacheinfo: Update cache_line_size detected from DT or PPTT
cache_line_size() is derived from the CTR_EL0.CWG field and is called mostly by I/O device drivers. On some platforms, like the HiSilicon Kunpeng920 server SoC, cache line sizes differ between the L1/L2 caches and the L3 cache: the L1 cache line size is 64 bytes while L3's is 128 bytes, yet CTR_EL0.CWG misreports it using the L1 cache line size.
We should report the correct value, which is important for I/O performance. Let's update the cache line size if it is detected from DT or PPTT information.
Cc: Will Deacon <[email protected]> Cc: Jeremy Linton <[email protected]> Cc: Zhenfa Qiu <[email protected]> Reported-by: Zhenfa Qiu <[email protected]> Suggested-by: Catalin Marinas <[email protected]> Reviewed-by: Sudeep Holla <[email protected]> Signed-off-by: Shaokun Zhang <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
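Assumed shape of the override, with coherency_max_size coming from DT/PPTT via the generic cacheinfo code (illustrative):

  int cache_line_size(void)
  {
  	if (coherency_max_size > cache_line_size_of_cpu())
  		return coherency_max_size;	/* e.g. the 128-byte L3 */

  	return cache_line_size_of_cpu();	/* CTR_EL0.CWG-derived */
  }
  EXPORT_SYMBOL_GPL(cache_line_size);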
|
|
Revision tags: v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2 |
|
| #
7fa1e2e6 |
| 11-Jan-2019 |
Andrey Konovalov <[email protected]> |
kasan, arm64: remove redundant ARCH_SLAB_MINALIGN define
Defining ARCH_SLAB_MINALIGN in arch/arm64/include/asm/cache.h when KASAN is off is not needed, as it is defined in include/linux/slab.h under an ifndef.
Signed-off-by: Andrey Konovalov <[email protected]> Signed-off-by: Will Deacon <[email protected]>
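The generic fallback in include/linux/slab.h that makes the arm64 define redundant:

  #ifndef ARCH_SLAB_MINALIGN
  #define ARCH_SLAB_MINALIGN	__alignof__(unsigned long long)
  #endif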
|
| #
eb214f2d |
| 08-Jan-2019 |
Andrey Konovalov <[email protected]> |
kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
Link: http://lkml.kernel.org/r/52ddd881916bcc153a9924c154daacde78522227.1546540962.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <[email protected]> Suggested-by: Vincenzo Frascino <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Vincenzo Frascino <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
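The define added on the arm64 side instead of the kasan_cache_create() tweak (illustrative):

  #ifdef CONFIG_KASAN_SW_TAGS
  #define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
  #endif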
|