|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3 |
|
| #
62cffa49 |
| 12-Dec-2024 |
Ard Biesheuvel <[email protected]> |
arm64/mm: Override PARange for !LPA2 and use it consistently
When FEAT_LPA{,2} are not implemented, the ID_AA64MMFR0_EL1.PARange and TCR.IPS values corresponding with 52-bit physical addressing are reserved.
Setting the TCR.IPS field to 0b110 (52-bit physical addressing) has side effects, such as how the TTBRn_ELx.BADDR fields are interpreted, and so it is important that disabling FEAT_LPA2 (by overriding the ID_AA64MMFR0.TGran fields) also presents a PARange field consistent with that.
So limit the field to 48 bits unless LPA2 is enabled, and update existing references to use the override consistently.
Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs") Cc: [email protected] Signed-off-by: Ard Biesheuvel <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
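A rough C sketch of the clamping described above (PARange/IPS encodings per the Arm ARM: 0b0101 = 48-bit, 0b0110 = 52-bit; the helper and its plumbing are illustrative, not the kernel's actual override code):

    #define PARANGE_MASK  0xfUL  /* ID_AA64MMFR0_EL1.PARange, bits [3:0] */
    #define PARANGE_48    0x5UL  /* 0b0101: 48-bit physical addressing  */

    static unsigned long clamp_parange(unsigned long mmfr0, int lpa2)
    {
            unsigned long parange = mmfr0 & PARANGE_MASK;

            /* 0b0110 (52-bit) is reserved without FEAT_LPA{,2} */
            if (!lpa2 && parange > PARANGE_48)
                    parange = PARANGE_48;
            return parange;
    }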
|
|
Revision tags: v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3 |
|
| #
7bd8870a |
| 07-Oct-2024 |
Mark Rutland <[email protected]> |
arm64: asm-offsets: remove VMA_VM_*
The VMA_VM_MM definition is only used by the vma_vm_mm macro, which itself is unused. The VMA_VM_FLAGS definition isn't used anywhere.
Remove them all.
Signed-off-by: Mark Rutland <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
|
Revision tags: v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6 |
|
| #
080297be |
| 22-Apr-2024 |
Mark Rutland <[email protected]> |
arm64: defer clearing DAIF.D
For historical reasons we unmask debug exceptions in __cpu_setup(), but it's not necessary to unmask debug exceptions this early in the boot/idle entry paths. It would be better to unmask debug exceptions later in C code as this simplifies the current code and will make it easier to rework exception masking logic to handle non-DAIF bits in future (e.g. PSTATE.{ALLINT,PM}).
We started clearing DAIF.D in __cpu_setup() in commit:
2ce39ad15182604b ("arm64: debug: unmask PSTATE.D earlier")
At the time, we needed to ensure that DAIF.D was clear on the primary CPU before scheduling and preemption were possible, and chose to do this in __cpu_setup() so that this occurred in the same place for primary and secondary CPUs. As we cannot handle debug exceptions this early, we placed an ISB between initializing MDSCR_EL1 and clearing DAIF.D so that no exceptions should be triggered.
Subsequently we rewrote the return-from-{idle,suspend} paths to use __cpu_setup() in commit:
cabe1c81ea5be983 ("arm64: Change cpu_resume() to enable mmu early then access sleep_sp by va")
... which allowed for earlier use of the MMU and had the desirable property of using the same code to reset the CPU in the cold and warm boot paths. This introduced a bug: DAIF.D was clear while cpu_do_resume() restored MDSCR_EL1 and other control registers (e.g. breakpoint/watchpoint control/value registers), and so we could unexpectedly take debug exceptions.
We fixed that in commit:
744c6c37cc18705d ("arm64: kernel: Fix unmasked debug exceptions when restoring mdscr_el1")
... by having cpu_do_resume() use the `disable_dbg` macro to set DAIF.D before restoring MDSCR_EL1 and other control registers. This relies on DAIF.D being subsequently cleared again in cpu_resume().
Subsequently we reworked DAIF masking in commit:
0fbeb318754860b3 ("arm64: explicitly mask all exceptions")
... where we began enforcing a policy that DAIF.D being set implies all other DAIF bits are set, and so e.g. we cannot take an IRQ while DAIF.D is set. As part of this the use of `disable_dbg` in cpu_resume() was replaced with `disable_daif` for consistency with the rest of the kernel.
These days, there's no need to clear DAIF.D early within __cpu_setup():
* setup_arch() clears DAIF.DA before scheduling and preemption are possible on the primary CPU, avoiding the problem we were originally trying to work around.
Note: DAIF.{I,F} are cleared later, when interrupts are enabled for the first time.
* secondary_start_kernel() clears all DAIF bits before scheduling and preemption are possible on secondary CPUs.
Note: with pseudo-NMI, the PMR is initialized here before any DAIF bits are cleared. Similar will be necessary for the architectural NMI.
* cpu_suspend() restores all DAIF bits when returning from idle, ensuring that we don't unexpectedly leave DAIF.D clear or set.
Note: with pseudo-NMI, the PMR is initialized here before DAIF is cleared. Similar will be necessary for the architectural NMI.
This patch removes the unmasking of debug exceptions from __cpu_setup(), relying on the above locations to initialize DAIF. This allows some other cleanups:
* It is no longer necessary for cpu_resume() to explicitly mask debug (or other) exceptions, as it is always called with all DAIF bits set. Thus we drop the use of `disable_daif`.
* The `enable_dbg` macro is no longer used, and so is dropped.
* It is no longer necessary to have an ISB immediately after initializing MDSCR_EL1 in __cpu_setup(), and we can revert to relying on the context synchronization that occurs when the MMU is enabled between __cpu_setup() and the code which clears DAIF.D.
Comments are added to setup_arch() and secondary_start_kernel() to explain the initial unmasking of the DAIF bits.
Signed-off-by: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Mark Brown <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
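For reference, a minimal sketch of the C-side unmasking this relies on, using the <asm/daifflags.h> helpers (the exact call sites shown are illustrative):

    /* primary CPU, late in setup_arch(): Debug and SError unmasked,
     * IRQ/FIQ stay masked until interrupts are enabled properly */
    local_daif_restore(DAIF_PROCCTX_NOIRQ);

    /* secondary CPUs, in secondary_start_kernel(): all DAIF bits cleared */
    local_daif_restore(DAIF_PROCCTX);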
|
| #
3a2d2ca4 |
| 22-Apr-2024 |
Mark Rutland <[email protected]> |
arm64: assembler: update stale comment for disable_step_tsk
A comment in the disable_step_tsk macro refers to synchronising with enable_dbg, as historically the entry used enable_dbg to unmask debug exceptions after disabling single-stepping.
These days the unmasking happens in entry-common.c via local_daif_restore() or local_daif_inherit(), so the comment is stale. This logic is likely to change in future, so it would be best to avoid referring to those macros specifically.
Update the comment to take this into account, and describe it in terms of clearing DAIF.D so that it doesn't matter where this logic lives nor what it is called.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Mark Brown <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v6.9-rc5, v6.9-rc4 |
|
| #
b782e8d0 |
| 11-Apr-2024 |
Yicong Yang <[email protected]> |
arm64: arm_pmuv3: Correctly extract and check the PMUVer
Currently we're using "sbfx" to extract the PMUVer from ID_AA64DFR0_EL1 and skip the init/reset if no PMU is present, i.e. when the extracted PMUVer is negative or zero. However for PMUv3p8 the PMUVer will be 0b1000, so the PMUVer extracted by "sbfx" will always be negative and we'll skip the init/reset in __init_el2_debug/reset_pmuserenr_el0 unexpectedly.
So this patch uses "ubfx" instead of "sbfx" to extract the PMUVer. If the PMUVer is implementation defined (0b1111) or not implemented (0b0000) then skip the reset/init. Previously we'd also skip the init/reset if the PMUVer was higher than the version we know of (currently PMUv3p9); with this patch we'll only skip if the PMU is not implemented or implementation defined. This is consistent with how we probe the PMU in the driver with pmuv3_implemented().
Signed-off-by: Yicong Yang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
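The difference is easy to demonstrate in C: a signed 4-bit extract turns PMUVer 0b1000 (PMUv3p8) into a negative value, whereas an unsigned extract does not (PMUVer sits at ID_AA64DFR0_EL1 bits [11:8]; the standalone program below only models the two instructions' semantics):

    #include <stdio.h>

    static long sbfx4(unsigned long reg, int shift)          /* signed extract */
    {
            return (long)(reg << (64 - shift - 4)) >> 60;
    }

    static unsigned long ubfx4(unsigned long reg, int shift) /* unsigned extract */
    {
            return (reg >> shift) & 0xf;
    }

    int main(void)
    {
            unsigned long dfr0 = 0x8UL << 8;  /* PMUVer = 0b1000 (PMUv3p8) */

            printf("sbfx: %ld\n", sbfx4(dfr0, 8));   /* -8: wrongly "absent" */
            printf("ubfx: %lu\n", ubfx4(dfr0, 8));   /*  8: PMU present      */
            return 0;
    }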
|
|
Revision tags: v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7 |
|
| #
527db67a |
| 29-Feb-2024 |
Jinjie Ruan <[email protected]> |
arm64: Remove enable_daif macro
Since commit bb8e93a287a5 ("arm64: entry: convert SError handlers to C"), the enable_daif assembler macro is no longer used anywhere, so remove it.
Signed-off-by: Jinjie Ruan <[email protected]> Reviewed-by: Mark Brown <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
|
Revision tags: v6.8-rc6, v6.8-rc5 |
|
| #
9684ec18 |
| 14-Feb-2024 |
Ard Biesheuvel <[email protected]> |
arm64: Enable LPA2 at boot if supported by the system
Update the early kernel mapping code to take 52-bit virtual addressing into account based on the LPA2 feature. This is a bit more involved than LVA (which is supported with 64k pages only), given that some page table descriptor bits change meaning in this case.
To keep the handling in asm to a minimum, the initial ID map is still created with 48-bit virtual addressing, which implies that the kernel image must be loaded into 48-bit addressable physical memory. This is currently required by the boot protocol, even though we happen to support placement outside of that for LVA/64k based configurations.
Enabling LPA2 involves more than setting TCR.T1SZ to a lower value: there is also a DS bit in TCR that needs to be set, and which changes the meaning of bits [9:8] in all page table descriptors. Since we cannot enable DS and update every live page table descriptor at the same time, let's pivot through another temporary mapping. This avoids the need to reintroduce manipulations of the page tables with the MMU and caches disabled.
To permit the LPA2 feature to be overridden on the kernel command line, which may be necessary to work around silicon errata, or to deal with mismatched features on heterogeneous SoC designs, test for CPU feature overrides first, and only then enable LPA2.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
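A minimal sketch of the TCR programming this implies (T1SZ lives in bits [21:16] and DS at bit 59, per the Arm ARM; the helper itself is illustrative):

    #define TCR_T1SZ_SHIFT  16
    #define TCR_DS          (1UL << 59)  /* redefines PTE bits [9:8] */

    static unsigned long tcr_for_va_bits(unsigned long tcr, int va_bits,
                                         int lpa2)
    {
            tcr &= ~(0x3fUL << TCR_T1SZ_SHIFT);
            tcr |= (64UL - va_bits) << TCR_T1SZ_SHIFT; /* 12 for 52-bit VA */
            if (lpa2)
                    tcr |= TCR_DS;  /* set before any 52-bit PTEs go live */
            return tcr;
    }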
|
| #
925a0eb4 |
| 14-Feb-2024 |
Ard Biesheuvel <[email protected]> |
arm64: mm: Add LPA2 support to phys<->pte conversion routines
In preparation for enabling LPA2 support, introduce the mask values for converting between physical addresses and their representations in a page table descriptor.
While at it, move the pte_to_phys asm macro into its only user, so that we can freely modify it to use its input value register as a temp register.
For LPA2, the PTE_ADDR_MASK contains two non-adjacent sequences of zero bits, which means it no longer fits into the immediate field of an ordinary ALU instruction. So let's redefine it to include the bits in between as well, and only use it when converting from physical address to PTE representation, where the distinction does not matter. Also update the name accordingly to emphasize this.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
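Sketching the conversions in C under the LPA2 layout described above (output address bits [49:12] stay in place, while bits [51:50] are folded into descriptor bits [9:8]; the macro names here are illustrative):

    #define PTE_ADDR_LOW_LPA2   (((1UL << 50) - 1) & ~((1UL << 12) - 1)) /* [49:12] */
    #define PTE_ADDR_HIGH_LPA2  (0x3UL << 8)                             /* [9:8]   */

    static unsigned long phys_to_pte_lpa2(unsigned long phys)
    {
            /* PA bit 50 lands in pte bit 8: a right shift by 42 */
            return (phys & PTE_ADDR_LOW_LPA2) |
                   ((phys >> 42) & PTE_ADDR_HIGH_LPA2);
    }

    static unsigned long pte_to_phys_lpa2(unsigned long pte)
    {
            return (pte & PTE_ADDR_LOW_LPA2) |
                   ((pte & PTE_ADDR_HIGH_LPA2) << 42);
    }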
|
| #
68aec33f |
| 14-Feb-2024 |
Ard Biesheuvel <[email protected]> |
arm64: mm: Add feature override support for LVA
Add support for overriding the VARange field of the MMFR2 CPU ID register. This permits the associated LVA feature to be overridden early enough for the boot code that creates the kernel mapping to take it into account.
Given that LPA2 implies LVA, disabling the latter should disable the former as well. So override the ID_AA64MMFR0.TGran field of the current page size as well if it advertises support for 52-bit addressing.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
| #
84b04d3e |
| 14-Feb-2024 |
Ard Biesheuvel <[email protected]> |
arm64: kernel: Create initial ID map from C code
The asm code that creates the initial ID map is rather intricate and hard to follow. This is problematic because it makes adding support for things like LPA2 or WXN more difficult than necessary. Also, it is parameterized like the rest of the MM code to run with a configurable number of levels, which is rather pointless, given that all AArch64 CPUs implement support for 48-bit virtual addressing, and that many systems exist with DRAM located outside of the 39-bit addressable range, which is the only smaller VA size that is widely used, and we need additional tricks to make things work in that combination.
So let's bite the bullet, and rip out all the asm macros, and fiddly code, and replace it with a C implementation based on the newly added routines for creating the early kernel VA mappings. And while at it, create the initial ID map based on 48-bit virtual addressing as well, regardless of the number of configured levels for the kernel proper.
Note that this code may execute with the MMU and caches disabled, and is therefore not permitted to make unaligned accesses. This shouldn't generally happen in any case for the algorithm as implemented, but to be sure, let's pass -mstrict-align to the compiler just in case.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
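A heavily simplified single-level sketch of such a C builder (4 KiB granule and 2 MiB block descriptors assumed; the names and structure are illustrative, not the kernel's actual code):

    typedef unsigned long u64;

    #define PMD_SHIFT     21   /* 2 MiB blocks with a 4 KiB granule */
    #define PTRS_PER_PMD  512

    /* Identity-map [start, end) with block descriptors (OA == VA). */
    static void id_map_range(u64 *pmd_table, u64 start, u64 end, u64 prot)
    {
            u64 addr = start & ~((1UL << PMD_SHIFT) - 1);

            /* may run with the MMU off: aligned accesses only */
            for (; addr < end; addr += 1UL << PMD_SHIFT)
                    pmd_table[(addr >> PMD_SHIFT) % PTRS_PER_PMD] = addr | prot;
    }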
|
|
Revision tags: v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1 |
|
| #
3931261e |
| 11-Jan-2024 |
Ard Biesheuvel <[email protected]> |
arm64: fpsimd: Bring cond_yield asm macro in line with new rules
We no longer disable softirqs or preemption when doing kernel mode SIMD, and so for fully preemptible kernels, there is no longer a need to do any explicit yielding (and for non-preemptible kernels, yielding is not needed either).
That leaves voluntary preemption, where only explicit yield calls may result in a reschedule. To retain the existing behavior for such a configuration, we should take the new situation into account, where the preempt count will be zero rather than one, and yielding to pending softirqs is unnecessary.
Fixes: aefbab8e77eb ("arm64: fpsimd: Preserve/restore kernel mode NEON at context switch") Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
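In C terms, the check retained for voluntary preemption reduces to something like this (a sketch of the macro's intent; the real cond_yield is assembly):

    static inline bool cond_yield_needed(void)
    {
            /* kernel-mode SIMD no longer elevates the preempt count, so
             * it is 0 here, and pending softirqs need no special case */
            return !preempt_count() && need_resched();
    }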
|
|
Revision tags: v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3 |
|
| #
8fd7588f |
| 26-Nov-2023 |
Masahiro Yamada <[email protected]> |
arm64: replace <asm-generic/export.h> with <linux/export.h>
Commit ddb5cdbafaaa ("kbuild: generate KSYMTAB entries by modpost") deprecated <asm-generic/export.h>, which is now a wrapper of <linux/export.h>.
Replace #include <asm-generic/export.h> with #include <linux/export.h>.
Signed-off-by: Masahiro Yamada <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
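The change itself is mechanical; in each affected assembly source:

    -#include <asm-generic/export.h>   /* deprecated wrapper */
    +#include <linux/export.h>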
|
|
Revision tags: v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7 |
|
| #
d503d01e |
| 23-Nov-2022 |
Mark Brown <[email protected]> |
arm64/asm: Remove unused assembler DAIF save/restore macros
There are no longer any users of the assembler macros for saving and restoring DAIF so remove them to prevent further users being added, there are C equivalents available.
Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
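The C equivalents in question are the <asm/daifflags.h> helpers; typical usage looks like this:

    unsigned long flags = local_daif_save();  /* mask D, A, I and F */

    /* ... code that must not take any exception ... */

    local_daif_restore(flags);                /* restore the old mask */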
|
|
Revision tags: v6.1-rc6 |
|
| #
5b468dad |
| 17-Nov-2022 |
Anshuman Khandual <[email protected]> |
arm64/mm: Drop unused restore_ttbr1
restore_ttbr1 procedure is not used anywhere, hence just drop it.
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Anshuman Khandual <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v6.1-rc5 |
|
| #
a4ee2861 |
| 07-Nov-2022 |
Anshuman Khandual <[email protected]> |
arm64/mm: Simplify and document pte_to_phys() for 52 bit addresses
The pte_to_phys() assembly definition does multiple bit-field transformations to derive the physical address embedded inside a page table entry. Unlike its C counterpart, i.e. __pte_to_phys(), pte_to_phys() is not very apparent. This patch simplifies these operations via a new macro PTE_ADDR_HIGH_SHIFT, indicating how far the pte-encoded higher address bits need to be left shifted. While here, this also updates __pte_to_phys() and __phys_to_pte_val().
Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Brown <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: [email protected] Cc: [email protected] Reviewed-by: Ard Biesheuvel <[email protected]> Suggested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Anshuman Khandual <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
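The resulting shape of the C helpers, for the 52-bit/64K case where PA bits [51:48] are kept in pte bits [15:12] (so the high bits need a left shift by 36); the mask spellings below are a sketch:

    #define PTE_ADDR_LOW        (((1UL << 48) - 1) & ~((1UL << 16) - 1)) /* [47:16] */
    #define PTE_ADDR_HIGH       (0xfUL << 12)                            /* [15:12] */
    #define PTE_ADDR_HIGH_SHIFT 36

    static unsigned long __pte_to_phys_sketch(unsigned long pte)
    {
            return (pte & PTE_ADDR_LOW) |
                   ((pte & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
    }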
|
|
Revision tags: v6.1-rc4, v6.1-rc3, v6.1-rc2 |
|
| #
e8e51041 |
| 19-Oct-2022 |
Mark Brown <[email protected]> |
arm64/asm: Remove unused enable_da macro
We no longer use the enable_da macro, remove it to avoid having to think about maintaining it.
Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
|
Revision tags: v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6 |
|
| #
4c0bd995 |
| 12-Sep-2022 |
Mark Rutland <[email protected]> |
arm64: alternatives: have callbacks take a cap
Today, callback alternatives are special-cased within __apply_alternatives(), and are applied alongside patching for system capabilities as ARM64_NCAPS is not part of the boot_capabilities feature mask.
This special-casing is less than ideal. Giving special meaning to ARM64_NCAPS for this requires some structures and loops to use ARM64_NCAPS + 1 (AKA ARM64_NPATCHABLE), while others use ARM64_NCAPS. It's also not immediately clear that callback alternatives are only applied when applying alternatives for system-wide features.
To make this a bit clearer, this patch changes the way that callback alternatives are identified to remove the special-casing of ARM64_NCAPS, and to allow callback alternatives to be associated with a cpucap as with all other alternatives.
New cpucaps, ARM64_ALWAYS_BOOT and ARM64_ALWAYS_SYSTEM are added which are always detected alongside boot cpu capabilities and system capabilities respectively. All existing callback alternatives are made to use ARM64_ALWAYS_SYSTEM, and so will be patched at the same point during the boot flow as before.
Subsequent patches will make more use of these new cpucaps.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: James Morse <[email protected]> Cc: Joey Gouly <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
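Conceptually, the check during patching becomes a plain cpucap bitmap test (a sketch; the real struct alt_instr and patching loop differ):

    struct alt_instr_sketch {
            int cpucap;  /* callbacks now use e.g. ARM64_ALWAYS_SYSTEM */
            /* ... instruction offsets, lengths, callback pointer ... */
    };

    static int should_patch(const struct alt_instr_sketch *alt,
                            const unsigned long *caps)
    {
            /* no "cpucap == ARM64_NCAPS" special case any more */
            return (caps[alt->cpucap / 64] >> (alt->cpucap % 64)) & 1;
    }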
|
|
Revision tags: v6.0-rc5 |
|
| #
fcf37b38 |
| 10-Sep-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64DFR0_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
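The rename pattern, illustrated on a single field (the values are unchanged):

    #define ID_AA64DFR0_PMUVER_SHIFT      8   /* before */
    #define ID_AA64DFR0_EL1_PMUVER_SHIFT  8   /* after: full register name */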
|
| #
c0357a73 |
| 10-Sep-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture
The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1 does not align well with kernel conventions, using as it does a lot of MixedCase in various arrangements. In preparation for automatically generating the defines for this register rename the defines used to match what is in the architecture.
Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
| #
8f40bade |
| 05-Sep-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Standardise naming for ID_AA64MMFR2_EL1.VARange
The kernel refers to ID_AA64MMFR2_EL1.VARange as LVA. In preparation for automatic generation of defines for the system registers bring the naming used by the kernel in sync with that of DDI0487H.a. No functional change.
Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
| #
55adc08d |
| 05-Sep-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Add _EL1 into ID_AA64PFR0_EL1 definition names
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64PFR0_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
| #
a957c6be |
| 05-Sep-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Add _EL1 into ID_AA64MMFR2_EL1 definition names
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64MMFR2_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
| #
2d987e64 |
| 05-Sep-2022 |
Mark Brown <[email protected]> |
arm64/sysreg: Add _EL1 into ID_AA64MMFR0_EL1 definition names
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64MMFR0_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
|
|
Revision tags: v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4 |
|
| #
e4208e80 |
| 21-Jun-2022 |
Tong Tiangen <[email protected]> |
arm64: extable: move _cond_extable to _cond_uaccess_extable
Currently, we use _cond_extable for the cache maintenance uaccess helper caches_clean_inval_user_pou(), so this should be moved over to EX_TYPE_UACCESS_ERR_ZERO; rename _cond_extable to _cond_uaccess_extable for clarity.
Suggested-by: Mark Rutland <[email protected]> Signed-off-by: Tong Tiangen <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
|
| #
c0be8f18 |
| 24-Jun-2022 |
Ard Biesheuvel <[email protected]> |
arm64: head: factor out TTBR1 assignment into a macro
Create a macro load_ttbr1 to avoid having to repeat the same instruction sequence 3 times in a subsequent patch. No functional change intended.
Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
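In C-with-inline-asm terms the macro boils down to the following (a sketch; the real macro is pure assembly and also folds in the 52-bit TTBR offset handling):

    static inline void load_ttbr1_sketch(unsigned long pgd_phys)
    {
            asm volatile(
                    "msr ttbr1_el1, %0\n"
                    "isb"                /* synchronize the new tables */
                    : : "r" (pgd_phys) : "memory");
    }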
|