|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7 |
|
| #
2fd4e52e |
| 04-Jul-2024 |
Thorsten Blum <[email protected]> |
parisc: Use max() to calculate parisc_tlb_flush_threshold
Use max() to reduce 4 lines of code to a single line and improve its readability.
Fixes the following Coccinelle/coccicheck warning reported by minmax.cocci:
WARNING opportunity for max()
Signed-off-by: Thorsten Blum <[email protected]> Signed-off-by: Helge Deller <[email protected]>
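For illustration, the kind of cleanup minmax.cocci flags looks roughly like this; the variable names in the sketch are assumed, not copied from the patch:

    #include <linux/minmax.h>

    static unsigned long parisc_tlb_flush_threshold;

    static void update_tlb_flush_threshold(unsigned long threshold)
    {
            /* open-coded before:
             *      if (threshold > parisc_tlb_flush_threshold)
             *              parisc_tlb_flush_threshold = threshold;
             * max() states the same intent in a single line:
             */
            parisc_tlb_flush_threshold = max(threshold, parisc_tlb_flush_threshold);
    }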
|
|
Revision tags: v6.10-rc6, v6.10-rc5, v6.10-rc4 |
|
| #
72d95924 |
| 10-Jun-2024 |
John David Anglin <[email protected]> |
parisc: Try to fix random segmentation faults in package builds
PA-RISC systems with PA8800 and PA8900 processors have had problems with random segmentation faults for many years. Systems with earlier processors are much more stable.
Systems with PA8800 and PA8900 processors have a large L2 cache which needs per-page flushing for decent performance when a large range is flushed. The combined cache in these systems is also more sensitive to non-equivalent aliases than the caches in earlier systems.
The majority of random segmentation faults that I have looked at appear to be memory corruption in memory allocated using mmap and malloc.
My first attempt at fixing the random faults didn't work. On reviewing the cache code, I realized that there were two issues which the existing code didn't handle correctly. Both relate to cache move-in. Another issue is that the present bit in PTEs is racy.
1) PA-RISC caches have a mind of their own and they can speculatively load data and instructions for a page as long as there is an entry in the TLB for the page which allows move-in. TLBs are local to each CPU. Thus, the TLB entry for a page must be purged before flushing the page. This is particularly important on SMP systems.
In some cases the flush routine was called first and the TLB entry purged only afterwards, because the flush routine needed the TLB entry to do the flush.
2) My initial approach to fixing the random faults was to use flush_cache_page_if_present for all flush operations. This actually made things worse and led to a couple of hardware lockups. It finally dawned on me that some lines weren't being flushed because the pte check code was racy. This resulted in random inequivalent mappings to physical pages.
The __flush_cache_page tmpalias flush sets up its own TLB entry and it doesn't need the existing TLB entry. As long as we can find the pte pointer for the vm page, we can get the pfn and physical address of the page. We can also purge the TLB entry for the page before doing the flush. Further, __flush_cache_page uses a special TLB entry that inhibits cache move-in.
When switching page mappings, we need to ensure that lines are removed from the cache. It is not sufficient to just flush the lines to memory as they may come back.
This made it clear that we needed to implement all the required flush operations using tmpalias routines. This includes flushes for user and kernel pages.
After modifying the code to use tmpalias flushes, it became clear that the random segmentation faults were not fully resolved. The frequency of faults was worse on systems with a 64 MB L2 (PA8900) and systems with more CPUs (rp4440).
The warning that I added to flush_cache_page_if_present to detect pages that couldn't be flushed triggered frequently on some systems.
Helge and I looked at the pages that couldn't be flushed and found that the PTE was either cleared or for a swap page. Ignoring pages that were swapped out seemed okay but pages with cleared PTEs seemed problematic.
I looked at routines related to pte_clear and noticed ptep_clear_flush. The default implementation just flushes the TLB entry. However, it was obvious that on parisc we need to flush the cache page as well. If we don't flush the cache page, stale lines will be left in the cache and cause random corruption. Once a PTE is cleared, there is no way to find the physical address associated with the PTE and flush the associated page at a later time.
I implemented an updated change with a parisc specific version of ptep_clear_flush. It fixed the random data corruption on Helge's rp4440 and rp3440, as well as on my c8000.
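A simplified sketch of what such an override can look like; this is not the upstream code verbatim, and it leans on the generic ptep_get_and_clear(), flush_cache_page() and flush_tlb_page() helpers:

    /*
     * Sketch: capture the PTE before it is cleared so the pfn is still known,
     * purge the stale TLB entry so it cannot trigger move-in, then flush the
     * page (parisc does this via a tmpalias mapping) using the captured pfn.
     * Once the PTE is gone, the physical address would be unrecoverable.
     */
    static pte_t parisc_ptep_clear_flush(struct vm_area_struct *vma,
                                         unsigned long addr, pte_t *ptep)
    {
            pte_t pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);

            flush_tlb_page(vma, addr);
            if (pte_present(pte))
                    flush_cache_page(vma, addr, pte_pfn(pte));
            return pte;
    }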
At this point, I realized that I could restore the code where we only flush in flush_cache_page_if_present if the page has been accessed. However, for this, we also need to flush the cache when the accessed bit is cleared in ptep_clear_flush_young to keep things synchronized. The default implementation only flushes the TLB entry.
Other changes in this version are:
1) Implement a parisc-specific version of ptep_get (a sketch follows this list). It is identical to the default but needs to be defined in arch/parisc/include/asm/pgtable.h.
2) Revise the parisc implementation of ptep_test_and_clear_young to use ptep_get (READ_ONCE).
3) Drop the parisc implementation of ptep_get_and_clear. We can use the default.
4) Revise flush_kernel_vmap_range and invalidate_kernel_vmap_range to use a full data cache flush.
5) Move flush_cache_vmap and flush_cache_vunmap to cache.c. Handle the VM_IOREMAP case in flush_cache_vmap.
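Item 1 amounts to the usual READ_ONCE()-based accessor; a minimal sketch matching the generic definition:

    static inline pte_t ptep_get(pte_t *ptep)
    {
            return READ_ONCE(*ptep);
    }
    #define ptep_get ptep_get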
At this time, I don't know whether it is better to always flush when the PTE present bit is set or when both the accessed and present bits are set. The latter saves flushing pages that haven't been accessed, but we need to flush in ptep_clear_flush_young. It also needs a page table lookup to find the PTE pointer. The lpa instruction only needs a page table lookup when the PTE entry isn't in the TLB.
We don't atomically handle setting and clearing the _PAGE_ACCESSED bit. If we miss an update, we may miss a flush and the cache may get corrupted. Whether the current code is effectively atomic depends on process control.
When CONFIG_FLUSH_PAGE_ACCESSED is set to zero, the page will eventually be flushed when the PTE is cleared or in flush_cache_page_if_present. The _PAGE_ACCESSED bit is not used, so the problem is avoided.
The flush method can be selected using the CONFIG_FLUSH_PAGE_ACCESSED define in cache.c. The default is 0. I didn't see a large difference in performance.
Signed-off-by: John David Anglin <[email protected]> Cc: <[email protected]> # v6.6+ Signed-off-by: Helge Deller <[email protected]>
|
|
Revision tags: v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3 |
|
| #
913b9d44 |
| 31-Jan-2024 |
Helge Deller <[email protected]> |
parisc: BTLB: Fix crash when setting up BTLB at CPU bringup
When using hotplug and bringing up a 32-bit CPU, ask the firmware about the BTLB information to set up the static (block) TLB entries.
For that, write access to the static btlb_info struct is needed, but since it is marked __ro_after_init, the kernel segfaults due to missing write permissions.
Fix the crash by dropping the __ro_after_init annotation.
Fixes: e5ef93d02d6c ("parisc: BTLB: Initialize BTLB tables at CPU startup") Signed-off-by: Helge Deller <[email protected]> Cc: <[email protected]> # v6.6+
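The fix boils down to dropping the annotation, roughly as follows (treat the struct and variable names as assumptions of this sketch):

    /* before (faults on hotplug, the variable sits in a read-only mapping):
     *      static struct pdc_btlb_info btlb_info __ro_after_init;
     * after: keep it writable, since 32-bit CPU bringup can happen after init
     */
    static struct pdc_btlb_info btlb_info;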
|
|
Revision tags: v6.8-rc2, v6.8-rc1 |
|
| #
8b1d7239 |
| 20-Jan-2024 |
Helge Deller <[email protected]> |
parisc: Fix random data corruption from exception handler
The current exception handler implementation, which assists when accessing user space memory, may exhibit random data corruption if the compiler decides to use a different register than the specified register %r29 (defined in ASM_EXCEPTIONTABLE_REG) for the error code. If the compiler chooses another register, the fault handler will nevertheless store -EFAULT into %r29 and thus trash whatever this register is used for. Looking at the assembly, I found that this sometimes happens in emulate_ldd().
To solve the issue, the easiest solution would be if it were somehow possible to tell the fault handler which register holds the error code. Using %0 or %1 in the inline assembly is not possible as it will show up as e.g. %r29 (with the "%r" prefix), which the GNU assembler cannot convert to an integer.
This patch takes another, better and more flexible approach: We extend the __ex_table (which is out of the execution path) by one 32-bit word. In this word we tell the compiler to insert the assembler instruction "or %r0,%r0,%reg", where %reg references the register which the compiler chose for the error return code. In case of an access failure, the fault handler finds the __ex_table entry and can examine the opcode. The used register is encoded in the lowest 5 bits, and the fault handler can then store -EFAULT into this register.
Since we extend the __ex_table to 3 words we can't use the BUILDTIME_TABLE_SORT config option any longer.
Signed-off-by: Helge Deller <[email protected]> Cc: <[email protected]> # v6.0+
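A rough sketch of the fixup side, assuming the handler has already located the matching __ex_table entry and loaded its extra word into insn; in the "or %r0,%r0,%reg" encoding the target register sits in the lowest five bits:

    #include <linux/errno.h>
    #include <asm/ptrace.h>

    /* sketch only, not the literal upstream fixup code */
    static void fixup_set_efault(struct pt_regs *regs, unsigned int insn)
    {
            unsigned int err_reg = insn & 0x1f;     /* %reg from "or %r0,%r0,%reg" */

            if (err_reg)                            /* never write to %r0 */
                    regs->gr[err_reg] = -EFAULT;
    }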
|
| #
b9402e3b |
| 17-Jan-2024 |
Helge Deller <[email protected]> |
parisc: Check for valid stride size for cache flushes
Report if the calculated cache stride size is zero, otherwise the cache flushing routine will never finish and hang the machine. This can be reproduced with a testcase in qemu, where the firmware reports wrong cache values.
Signed-off-by: Helge Deller <[email protected]>
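One way such a sanity check can look; a sketch assuming the dcache_stride/icache_stride globals computed from the firmware-reported cache info:

    /* a zero stride would keep the flush loops from ever advancing */
    if (!dcache_stride || !icache_stride)
            panic("parisc: cache stride reported as zero, check firmware cache info");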
|
|
Revision tags: v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1 |
|
| #
e5ef93d0 |
| 07-Sep-2023 |
Helge Deller <[email protected]> |
parisc: BTLB: Initialize BTLB tables at CPU startup
Initialize the BTLB entries when starting up a CPU. Note that BTLBs are not available on 64-bit CPUs.
Signed-off-by: Helge Deller <[email protected]>
|
|
Revision tags: v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5 |
|
| #
e70bbca6 |
| 02-Aug-2023 |
Matthew Wilcox (Oracle) <[email protected]> |
parisc: implement the new page table range API
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Mike Rapoport (IBM) <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Helge Deller <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
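For context, the shape of the new range API; this sketch follows the generic set_ptes() loop (PFN_PTE_SHIFT advances the pfn encoded in the PTE value) rather than quoting the parisc patch itself:

    static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, unsigned int nr)
    {
            for (;;) {
                    set_pte(ptep, pte);             /* write one entry */
                    if (--nr == 0)
                            break;
                    ptep++;                         /* next slot ... */
                    pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT)); /* ... next pfn */
            }
    }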
|
|
Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4 |
|
| #
c6d96328 |
| 26-May-2023 |
Helge Deller <[email protected]> |
parisc: Add cacheflush() syscall
Signed-off-by: Helge Deller <[email protected]>
|
| #
6a2561f9 |
| 08-Jun-2023 |
Hugh Dickins <[email protected]> |
parisc: add pte_unmap() to balance get_ptep()
To keep balance in future, remember to pte_unmap() after a successful get_ptep(). And act as if flush_cache_pages() really needs a map there, to read the pfn before "unmapping", to be sure the page table is not removed.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Hugh Dickins <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: Aneesh Kumar K.V <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Claudio Imbrenda <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: John David Anglin <[email protected]> Cc: John Paul Adrian Glaubitz <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Mike Kravetz <[email protected]> Cc: Mike Rapoport (IBM) <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Qi Zheng <[email protected]> Cc: Russell King <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
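Sketched, the balanced pattern looks like this; the wrapper name is hypothetical, get_ptep() is the parisc helper in cache.c, and the pfn is read while the page-table page is still mapped:

    static void flush_one_user_page(struct vm_area_struct *vma, unsigned long addr)
    {
            pte_t *ptep = get_ptep(vma->vm_mm, addr);
            unsigned long pfn;

            if (!ptep)
                    return;
            pfn = pte_pfn(*ptep);   /* read before unmapping */
            pte_unmap(ptep);        /* balance the mapping taken inside get_ptep() */
            __flush_cache_page(vma, addr, PFN_PHYS(pfn));
    }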
|
| #
61e150fb |
| 24-May-2023 |
Helge Deller <[email protected]> |
parisc: Fix flush_dcache_page() for usage from irq context
Since at least kernel 6.1, flush_dcache_page() is called with IRQs disabled, e.g. from aio_complete().
But the current implementation for flush_dcache_page() on parisc unintentionally re-enables IRQs, which may lead to deadlocks.
Fix it by using xa_lock_irqsave() and xa_unlock_irqrestore() for the flush_dcache_mmap_*lock() macros instead.
Cc: [email protected] Cc: [email protected] # 5.18+ Signed-off-by: Helge Deller <[email protected]>
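A sketch of the irq-safe variants, assuming flush_dcache_mmap_lock() guards the mapping's i_pages xarray:

    #define flush_dcache_mmap_lock_irqsave(mapping, flags)          \
                    xa_lock_irqsave(&(mapping)->i_pages, flags)
    #define flush_dcache_mmap_unlock_irqrestore(mapping, flags)     \
                    xa_unlock_irqrestore(&(mapping)->i_pages, flags)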
|
|
Revision tags: v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5 |
|
| #
70fa2031 |
| 06-Sep-2022 |
Matthew Wilcox (Oracle) <[email protected]> |
parisc: remove mmap linked list from cache handling
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Liam R. Howlett <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Reviewed-by: Davidlohr Bueso <[email protected]> Tested-by: Yu Zhao <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: David Howells <[email protected]> Cc: SeongJae Park <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
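The replacement pattern, sketched; the function name is illustrative, while VMA_ITERATOR()/for_each_vma() are the maple-tree based API that supersedes walking vma->vm_next:

    void flush_user_cache_mm_example(struct mm_struct *mm)
    {
            struct vm_area_struct *vma;
            VMA_ITERATOR(vmi, mm, 0);

            for_each_vma(vmi, vma)
                    flush_cache_pages(vma, vma->vm_start, vma->vm_end);
    }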
|
|
Revision tags: v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8 |
|
| #
3fbc9a7d |
| 19-Jul-2022 |
Helge Deller <[email protected]> |
parisc: Drop pa_swapper_pg_lock spinlock
This spinlock was dropped with commit b7795074a046 ("parisc: Optimize per-pagetable spinlocks") in kernel v5.12.
Remove it to silence a sparse warning.
Signed-off-by: Helge Deller <[email protected]> Reported-by: kernel test robot <[email protected]> Cc: <[email protected]> # v5.12+
|
|
Revision tags: v5.19-rc7, v5.19-rc6 |
|
| #
39ade048 |
| 06-Jul-2022 |
Fabio M. De Francesco <[email protected]> |
highmem: Make __kunmap_{local,atomic}() take const void pointer
__kunmap_{local,atomic}() currently take pointers to void. However, this is semantically incorrect, since these functions do not change the memory their arguments point to.
Therefore, make this semantics explicit by modifying the __kunmap_{local,atomic}() prototypes to take pointers to const void.
As a side effect, compilers may produce more efficient code.
Acked-by: Andrew Morton <[email protected]> Acked-by: Helge Deller <[email protected]> # parisc Suggested-by: David Sterba <[email protected]> Suggested-by: Ira Weiny <[email protected]> Reviewed-by: Ira Weiny <[email protected]> Signed-off-by: Fabio M. De Francesco <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
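The prototype change itself, sketched:

    /* before:  void __kunmap_local(void *vaddr);
     * after: the function never writes through the pointer it is handed
     */
    void __kunmap_local(const void *vaddr);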
|
|
Revision tags: v5.19-rc5, v5.19-rc4, v5.19-rc3 |
|
| #
e9ed22e6 |
| 18-Jun-2022 |
John David Anglin <[email protected]> |
parisc: Fix flush_anon_page on PA8800/PA8900
Anonymous pages are allocated with the shared mappings colouring, SHM_COLOUR. Since the alias boundary on machines with PA8800 and PA8900 processors is unknown, flush_user_cache_page() might not flush all mappings of a shared anonymous page. Flushing the whole data cache flushes all mappings.
This won't fix all coherency issues with shared mappings but it seems to work well in practice. I haven't seen any random memory faults in almost a month on a rp3440 running as a debian buildd machine.
There is a small performance hit.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]> Cc: [email protected] # v5.18+
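Roughly, the fallback described above; flush_data_cache() and PageAnon() are existing kernel helpers, but the body is simplified to the essential decision:

    void flush_anon_page(struct vm_area_struct *vma, struct page *page,
                         unsigned long vmaddr)
    {
            if (!PageAnon(page))
                    return;
            /* SHM_COLOUR aliasing cannot be resolved per mapping on PA8800/PA8900,
             * so flush the whole data cache instead of one user mapping */
            flush_data_cache();
    }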
|
|
Revision tags: v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1 |
|
| #
1fc7db24 |
| 30-Mar-2022 |
John David Anglin <[email protected]> |
parisc: Don't enforce DMA completion order in cache flushes
The only place we need to ensure all outstanding cache coherence operations are complete is in invalidate_kernel_vmap_range. All parisc drivers synchronize DMA operations internally and do not call invalidate_kernel_vmap_range. We only need this for non-coherent I/O operations.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
|
| #
2de8b4cc |
| 16-May-2022 |
John David Anglin <[email protected]> |
parisc: Rewrite cache flush code for PA8800/PA8900
Originally, I was convinced that we needed to use tmpalias flushes everywhere, for both user and kernel flushes. However, when I modified flush_kernel_dcache_page_addr to use a tmpalias flush, my c8000 would crash quite early when booting.
The PDC returns alias values of 0 for the icache and dcache. This indicates that either the alias boundary is greater than 16MB or equivalent aliasing doesn't work. I modified the tmpalias code to make it easy to try alternate boundaries. I tried boundaries up to 128MB, but kernel tmpalias flushes still didn't work on the c8000.
This led me to conclude that tmpalias flushes don't work on PA8800 and PA8900 machines, and that we needed to flush directly using the virtual address of user and kernel pages. This is likely the major cause of instability on the c8000 and rp34xx machines.
Flushing user pages requires doing a temporary context switch as we have to flush pages that don't belong to the current context. Further, we have to deal with pages that aren't present. If a page isn't present, the flush instructions fault on every line.
Other code has been rearranged and simplified based on testing. For example, I introduced a flush_cache_dup_mm routine. flush_cache_mm and flush_cache_dup_mm differ in that flush_cache_mm calls purge_cache_pages and flush_cache_dup_mm calls flush_cache_pages. In some implementations, pdc is more efficient than fdc. Based on my testing, I don't believe there's any performance benefit on the c8000.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
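The flush_cache_mm()/flush_cache_dup_mm() split, sketched strictly from the text above; the *_for_mm helpers are hypothetical stand-ins for the real per-VMA loops:

    /* per the description: flush_cache_mm() takes the purge path (pdc), while
     * flush_cache_dup_mm() takes the flush path (fdc), which writes dirty
     * lines back before invalidating */
    void flush_cache_mm(struct mm_struct *mm)
    {
            purge_cache_pages_for_mm(mm);   /* hypothetical wrapper around purge_cache_pages() */
    }

    void flush_cache_dup_mm(struct mm_struct *mm)
    {
            flush_cache_pages_for_mm(mm);   /* hypothetical wrapper around flush_cache_pages() */
    }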
|
| #
ba0c0410 |
| 08-May-2022 |
Helge Deller <[email protected]> |
Revert "parisc: Increase parisc_cache_flush_threshold setting"
This reverts commit a58e9d0984e8dad53f17ec73ae3c1cc7f8d88151.
Triggers segfaults with 32-bit kernels on PA8500 machines.
Signed-off-b
Revert "parisc: Increase parisc_cache_flush_threshold setting"
This reverts commit a58e9d0984e8dad53f17ec73ae3c1cc7f8d88151.
Triggers segfaults with 32-bit kernels on PA8500 machines.
Signed-off-by: Helge Deller <[email protected]>
|
| #
beb48dfd |
| 26-Mar-2022 |
Helge Deller <[email protected]> |
parisc: Move CPU startup-related functions into .text section
If CONFIG_HOTPLUG_CPU is enabled, those functions will be run again after bootup. So they need to reside in the .text section.
Signed-off-by: Helge Deller <[email protected]>
|
| #
08a491b2 |
| 27-Mar-2022 |
Helge Deller <[email protected]> |
Revert "parisc: Fix invalidate/flush vmap routines"
This reverts commit 53d862fac4a09b9c56cca0433fa9de5732fd05a1.
It turned out that flush_kernel_vmap_range() is being called with interrupts disabl
Revert "parisc: Fix invalidate/flush vmap routines"
This reverts commit 53d862fac4a09b9c56cca0433fa9de5732fd05a1.
It turned out that flush_kernel_vmap_range() is being called with interrupts disabled. There's no way to flush the entire cache with interrupts disabled.
Signed-off-by: Helge Deller <[email protected]>
|
|
Revision tags: v5.17 |
|
| #
53d862fa |
| 19-Mar-2022 |
John David Anglin <[email protected]> |
parisc: Fix invalidate/flush vmap routines
Cache move-in for virtual accesses is controlled by the TLB. Thus, we must generally purge TLB entries before flushing. The flush routines must use TLB entries that inhibit cache move-in.
V2: Load physical address prior to flushing TLB. In flush_cache_page, flush TLB when flushing and purging.
V3: Don't flush when start equals end.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
|
| #
411fadd6 |
| 18-Mar-2022 |
Helge Deller <[email protected]> |
parisc: Avoid flushing cache on cache-less machines
Avoid flushing caches in __flush_cache_page() and __purge_cache_page() if the machine has no data or instruction caches, as is the case e.g. in qemu.
Signed-off-by: Helge Deller <[email protected]>
|
|
Revision tags: v5.17-rc8 |
|
| #
0a575497 |
| 12-Mar-2022 |
Helge Deller <[email protected]> |
parisc: Avoid calling SMP cache flush functions on cache-less machines
At least the qemu virtual machine does not provide D- and I-caches, so skip triggering SMP irqs to flush caches on such machines.
Further optimize the caching code by using static branches and making some functions static.
Signed-off-by: Helge Deller <[email protected]>
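A sketch of the static-branch guard described above; identifiers follow the style of cache.c, but treat the exact names as assumptions:

    #include <linux/jump_label.h>

    static DEFINE_STATIC_KEY_TRUE(parisc_has_dcache);

    void flush_data_cache(void)
    {
            if (!static_branch_likely(&parisc_has_dcache))
                    return;                 /* e.g. qemu reports no D-cache */
            on_each_cpu(flush_data_cache_local, NULL, 1);
    }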
|
| #
a58e9d09 |
| 11-Mar-2022 |
John David Anglin <[email protected]> |
parisc: Increase parisc_cache_flush_threshold setting
In testing the "Fix non-access data TLB cache flush faults" change, I noticed a significant improvement in glibc build and check times. This led me to investigate the parisc_cache_flush_threshold setting. It determines when we switch from line flushing to whole cache flushing.
It turned out that the parisc_cache_flush_threshold setting on mako and mako2 machines (PA8800 and PA8900 processors) was way too small. Adjusting this setting provided almost a factor two improvement in the glibc build and check time.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
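The decision the threshold drives, sketched with an illustrative (hypothetical) wrapper name:

    static void flush_dcache_range_example(unsigned long start, unsigned long size)
    {
            /* below the threshold, flushing line by line wins; above it,
             * flushing the whole data cache is cheaper than walking the range */
            if (size >= parisc_cache_flush_threshold)
                    flush_data_cache();
            else
                    flush_kernel_dcache_range_asm(start, start + size);
    }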
|
|
Revision tags: v5.17-rc7, v5.17-rc6, v5.17-rc5 |
|
| #
360bd6c6 |
| 17-Feb-2022 |
Helge Deller <[email protected]> |
parisc: Use constants to encode the space registers like SR_KERNEL
Use the provided space register constants instead of hardcoded values.
Signed-off-by: Helge Deller <[email protected]>
|
|
Revision tags: v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5 |
|
| #
df24e178 |
| 08-Dec-2021 |
Helge Deller <[email protected]> |
parisc: Add vDSO support
Add minimal vDSO support, which provides the signal trampoline helpers, but none of the userspace syscall helpers like time wrappers.
The big benefit of this vDSO implementation is that we no longer need an executable stack. PA-RISC is one of the last architectures where an executable stack was needed in order to implement the signal trampolines by putting assembly instructions on the stack, which then get executed. Instead, the kernel provides the relevant code in the vDSO page and only puts the pointers to the signal information on the stack.
By dropping the need for executable stacks we avoid running into issues with applications which want non-executable stacks for security reasons. Additionally, alternative stacks on memory areas without exec permissions are supported too.
This code is based on an initial implementation by Randolph Chung from 2006: https://lore.kernel.org/linux-parisc/[email protected]/
I did the porting and lifted the code to the current code base. Dave fixed the unwind code so that gdb and glibc are able to backtrace through the code. An additional patch to gdb will be pushed upstream by Dave.
Signed-off-by: Helge Deller <[email protected]> Signed-off-by: Dave Anglin <[email protected]> Cc: Randolph Chung <[email protected]> Signed-off-by: Helge Deller <[email protected]>
|