Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1

# 8f65aa32 | 22-May-2024 | Bang Li <[email protected]>
mm: implement update_mmu_tlb() using update_mmu_tlb_range()
Let's make update_mmu_tlb() simply a generic wrapper around update_mmu_tlb_range(). Only the latter can now be overridden by the architecture. We can now remove __HAVE_ARCH_UPDATE_MMU_TLB as well.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Bang Li <[email protected]> Acked-by: David Hildenbrand <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Lance Yang <[email protected]> Cc: Max Filippov <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
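
The resulting generic wrapper is tiny; a minimal sketch of the shape described above (see include/linux/pgtable.h for the authoritative definition):

  static inline void update_mmu_tlb(struct vm_area_struct *vma,
                                    unsigned long address, pte_t *ptep)
  {
          /* Single-entry special case of the range helper */
          update_mmu_tlb_range(vma, address, ptep, 1);
  }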

# 23b1b44e | 22-May-2024 | Bang Li <[email protected]>
mm: add update_mmu_tlb_range()
Patch series "Add update_mmu_tlb_range() to simplify code", v4.
This series of commits mainly adds update_mmu_tlb_range() to batch-update the TLB in an address range and implements update_mmu_tlb() using update_mmu_tlb_range().
After commit 19eaf44954df ("mm: thp: support allocation of anonymous multi-size THP"), we may need to batch-update the TLB of a certain address range by calling update_mmu_tlb() in a loop. Using update_mmu_tlb_range(), we can simplify the code and possibly avoid executing some unnecessary code on some architectures.
This patch (of 3):
Add update_mmu_tlb_range(), with which we can batch-update the TLB for an address range.
Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Bang Li <[email protected]> Acked-by: David Hildenbrand <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Lance Yang <[email protected]> Cc: Max Filippov <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
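
Illustrative caller-side effect, assuming a range of nr PTEs starting at addr/ptep (hypothetical variables, not a verbatim kernel hunk):

  /* Old pattern: one call per PTE in the range */
  for (i = 0; i < nr; i++)
          update_mmu_tlb(vma, addr + i * PAGE_SIZE, ptep + i);

  /* New pattern: a single ranged call */
  update_mmu_tlb_range(vma, addr, ptep, nr);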

Revision tags: v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8

# 533c67e6 | 27-Dec-2023 | Kinsey Ho <[email protected]>
mm/mglru: add dummy pmd_dirty()
Add dummy pmd_dirty() for architectures that don't provide it. This is similar to commit 6617da8fb565 ("mm: add dummy pmd_young() for architectures not having it").
Link: https://lkml.kernel.org/r/[email protected] Reported-by: kernel test robot <[email protected]> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Signed-off-by: Kinsey Ho <[email protected]> Suggested-by: Yu Zhao <[email protected]> Cc: Aneesh Kumar K.V <[email protected]> Cc: Donet Tom <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
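
The fallback follows the usual guarded-one-liner pattern; a sketch matching the description above, not necessarily the verbatim hunk:

  #ifndef pmd_dirty
  static inline int pmd_dirty(pmd_t pmd)
  {
          /* No PMD-level dirty tracking: report "not dirty" */
          return 0;
  }
  #endif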

Revision tags: v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5

# 15fa3e8e | 02-Aug-2023 | Matthew Wilcox (Oracle) <[email protected]>
mips: implement the new page table range API
Rename _PFN_SHIFT to PFN_PTE_SHIFT. Convert a few places to call set_pte() instead of set_pte_at(). Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Mike Rapoport (IBM) <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
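
For flavor, the generic fallback of set_ptes() boils down to roughly the following (a sketch; the MIPS version layers its cache handling on top):

  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                              pte_t *ptep, pte_t pte, unsigned int nr)
  {
          for (;;) {
                  set_pte(ptep, pte);
                  if (--nr == 0)
                          break;
                  ptep++;
                  /* Consecutive PTEs map consecutive pages: advance
                   * the PFN encoded in the PTE value. */
                  pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
          }
  }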

Revision tags: v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7

# 2f0584f3 | 13-Jun-2023 | Rick Edgecombe <[email protected]>
mm: Rename arch pte_mkwrite()'s to pte_mkwrite_novma()
The x86 Shadow stack feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which requires some core mm changes to function properly.
One of these unusual properties is that shadow stack memory is writable, but only in limited ways. These limits are applied via a specific PTE bit combination. Nevertheless, the memory is writable, and core mm code will need to apply the writable permissions in the typical paths that call pte_mkwrite(). The goal is to make pte_mkwrite() take a VMA, so that the x86 implementation of it can know whether to create regular writable or shadow stack mappings.
But there are a couple of challenges to this. Modifying the signatures of each arch pte_mkwrite() implementation would be error prone because some are generated with macros and would need to be re-implemented. Also, some pte_mkwrite() callers operate on kernel memory without a VMA.
So this can be done in a three-step process. First, pte_mkwrite() can be renamed to pte_mkwrite_novma() in each arch, with a generic pte_mkwrite() added that just calls pte_mkwrite_novma(). Next, callers without a VMA can be moved to pte_mkwrite_novma(). And lastly, pte_mkwrite() and all callers can be changed to take/pass a VMA.
Start the process by renaming pte_mkwrite() to pte_mkwrite_novma() and adding the pte_mkwrite() wrapper in linux/pgtable.h. Apply the same pattern for pmd_mkwrite(). Since not all archs have a pmd_mkwrite_novma(), create a new arch config HAS_HUGE_PAGE that can be used to tell if pmd_mkwrite() should be defined. Otherwise, in the !HAS_HUGE_PAGE cases, the compiler would not be able to find pmd_mkwrite_novma().
No functional change.
Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Rick Edgecombe <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Reviewed-by: Mike Rapoport (IBM) <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: David Hildenbrand <[email protected]> Link: https://lore.kernel.org/lkml/CAHk-=wiZjSu7c9sFYZb3q04108stgHff2wfbokGCCgW7riz+8Q@mail.gmail.com/ Link: https://lore.kernel.org/all/20230613001108.3040476-2-rick.p.edgecombe%40intel.com
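
Step one of the plan, as code (a sketch of the generic wrapper; the VMA parameter only arrives in the final step):

  #ifndef pte_mkwrite
  static inline pte_t pte_mkwrite(pte_t pte)
  {
          return pte_mkwrite_novma(pte);
  }
  #endif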

Revision tags: v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2

# 99c29133 | 06-Mar-2023 | Gerald Schaefer <[email protected]>
mm: add PTE pointer parameter to flush_tlb_fix_spurious_fault()
s390 can do more fine-grained handling of spurious TLB protection faults when the PTE pointer is also available.
Therefore, pass on the PTE pointer to flush_tlb_fix_spurious_fault() as an additional parameter.
This will add no functional change to other architectures, but those with private flush_tlb_fix_spurious_fault() implementations need to be made aware of the new parameter.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Gerald Schaefer <[email protected]> Reviewed-by: Alexander Gordeev <[email protected]> Acked-by: Catalin Marinas <[email protected]> [arm64] Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: David Hildenbrand <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Borislav Petkov (AMD) <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
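
The interface change in sketch form: the generic fallback gains the PTE pointer but ignores it, which is how other architectures stay unaffected:

  #ifndef flush_tlb_fix_spurious_fault
  #define flush_tlb_fix_spurious_fault(vma, address, ptep) \
          flush_tlb_page(vma, address)
  #endif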

Revision tags: v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4

# 950fe885 | 13-Jan-2023 | David Hildenbrand <[email protected]>
mm: remove __HAVE_ARCH_PTE_SWP_EXCLUSIVE
__HAVE_ARCH_PTE_SWP_EXCLUSIVE is now supported by all architectures that support swp PTEs, so let's drop it.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Signed-off-by: Andrew Morton <[email protected]>

# 83d3b2b4 | 13-Jan-2023 | David Hildenbrand <[email protected]>
mips/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
On 64bit, steal one bit from the type. Generic MM currently only uses 5 bits for the type (MAX_SWAPFILES_SHIFT), so the stolen bit is effectively unused.
On 32-bit, we're able to locate unused bits. As the 32-bit PTE layout is very confusing, document it a bit better.
While at it, mask the type in __swp_entry()/mk_swap_pte().
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
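
The helpers this adds follow the usual bit-twiddling pattern; a sketch, where _PAGE_SWP_EXCLUSIVE stands for whichever spare software bit the 32-bit or 64-bit layout provides:

  static inline int pte_swp_exclusive(pte_t pte)
  {
          return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
  }

  static inline pte_t pte_swp_mkexclusive(pte_t pte)
  {
          pte_val(pte) |= _PAGE_SWP_EXCLUSIVE;
          return pte;
  }

  static inline pte_t pte_swp_clear_exclusive(pte_t pte)
  {
          pte_val(pte) &= ~_PAGE_SWP_EXCLUSIVE;
          return pte;
  }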

Revision tags: v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8

# 6617da8f | 30-Nov-2022 | Juergen Gross <[email protected]>
mm: add dummy pmd_young() for architectures not having it
In order to avoid #ifdeffery add a dummy pmd_young() implementation as a fallback. This is required for the later patch "mm: introduce arch_has_hw_nonleaf_pmd_young()".
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Juergen Gross <[email protected]> Acked-by: Yu Zhao <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Sander Eikelenboom <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Andrew Morton <[email protected]>

Revision tags: v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2

# e025ab84 | 18-Oct-2022 | Kefeng Wang <[email protected]>
mm: remove kern_addr_valid() completely
Most architectures (except arm64/x86/sparc) simply return 1 for kern_addr_valid(), which is only used in read_kcore(); read_kcore() in turn calls copy_from_kernel_nofault(), which already checks whether the address is a valid kernel address. So, as there is no need for kern_addr_valid(), let's remove it.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> [m68k] Acked-by: Heiko Carstens <[email protected]> [s390] Acked-by: Christoph Hellwig <[email protected]> Acked-by: Helge Deller <[email protected]> [parisc] Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Guo Ren <[email protected]> [csky] Acked-by: Catalin Marinas <[email protected]> [arm64] Cc: Alexander Gordeev <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Anton Ivanov <[email protected]> Cc: <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Dave Hansen <[email protected]> Cc: David S. Miller <[email protected]> Cc: Dinh Nguyen <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: James Bottomley <[email protected]> Cc: Johannes Berg <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Xuerui Wang <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>

Revision tags: v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7

# 499c1dd9 | 11-Jul-2022 | Anshuman Khandual <[email protected]>
mips/mm: enable ARCH_HAS_VM_GET_PAGE_PROT
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks up a private and static protection_map[] array. Subsequently, all __SXXX and __PXXX macros, which are no longer needed, can be dropped.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dinh Nguyen <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Sam Ravnborg <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
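
The exported implementation is a table lookup keyed on the access-related vm_flags; roughly (a sketch of the macro in include/linux/mm.h):

  #define DECLARE_VM_GET_PAGE_PROT                                  \
  pgprot_t vm_get_page_prot(unsigned long vm_flags)                 \
  {                                                                 \
          return protection_map[vm_flags &                          \
                  (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];      \
  }                                                                 \
  EXPORT_SYMBOL(vm_get_page_prot);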

Revision tags: v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5

# 177bd2a9 | 14-Feb-2022 | Matthew Wilcox (Oracle) <[email protected]>
mips: Make pmd_pfn() available in all configurations
Whether or not the platform supports PMD-sized pages, we need to provide pmd_pfn() for an upcoming patch.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
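
The helper itself is a one-liner; roughly (a sketch, with the PFN encoded above MIPS's _PFN_SHIFT):

  static inline unsigned long pmd_pfn(pmd_t pmd)
  {
          return pmd_val(pmd) >> _PFN_SHIFT;
  }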

Revision tags: v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1

# f69fa4c8 | 02-Nov-2021 | Zhaolong Zhang <[email protected]>
mips: fix HUGETLB function without THP enabled
ltp test futex_wake04 without THP enabled leads to the backtrace below:

[<ffffffff80a03728>] BUG+0x0/0x8
[<ffffffff80a0624c>] internal_get_user_pages_fast+0x81c/0x820
[<ffffffff8093ac18>] get_futex_key+0xa0/0x480
[<ffffffff8093b074>] futex_wait_setup+0x7c/0x1a8
[<ffffffff8093b2c0>] futex_wait+0x120/0x228
[<ffffffff8093dbe8>] do_futex+0x140/0xbd8
[<ffffffff8093e78c>] sys_futex+0x10c/0x1c0
[<ffffffff808703d0>] syscall_common+0x34/0x58
Move pmd_write() and pmd_page() from TRANSPARENT_HUGEPAGE scope to MIPS_HUGE_TLB_SUPPORT scope, because both THP and HUGETLB will need them.
Signed-off-by: Zhaolong Zhang <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
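
The gist of the fix as a guard change (illustrative, not the full diff):

  /* was: #ifdef CONFIG_TRANSPARENT_HUGEPAGE */
  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
  static inline int pmd_write(pmd_t pmd)
  {
          return !!(pmd_val(pmd) & _PAGE_WRITE);
  }
  /* ... pmd_page() moves likewise ... */
  #endif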

Revision tags: v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5

# a2fa4ced | 21-Jan-2021 | Yanteng Si <[email protected]>
MIPS: mm: Add prototype for function __update_cache
This commit adds a prototype to fix the following error at W=1:
arch/mips/mm/cache.c:129:6: error: no previous prototype for '__update_cache' [-Werror=missing-prototypes]
Signed-off-by: Yanteng Si <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
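
The added declaration is roughly the following, so that the definition in cache.c has a previous prototype:

  extern void __update_cache(unsigned long address, pte_t pte);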

Revision tags: v5.11-rc4

# cabcff9b | 14-Jan-2021 | Alexander Lobakin <[email protected]>
MIPS: pgtable: fix -Wshadow in asm/pgtable.h
Solves the following repetitive warning when building with -Wshadow:
In file included from ./include/linux/pgtable.h:6,
                 from ./include/linux/mm.h:33,
                 from ./include/linux/dax.h:6,
                 from ./include/linux/mempolicy.h:11,
                 from kernel/fork.c:34:
./arch/mips/include/asm/mmu_context.h: In function ‘switch_mm’:
./arch/mips/include/asm/pgtable.h:97:16: warning: declaration of ‘flags’ shadows a previous local [-Wshadow]
   97 |  unsigned long flags; \
      |                ^~~~~
./arch/mips/include/asm/mmu_context.h:162:2: note: in expansion of macro ‘htw_stop’
  162 |  htw_stop();
      |  ^~~~~~~~
In file included from kernel/fork.c:102:
./arch/mips/include/asm/mmu_context.h:159:16: note: shadowed declaration is here
  159 |  unsigned long flags;
      |                ^~~~~
Signed-off-by: Alexander Lobakin <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
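
The standard cure for -Wshadow in a statement macro is to give the macro's local a name callers won't use; an illustrative skeleton of htw_stop() (the actual hardware-walker manipulation is elided):

  #define htw_stop()                                              \
  do {                                                            \
          unsigned long __flags;                                  \
                                                                  \
          local_irq_save(__flags);                                \
          /* ... stop the hardware page table walker ... */       \
          local_irq_restore(__flags);                             \
  } while (0)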

Revision tags: v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6

# 41bb1a9b | 27-Nov-2020 | Thomas Bogendoerfer <[email protected]>
MIPS: mm: Add back define for PAGE_SHARED
There are still some drivers using the PAGE_SHARED constant, so put it back.
Reported-by: kernel test robot <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>

Revision tags: v5.10-rc5, v5.10-rc4

# 0df162e1 | 13-Nov-2020 | Thomas Bogendoerfer <[email protected]>
MIPS: mm: Clean up setup of protection map
The protection map difference between RIXI and non-RIXI CPUs is the usage of _PAGE_NO_EXEC and _PAGE_NO_READ. Both already take care of cpu_has_rixi while setting up the page bits, so we need just one protection map setup and can drop the now-unused (and broken for RIXI) PAGE_* defines.
Signed-off-by: Thomas Bogendoerfer <[email protected]>

Revision tags: v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9

# 9b722483 | 05-Oct-2020 | Thomas Bogendoerfer <[email protected]>
MIPS: pgtable: Remove unused PAGE_USERIO define
There are no users of PAGE_USERIO.
Signed-off-by: Thomas Bogendoerfer <[email protected]>

Revision tags: v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1

# ca5999fd | 09-Jun-2020 | Mike Rapoport <[email protected]>
mm: introduce include/linux/pgtable.h
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions.
Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h.
Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>

# 86ec2da0 | 03-Jun-2020 | Anshuman Khandual <[email protected]>
mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid()
pmd_present() is expected to test positive after pmd_mknotpresent(), as the PMD entry still points to a valid huge page in memory. pmd_mknotpresent() only invalidates the given PMD entry from the MMU's perspective while still holding on to the valid huge page referred to by pmd_page(), so the pmd_present() test keeps passing. This creates the following situation, which is counter-intuitive.
[pmd_present(pmd_mknotpresent(pmd)) = true]
This renames pmd_mknotpresent() as pmd_mkinvalid(), reflecting the helper's functionality more accurately, while changing the above-mentioned situation as follows. This does not create any functional change.
[pmd_present(pmd_mkinvalid(pmd)) = true]
This is not applicable for platforms that define their own pmdp_invalidate() via __HAVE_ARCH_PMDP_INVALIDATE. The suggestion for the renaming came up during a previous discussion here.
https://patchwork.kernel.org/patch/11019637/
[[email protected]: change pmd_mknotvalid() to pmd_mkinvalid() per Will] Link: http://lkml.kernel.org/r/[email protected] Suggested-by: Catalin Marinas <[email protected]> Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>

Revision tags: v5.7

# 273b5fa0 | 27-May-2020 | Bibo Mao <[email protected]>
MIPS: mm: add page valid judgement in function pte_modify
If the original PTE has the _PAGE_ACCESSED bit set and the new PTE does not have the _PAGE_NO_READ bit set, we can set the _PAGE_SILENT_READ bit to enable the page valid bit.
Signed-off-by: Bibo Mao <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
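
A rough sketch of the added check (bit names from MIPS pgtable.h; not the verbatim diff):

  static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
  {
          pte_val(pte) &= _PAGE_CHG_MASK;
          pte_val(pte) |= pgprot_val(newprot) & ~_PAGE_CHG_MASK;
          /* Added: already-accessed and readable => valid now */
          if ((pte_val(pte) & _PAGE_ACCESSED) &&
              !(pte_val(pte) & _PAGE_NO_READ))
                  pte_val(pte) |= _PAGE_SILENT_READ;
          return pte;
  }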

# 44bf431b | 27-May-2020 | Bibo Mao <[email protected]>
mm/memory.c: Add memory read privilege on page fault handling
Here, add a pte_sw_mkyoung() function to make pages readable on the MIPS platform during page fault handling. This patch improves page fault latency by about 10% on my MIPS machine with the lmbench lat_pagefault case.
It is a no-op function on other arches, so there is no negative influence on those architectures.
Signed-off-by: Bibo Mao <[email protected]> Acked-by: Andrew Morton <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
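
The hook's generic side is a no-op, which is why other architectures see no change; a sketch:

  #ifndef pte_sw_mkyoung
  static inline pte_t pte_sw_mkyoung(pte_t pte)
  {
          /* MIPS overrides this to set its software accessed/valid
           * bits at fault time; everyone else returns pte as-is. */
          return pte;
  }
  #endif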

# 7df67697 | 27-May-2020 | Bibo Mao <[email protected]>
mm/memory.c: Update local TLB if PTE entry exists
If two threads concurrently fault at the same page, the thread that won the race updates the PTE and its local TLB. For now, the other thread gives up, simply does nothing, and continues.
It could happen that this second thread triggers another fault, whereby it only updates its local TLB while handling the fault. Instead of triggering another fault, let's directly update the local TLB of the second thread. The function update_mmu_tlb() is used here to update the local TLB on the second thread; it is defined as empty on other arches.
Signed-off-by: Bibo Mao <[email protected]> Acked-by: Andrew Morton <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
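
A sketch of the hook as introduced: an empty default unless the architecture opts in (MIPS points it at its local TLB update):

  #ifndef __HAVE_ARCH_UPDATE_MMU_TLB
  static inline void update_mmu_tlb(struct vm_area_struct *vma,
                                    unsigned long address, pte_t *ptep)
  {
  }
  #endif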

# 4dd7683e | 27-May-2020 | Bibo Mao <[email protected]>
MIPS: Do not flush tlb page when updating PTE entry
It is not necessary to flush a TLB page on all CPUs if a suitable PTE entry already exists during page fault handling; just updating the TLB is fine.
Here, redefine flush_tlb_fix_spurious_fault as empty on MIPS systems.
Signed-off-by: Bibo Mao <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
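
The MIPS override described above, roughly (note the PTE pointer parameter only arrives with the later 2023 interface change):

  #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)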

Revision tags: v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4

# 2971317a | 29-Apr-2020 | Guoyun Sun <[email protected]>
mips/mm: Add page soft dirty tracking
The user space checkpoint and restart tool (CRIU) needs page changes to be soft-tracked. This allows doing a pre-checkpoint and then dumping only the touched pages.
Signed-off-by: Guoyun Sun <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
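
Flavor of what soft-dirty support adds (a sketch; the real patch also covers swap PTEs and, with THP, PMDs):

  static inline int pte_soft_dirty(pte_t pte)
  {
          return pte_val(pte) & _PAGE_SOFT_DIRTY;
  }

  static inline pte_t pte_mksoft_dirty(pte_t pte)
  {
          pte_val(pte) |= _PAGE_SOFT_DIRTY;
          return pte;
  }

  static inline pte_t pte_clear_soft_dirty(pte_t pte)
  {
          pte_val(pte) &= ~_PAGE_SOFT_DIRTY;
          return pte;
  }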