Revision tags: v6.15, v6.15-rc7
# 97dfbbd1 | 14-May-2025 | Matthew Wilcox (Oracle) <[email protected]>
highmem: add folio_test_partial_kmap()
In commit c749d9b7ebbc ("iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP"), Hugh correctly noted that if KMAP_LOCAL_FORCE_MAP is enabled, we must limit ourselves to PAGE_SIZE bytes per call to kmap_local(). The same problem exists in memcpy_from_folio(), memcpy_to_folio(), folio_zero_tail(), folio_fill_tail() and memcpy_from_file_folio(), so add folio_test_partial_kmap() to do this more succinctly.
Link: https://lkml.kernel.org/r/[email protected] Fixes: 00cdf76012ab ("mm: add memcpy_from_file_folio()") Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Cc: Al Viro <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
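For orientation, here is a minimal sketch of the pattern the new helper enables in the folio copy routines; it is modelled on memcpy_from_folio() but is not the exact diff, and it assumes the usual kmap_local_folio()/kunmap_local() API, capping each mapping window at one page when the folio cannot be kmapped in one go.

    /* Sketch only: copy from a folio in at most page-sized chunks when the
     * folio needs a partial kmap (highmem or KMAP_LOCAL_FORCE_MAP). */
    static void copy_from_folio_sketch(char *to, struct folio *folio,
                                       size_t offset, size_t len)
    {
            do {
                    const char *from = kmap_local_folio(folio, offset);
                    size_t chunk = len;

                    if (folio_test_partial_kmap(folio) &&
                        chunk > PAGE_SIZE - offset_in_page(offset))
                            chunk = PAGE_SIZE - offset_in_page(offset);
                    memcpy(to, from, chunk);
                    kunmap_local(from);

                    to += chunk;
                    offset += chunk;
                    len -= chunk;
            } while (len > 0);
    }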
Revision tags: v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7
# b98072af | 08-Jan-2025 | Yu Zhao <[email protected]>
mm/hugetlb_vmemmap: fix memory loads ordering
Using x86_64 as an example, for a 32KB struct page[] area describing a 2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped by PTE 0, and at the same time change the permission from r/w to r/o;
3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB to 4KB.
However, the following race can happen due to improper ordering of memory loads:

      CPU 1 (HVO)                     CPU 2 (speculative PFN walker)

      page_ref_freeze()
      synchronize_rcu()
                                      rcu_read_lock()
                                      page_is_fake_head() is false
      vmemmap_remap_pte()
      XXX: struct page[] becomes r/o

      page_ref_unfreeze()
                                      page_ref_count() is not zero

                                      atomic_add_unless(&page->_refcount)
                                      XXX: try to modify r/o struct page[]
Specifically, page_is_fake_head() must be ordered after page_ref_count() on CPU 2 so that it can only return true for this case, to avoid the later attempt to modify r/o struct page[].
This patch adds the missing memory barrier and makes the tests on page_is_fake_head() and page_ref_count() done in the proper order.
Link: https://lkml.kernel.org/r/[email protected] Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers") Signed-off-by: Yu Zhao <[email protected]> Reported-by: Will Deacon <[email protected]> Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/ Reviewed-by: David Hildenbrand <[email protected]> Reviewed-by: Muchun Song <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: Mateusz Guzik <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
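To make the required ordering concrete, here is an illustrative sketch of the constraint on the speculative-walker side; this is not the actual diff, only the shape of the fix (load the refcount first, then test for a fake head, with a read barrier between the two loads).

    /* Illustration only: the walker must observe _refcount before it
     * decides whether the page is a fake head. */
    static bool walker_may_take_ref(const struct page *page)
    {
            if (!page_ref_count(page))      /* frozen or freed: back off */
                    return false;
            smp_rmb();                      /* order the two loads */
            return !page_is_fake_head(page);
    }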
# 5f5ee52d | 18-Mar-2025 | Jinjiang Tu <[email protected]>
mm/hwpoison: introduce folio_contain_hwpoisoned_page() helper
Patch series "mm/vmscan: don't try to reclaim hwpoison folio".
Fix a bug during memory reclaim when the folio is hwpoisoned.
This patch (of 2):
Introduce the helper folio_contain_hwpoisoned_page() to check whether the entire folio is hwpoisoned or contains hwpoisoned pages.
Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Jinjiang Tu <[email protected]> Acked-by: Miaohe Lin <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Kefeng Wang <[email protected]> Cc: Nanyong Sun <[email protected]> Cc: Naoya Horiguchi <[email protected]> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <[email protected]>
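The helper is presumably a thin wrapper over the existing folio flag tests; a sketch under that assumption:

    /* Sketch: true if the folio itself is hwpoisoned, or it is a large
     * folio with at least one hwpoisoned subpage. */
    static inline bool folio_contain_hwpoisoned_page(struct folio *folio)
    {
            return folio_test_hwpoison(folio) ||
                   (folio_test_large(folio) && folio_test_has_hwpoisoned(folio));
    }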
# 6af8cb80 | 03-Mar-2025 | David Hildenbrand <[email protected]>
mm/rmap: basic MM owner tracking for large folios (!hugetlb)
For small folios, we traditionally use the mapcount to decide whether it was "certainly mapped exclusively" by a single MM (mapcount == 1) or whether it "maybe mapped shared" by multiple MMs (mapcount > 1). For PMD-sized folios that were PMD-mapped, we were able to use a similar mechanism (single PMD mapping), but for PTE-mapped folios and in the future folios that span multiple PMDs, this does not work.
So we need a different mechanism to handle large folios. Let's add a new mechanism to detect whether a large folio is "certainly mapped exclusively", or whether it is "maybe mapped shared".
We'll use this information next to optimize CoW reuse for PTE-mapped anonymous THP, and to convert folio_likely_mapped_shared() to folio_maybe_mapped_shared(), independent of per-page mapcounts.
For each large folio, we'll have two slots, whereby a slot stores:
 (1) an MM id: unique id assigned to each MM
 (2) a per-MM mapcount

If a slot is unoccupied, it can be taken by the next MM that maps a folio page.
In addition, we'll remember the current state -- "mapped exclusively" vs. "maybe mapped shared" -- and use a bit spinlock to sync on updates and to reduce the total number of atomic accesses on updates. In the future, it might be possible to squeeze a proper spinlock into "struct folio". For now, keep it simple, as we only require the whole thing with THP, which is incompatible with RT.
As we have to squeeze this information into the "struct folio" of even folios of order-1 (2 pages), and we generally want to reduce the required metadata, we'll assign each MM a unique ID that can fit into an int. In total, we can squeeze everything into 4x int (2x long) on 64bit.
32bit support is a bit challenging, because we only have 2x long == 2x int in order-1 folios. But we can make it work for now, because we neither expect many MMs nor very large folios on 32bit.
We will reliably detect folios as "mapped exclusively" vs. "mapped shared" as long as only two MMs map pages of a folio at one point in time -- for example with fork() and short-lived child processes, or with apps that hand over state from one instance to another.
As soon as three MMs are involved at the same time, we might detect "maybe mapped shared" although the folio is "mapped exclusively".
Example 1:
(1) App1 faults in a (shmem/file-backed) folio page -> Tracked as MM0
(2) App2 faults in a folio page -> Tracked as MM1
(4) App1 unmaps all folio pages
-> We will detect "mapped exclusively".
Example 2:
(1) App1 faults in a (shmem/file-backed) folio page -> Tracked as MM0
(2) App2 faults in a folio page -> Tracked as MM1
(3) App3 faults in a folio page -> No slot available, tracked as "unknown"
(4) App1 and App2 unmap all folio pages
-> We will detect "maybe mapped shared".
Make use of __always_inline to keep possible performance degradation when (un)mapping large folios to a minimum.
Note: by squeezing the two flags into the "unsigned long" that stores the MM ids, we can use non-atomic __bit_spin_unlock() and non-atomic setting/clearing of the "maybe mapped shared" bit, effectively not adding any new atomics on the hot path when updating the large mapcount + new metadata, which further helps reduce the runtime overhead in micro-benchmarks.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Cc: Andy Lutomirks^H^Hski <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jann Horn <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Lance Yang <[email protected]> Cc: Liam Howlett <[email protected]> Cc: Lorenzo Stoakes <[email protected]> Cc: Matthew Wilcow (Oracle) <[email protected]> Cc: Michal Koutn <[email protected]> Cc: Muchun Song <[email protected]> Cc: tejun heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Zefan Li <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
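A purely conceptual sketch of the per-folio state described above; the field names are invented for illustration and do not match the actual struct folio layout.

    /* Conceptual only: two (MM id, per-MM mapcount) slots plus a
     * "maybe mapped shared" flag, updated under a bit spinlock that is
     * packed into the same word as the MM ids. */
    struct folio_mm_owner_sketch {
            unsigned int mm_id[2];     /* MM ids occupying the two slots */
            int mm_id_mapcount[2];     /* per-MM mapcounts for those slots */
            unsigned long ids_flags;   /* lock bit + "maybe shared" bit */
    };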
# cbe298d8 | 28-Feb-2025 | Alistair Popple <[email protected]>
fs/dax: remove PAGE_MAPPING_DAX_SHARED mapping flag
The page ->mapping pointer can have magic values like PAGE_MAPPING_DAX_SHARED and PAGE_MAPPING_ANON for page owner specific usage. Currently PAGE_MAPPING_DAX_SHARED and PAGE_MAPPING_ANON alias to the same value. This isn't a problem because FS DAX pages are never seen by the anonymous mapping code and vice versa.
However a future change will make FS DAX pages more like normal pages, so folio_test_anon() must not return true for a FS DAX page.
We could explicitly test for a FS DAX page in folio_test_anon(), etc. however the PAGE_MAPPING_DAX_SHARED flag isn't actually needed. Instead we can use the page->mapping field to implicitly track the first mapping of a page. If page->mapping is non-NULL it implies the page is associated with a single mapping at page->index. If the page is associated with a second mapping clear page->mapping and set page->share to 1.
This is possible because a shared mapping implies that the filesystem implements dax_holder_operations, which makes ->mapping and ->index (a union with ->share) unused.
The page is considered shared when page->mapping == NULL and page->share > 0 or page->mapping != NULL, implying it is present in at least one address space. This also makes it easier for a future change to detect when a page is first mapped into an address space which requires special handling.
Link: https://lkml.kernel.org/r/c22f699202db0acee2f7039eb026e68261ce42d6.1740713401.git-series.apopple@nvidia.com Signed-off-by: Alistair Popple <[email protected]> Tested-by: Alison Schofield <[email protected]> Cc: Asahi Lina <[email protected]> Cc: Bjorn Helgaas <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Chunyan Zhang <[email protected]> Cc: Dan Wiliams <[email protected]> Cc: "Darrick J. Wong" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Dave Jiang <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jan Kara <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: John Hubbard <[email protected]> Cc: linmiaohe <[email protected]> Cc: Logan Gunthorpe <[email protected]> Cc: Matthew Wilcow (Oracle) <[email protected]> Cc: Michael "Camp Drill Sergeant" Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Peter Xu <[email protected]> Cc: Ted Ts'o <[email protected]> Cc: Vishal Verma <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Will Deacon <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Balbir Singh <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vivek Goyal <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
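A sketch of the sharing rule spelled out above, assuming the ->share field that shares storage with ->index; illustrative only, not the exact helper from the patch.

    /* Sketch: the first mapping is tracked via ->mapping/->index; once a
     * second mapping appears, ->mapping is cleared and ->share counts users. */
    static bool fsdax_page_is_shared(const struct page *page)
    {
            return page->mapping == NULL && page->share > 0;
    }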
# a6687c8f | 03-Mar-2025 | Matthew Wilcox (Oracle) <[email protected]>
slab: Mark large folios for debugging purposes
If a user calls p = kmalloc(1024); kfree(p); kfree(p); and 'p' was the only object in the slab, we may free the slab after the first call to kfree(). If we do, we clear PGTY_slab and the second call to kfree() will call free_large_kmalloc(). That will leave a trace in the logs ("object pointer: 0x%p"), but otherwise proceed to free the memory, which is likely to corrupt the page allocator's metadata.
Allocate a new page type for large kmalloc and mark the memory with it while it's allocated. That lets us detect this double-free and return without harming any data structures.
Reported-by: Hannes Reinecke <[email protected]> Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Harry Yoo <[email protected]> Signed-off-by: Vlastimil Babka <[email protected]>
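A hedged sketch of the detection this enables in the large-kmalloc free path; the page-type accessors follow the standard naming generated for the new type, and the actual freeing logic is elided.

    /* Sketch only: bail out instead of corrupting page allocator state when
     * the folio is not (or no longer) marked as a large kmalloc allocation. */
    static void free_large_kmalloc_sketch(struct folio *folio)
    {
            if (WARN_ON_ONCE(!folio_test_large_kmalloc(folio)))
                    return;                 /* double free or bogus pointer */
            folio_clear_large_kmalloc(folio);
            /* ... return the pages to the page allocator ... */
    }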
Revision tags: v6.13-rc6, v6.13-rc5, v6.13-rc4
# cceba6f7 | 20-Dec-2024 | Jens Axboe <[email protected]>
mm: add PG_dropbehind folio flag
Add a folio flag that file IO can use to indicate that the cached IO being done should be dropped from the page cache upon completion.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]> Reviewed-by: Kirill A. Shutemov <[email protected]> Cc: Brian Foster <[email protected]> Cc: Chris Mason <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 8d91fed8 | 13-Jan-2025 | David Hildenbrand <[email protected]>
mm/huge_memory: convert has_hwpoisoned into a pure folio flag
Patch series "mm: hugetlb+THP folio and migration cleanups", v2.
Some cleanups around more folio conversion and migration handling that I collected working on random stuff.
This patch (of 6):
Let's stop setting it on pages, there is no need to anymore.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Reviewed-by: Matthew Wilcox (Oracle) <[email protected]> Cc: Muchun Song <[email protected]> Cc: Baolin Wang <[email protected]> Cc: Sidhartha Kumar <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# d670c8e5 | 09-Jan-2025 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageTransTail()
The last caller was removed in October. Also remove the FALSE definition of PageTransCompoundMap(); the normal definition was removed a few years ago.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
Revision tags: v6.13-rc3
# 42b2eb69 | 12-Dec-2024 | Usama Arif <[email protected]>
mm: convert partially_mapped set/clear operations to be atomic
Other page flags in the 2nd page, like PG_hwpoison and PG_anon_exclusive can get modified concurrently. Changes to other page flags might be lost if they are happening at the same time as non-atomic partially_mapped operations. Hence, make partially_mapped operations atomic.
Link: https://lkml.kernel.org/r/[email protected] Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios") Reported-by: David Hildenbrand <[email protected]> Link: https://lore.kernel.org/all/[email protected]/ Signed-off-by: Usama Arif <[email protected]> Acked-by: David Hildenbrand <[email protected]> Acked-by: Johannes Weiner <[email protected]> Acked-by: Roman Gushchin <[email protected]> Cc: Barry Song <[email protected]> Cc: Domenico Cerasuolo <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mike Rapoport (Microsoft) <[email protected]> Cc: Nico Pache <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Yu Zhao <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
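For context, an illustration of the change (assumed helper names, not the literal diff): the non-atomic helper does a plain read-modify-write of the flags word and can lose a concurrent update to another bit in the same word, whereas the atomic helper uses set_bit() underneath and cannot.

    static void mark_partially_mapped(struct folio *folio)
    {
            /* before: non-atomic RMW, racy against PG_hwpoison etc.
             * __folio_set_partially_mapped(folio);
             */
            folio_set_partially_mapped(folio);  /* after: atomic set_bit() */
    }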
Revision tags: v6.13-rc2, v6.13-rc1
# 4de22b2a | 25-Nov-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: open-code PageTail in folio_flags() and const_folio_flags()
It is unsafe to call PageTail() in dump_page() as page_is_fake_head() will almost certainly return true when called on a head page that is copied to the stack. That will cause the VM_BUG_ON_PGFLAGS() in const_folio_flags() to trigger when it shouldn't. Fortunately, we don't need to call PageTail() here; it's fine to have a pointer to a virtual alias of the page's flag word rather than the real page's flag word.
Link: https://lkml.kernel.org/r/[email protected] Fixes: fae7d834c43c ("mm: add __dump_folio()") Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Cc: Kees Cook <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
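The open-coded check presumably tests the tail bit of ->compound_head directly on the (possibly stack-copied) page instead of going through PageTail() and its fake-head logic; a sketch under that assumption:

    /* Sketch: a tail page has the low bit of compound_head set; testing it
     * directly avoids page_is_fake_head(), which misfires on a stack copy. */
    static inline unsigned long *folio_flags_sketch(struct folio *folio, unsigned n)
    {
            struct page *page = &folio->page;

            VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
            VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
            return &page[n].flags;
    }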
Revision tags: v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2
# b9a25635 | 02-Oct-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageKsm()
All callers have been converted to use folio_test_ksm() or PageAnonNotKsm(), so we can remove this wrapper.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: David Hildenbrand <[email protected]> Cc: Alex Shi <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# b33cc96c | 02-Oct-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: add PageAnonNotKsm()
Check that this anonymous page is really anonymous, not anonymous-or-KSM. This optimises the debug check, but its real purpose is to remove the last two users of PageKsm().
[[email protected]: fix assertions] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: David Hildenbrand <[email protected]> Cc: Alex Shi <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
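The check is presumably the anon-mapping test with the KSM encoding excluded; a sketch under that assumption:

    /* Sketch: anonymous pages set PAGE_MAPPING_ANON in ->mapping, while KSM
     * pages additionally set PAGE_MAPPING_MOVABLE, so requiring the low bits
     * to equal exactly PAGE_MAPPING_ANON excludes KSM. */
    static __always_inline bool PageAnonNotKsm(const struct page *page)
    {
            unsigned long flags = (unsigned long)page_folio(page)->mapping;

            return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
    }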
# 9d08ec41 | 20-Oct-2024 | Yu Zhao <[email protected]>
mm: allow set/clear page_type again
Some page flags (page->flags) were converted to page types (page->page_type). A recent example is PG_hugetlb.
From the exclusive writer's perspective, e.g., a thread doing __folio_set_hugetlb(), there is a difference between the page flag and type APIs: the former allows the same non-atomic operation to be repeated whereas the latter does not. For example, calling __folio_set_hugetlb() twice triggers VM_BUG_ON_FOLIO(), since the second call expects the type (PG_hugetlb) not to be set previously.
Using add_hugetlb_folio() as an example, it calls __folio_set_hugetlb() in the following error-handling path. And when that happens, it triggers the aforementioned VM_BUG_ON_FOLIO().
    if (folio_test_hugetlb(folio)) {
            rc = hugetlb_vmemmap_restore_folio(h, folio);
            if (rc) {
                    spin_lock_irq(&hugetlb_lock);
                    add_hugetlb_folio(h, folio, false);
                    ...
It is possible to make hugeTLB comply with the new requirements from the page type API. However, a straightforward fix would be to just allow the same page type to be set or cleared again inside the API, to avoid any changes to its callers.
Link: https://lkml.kernel.org/r/[email protected] Fixes: d99e3140a4d3 ("mm: turn folio_test_hugetlb into a PageType") Signed-off-by: Yu Zhao <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Cc: Muchun Song <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
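Conceptually, "allow the same page type to be set or cleared again" amounts to an early return when the type is already what the caller asks for, instead of tripping the VM_BUG_ON; the sketch below simplifies the real macro-generated code and its field layout.

    /* Conceptual only: page_type stores PGTY_* in its top byte and
     * UINT_MAX means "no type set". */
    static inline void folio_set_type_sketch(struct folio *folio, unsigned int type)
    {
            if ((folio->page.page_type >> 24) == type)
                    return;         /* already this type: repeating is allowed */
            VM_BUG_ON_FOLIO(folio->page.page_type != UINT_MAX, folio);
            folio->page.page_type = type << 24;
    }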
# fd15ba4c | 02-Oct-2024 | Matthew Wilcox (Oracle) <[email protected]>
ceph: Remove call to PagePrivate2()
Use the folio that we already have to call folio_test_private_2() instead. This is the last call to PagePrivate2(), so replace its PAGEFLAG() definition with FOLIO_FLAG().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Christian Brauner <[email protected]>
# a04d5f82 | 02-Oct-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: Remove PageMappedToDisk
All callers have now been converted to the folio APIs, so remove the page API for this flag.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Jan Kara <[email protected]> Signed-off-by: Christian Brauner <[email protected]>
Revision tags: v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6
# 8422acdc | 30-Aug-2024 | Usama Arif <[email protected]>
mm: introduce a pageflag for partially mapped folios
Currently folio->_deferred_list is used to keep track of partially_mapped folios that are going to be split under memory pressure. In the next patch, all THPs that are faulted in and collapsed by khugepaged are also going to be tracked using _deferred_list.
This patch introduces a pageflag to be able to distinguish between partially mapped folios and others in the deferred_list at split time in deferred_split_scan. It's needed because __folio_remove_rmap decrements _mapcount, _large_mapcount and _entire_mapcount, hence it won't be possible to distinguish between partially mapped folios and others in deferred_split_scan.
Even though it introduces an extra flag to track whether the folio is partially mapped, there is no functional change intended with this patch, and the flag is not useful in this patch itself; it will become useful in the next patch when _deferred_list has non-partially-mapped folios.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Usama Arif <[email protected]> Cc: Alexander Zhu <[email protected]> Cc: Barry Song <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Domenico Cerasuolo <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Kairui Song <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nico Pache <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Yu Zhao <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
Revision tags: v6.11-rc5
# 7a87225a | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
x86: remove PG_uncached
Convert x86 to use PG_arch_2 instead of PG_uncached and remove PG_uncached.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 02e1960a | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: rename PG_mappedtodisk to PG_owner_2
This flag has similar constraints to PG_owner_priv_1 -- it is ignored by core code, and is entirely for the use of the code which allocated the folio. Since the pagecache does not use it, individual filesystems can use it. The bufferhead code does use it, so filesystems which use the buffer cache must not use it for another purpose.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 6dc15138 | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove page_has_private()
This function has no more callers, except folio_has_private(). Combine the two functions.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 3026bc1e | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageOwnerPriv1
While there are many aliases for this flag, nobody actually uses the *PageOwnerPriv1() nor folio_*_owner_priv_1() accessors. Remove them.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 99f86bbd | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageMlocked
This flag is now only used on folios, so we can remove all the page accessors.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# cb29e794 | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageUnevictable
There is only one caller of PageUnevictable() left; convert it to call folio_test_unevictable() and remove all the page accessors.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 32f51ead | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageSwapCache
This flag is now only used on folios, so we can remove all the page accessors and reword the comments that refer to them.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
# 6f394ee9 | 21-Aug-2024 | Matthew Wilcox (Oracle) <[email protected]>
mm: remove PageReadahead
This flag is now only used on folios, so we can remove all the page accessors.
Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>