Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2
b745fdef | 29-Jul-2024 | Dave Martin <[email protected]>

docs/core-api: memory-allocation: GFP_NOWAIT doesn't need __GFP_NOWARN
Since v6.8 the definition of GFP_NOWAIT has implied __GFP_NOWARN, so it is now redundant to add this flag explicitly.
Update the docs to match, and emphasise the need for a fallback when using GFP_NOWAIT.
Fixes: 16f5dfbc851b ("gfp: include __GFP_NOWARN in GFP_NOWAIT")
Signed-off-by: Dave Martin <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Mike Rapoport (Microsoft) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

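A minimal sketch of the pattern the updated text describes (struct foo and the caller are hypothetical, not from the patch): GFP_NOWAIT already implies __GFP_NOWARN, and the caller must be prepared for the allocation to fail:

    /* Opportunistic allocation from a context that must not sleep. */
    struct foo *f = kzalloc(sizeof(*f), GFP_NOWAIT);

    if (!f)
        return -EAGAIN; /* fallback path: retry later from a sleepable context */
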
Revision tags: v6.11-rc1, v6.10, v6.10-rc7
ad59baa3 | 03-Jul-2024 | Vlastimil Babka <[email protected]>

slab, rust: extend kmalloc() alignment guarantees to remove Rust padding
Slab allocators have been guaranteeing natural alignment for power-of-two sizes since commit 59bb47985c1d ("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)"), while any other sizes are guaranteed to be aligned only to ARCH_KMALLOC_MINALIGN bytes (although in practice they are aligned to more than that in non-debug scenarios).
Rust's allocator API specifies size and alignment per allocation, which have to satisfy the following rules, per Alice Ryhl [1]:
1. The alignment is a power of two.
2. The size is non-zero.
3. When you round up the size to the next multiple of the alignment, then it must not overflow the signed type isize / ssize_t.
In order to map this to kmalloc()'s guarantees, some requested allocation sizes have to be padded to the next power-of-two size [2]. For example, an allocation of size 96 and alignment of 32 will be padded to an allocation of size 128, because the existing kmalloc-96 bucket doesn't guarantee alignment above ARCH_KMALLOC_MINALIGN. Without slab debugging active, the layout of the kmalloc-96 slabs, however, naturally aligns the objects to 32 bytes, so extending the size to 128 bytes is wasteful.
To improve the situation we can extend the kmalloc() alignment guarantees in a way that
1) doesn't change the current slab layout (and thus does not increase internal fragmentation) when slab debugging is not active
2) reduces waste in the Rust allocator use case
3) is a superset of the current guarantee for power-of-two sizes.
The extended guarantee is that alignment is at least the largest power-of-two divisor of the requested size. For power-of-two sizes the largest divisor is the size itself, but let's keep this case documented separately for clarity.
For current kmalloc size buckets, it means kmalloc-96 will guarantee alignment of 32 bytes and kmalloc-192 will guarantee 64 bytes.
This covers the rules 1 and 2 above of Rust's API as long as the size is a multiple of the alignment. The Rust layer should now only need to round up the size to the next multiple if it isn't, while enforcing the rule 3.
Implementation-wise, this changes the alignment calculation in create_boot_cache(). While at it, also do the calculation only for caches with the SLAB_KMALLOC flag, because the function is also used to create the initial kmem_cache and kmem_cache_node caches, where no alignment guarantee is necessary.
In the Rust allocator's krealloc_aligned(), remove the code that padded sizes to the next power of two (suggested by Alice Ryhl) as it's no longer necessary with the new guarantees.
Reported-by: Alice Ryhl <[email protected]>
Reported-by: Boqun Feng <[email protected]>
Link: https://lore.kernel.org/all/CAH5fLggjrbdUuT-H-5vbQfMazjRDpp2%2Bk3%[email protected]/ [1]
Link: https://lore.kernel.org/all/CAH5fLghsZRemYUwVvhk77o6y1foqnCeDzW4WZv6ScEWna2+_jw@mail.gmail.com/ [2]
Reviewed-by: Boqun Feng <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
Reviewed-by: Alice Ryhl <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>

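A quick illustration of the extended guarantee (editor's sketch, not part of the patch): 96 = 3 * 32, so its largest power-of-two divisor is 32 and the returned pointer is now at least 32-byte aligned:

    void *p = kmalloc(96, GFP_KERNEL);

    /* Holds under the new rule: alignment >= largest power-of-two divisor of the size. */
    if (p)
        WARN_ON(!IS_ALIGNED((unsigned long)p, 32));
    kfree(p);
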
Revision tags: v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1
ae65a521 | 02-Mar-2023 | Vlastimil Babka <[email protected]>

mm/slab: document kfree() as allowed for kmem_cache_alloc() objects
This will make it easier to free objects in situations when they can come from either kmalloc() or kmem_cache_alloc(), and also allow kfree_rcu() for freeing objects from kmem_cache_alloc().
For the SLAB and SLUB allocators this was always possible, so with SLOB gone we can document it as supported.
Signed-off-by: Vlastimil Babka <[email protected]>
Reviewed-by: Mike Rapoport (IBM) <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Neeraj Upadhyay <[email protected]>
Cc: Josh Triplett <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Joel Fernandes <[email protected]>

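A short sketch of what the documented behaviour permits (struct foo, its rcu member and foo_cache are hypothetical):

    struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);

    if (!f)
        return -ENOMEM;
    /* ... use the object ... */
    kfree(f);   /* now documented as valid for kmem_cache_alloc() objects */
    /* or, if struct foo embeds a struct rcu_head named rcu: kfree_rcu(f, rcu); */
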
Revision tags: v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1
f0dbd2bd | 15-Dec-2020 | Bartosz Golaszewski <[email protected]>

mm: slab: provide krealloc_array()
When allocating an array of elements, users should check for multiplication overflow or preferably use one of the provided helpers like: kmalloc_array().
There's no krealloc_array() counterpart but there are many users who use regular krealloc() to reallocate arrays. Let's provide an actual krealloc_array() implementation.
While at it: add some documentation regarding krealloc.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Bartosz Golaszewski <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Christian König <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: David Airlie <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Gustavo Padovan <[email protected]>
Cc: James Morse <[email protected]>
Cc: Jaroslav Kysela <[email protected]>
Cc: Jason Wang <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Maarten Lankhorst <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: "Michael S. Tsirkin" <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Takashi Iwai <[email protected]>
Cc: Takashi Iwai <[email protected]>
Cc: Thomas Zimmermann <[email protected]>
Cc: Tony Luck <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

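A hedged sketch of the new helper in use (arr and new_count are hypothetical); like kmalloc_array(), it checks the count times element-size multiplication for overflow:

    struct item *tmp;

    tmp = krealloc_array(arr, new_count, sizeof(*arr), GFP_KERNEL);
    if (!tmp)
        return -ENOMEM; /* arr is left untouched on failure */
    arr = tmp;
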
Revision tags: v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6
00bafa57 | 19-Jul-2020 | Mike Rapoport <[email protected]>

docs/core-api: memory-allocation: describe reclaim behaviour
Changelog of commit dcda9b04713c ("mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic") has very nice description of GFP flags that affect reclaim behaviour of the page allocator.
It would be a pity to keep this description buried in the log, so let's expose it in the Documentation/ as well.
Cc: Michal Hocko <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jonathan Corbet <[email protected]>

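As a hedged example of one modifier covered by that description (buf and size are hypothetical): __GFP_RETRY_MAYFAIL retries hard but leaves failure handling to the caller instead of invoking the OOM killer:

    buf = kmalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
    if (!buf)
        return -ENOMEM; /* costly allocation; the caller can degrade gracefully */
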
Revision tags: v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5
1c16b3d5 | 24-Oct-2019 | Chris Packham <[email protected]>

docs/core-api: memory-allocation: mention size helpers
Mention struct_size(), array_size() and array3_size() in the same place as kmalloc() and friends.
Signed-off-by: Chris Packham <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

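A small sketch of the helpers being cross-referenced (struct report and nr are hypothetical); both compute the allocation size with overflow checking:

    struct report {
        unsigned int nr;
        u64 sample[];   /* flexible array member */
    };

    struct report *r = kzalloc(struct_size(r, sample, nr), GFP_KERNEL);
    u64 *buf = kmalloc(array_size(nr, sizeof(u64)), GFP_KERNEL);
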
094ef1c9 | 24-Oct-2019 | Chris Packham <[email protected]>

docs/core-api: memory-allocation: remove uses of c:func:
These are no longer needed as the documentation build will automatically add the cross references.
Signed-off-by: Chris Packham <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

ef8330fe | 24-Oct-2019 | Chris Packham <[email protected]>

docs/core-api: memory-allocation: fix typo
"on the safe size" should be "on the safe side".
Signed-off-by: Chris Packham <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

Revision tags: v5.4-rc4, v5.4-rc3
59bb4798 | 07-Oct-2019 | Vlastimil Babka <[email protected]>

mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
In most configurations, kmalloc() happens to return naturally aligned (i.e. aligned to the block size itself) blocks for power of two sizes.
That means some kmalloc() users might unknowingly rely on that alignment, until stuff breaks when the kernel is built with e.g. CONFIG_SLUB_DEBUG or CONFIG_SLOB, and blocks stop being aligned. Then developers have to devise workarounds such as their own kmem caches with a specified alignment [1], which is not always practical, as recently evidenced in [2].
The topic has been discussed at LSF/MM 2019 [3]. Adding a 'kmalloc_aligned()' variant would not help with code unknowingly relying on the implicit alignment. For slab implementations it would either require creating more kmalloc caches, or allocate a larger size and only give back part of it. That would be wasteful, especially with a generic alignment parameter (in contrast with a fixed alignment to size).
Ideally we should provide mm users with what they need without difficult workarounds or reimplementations of their own, so let's make the kmalloc() alignment to size explicitly guaranteed for power-of-two sizes under all configurations. What does this mean for the three available allocators?
* SLAB object layout happens to be mostly unchanged by the patch. The implicitly provided alignment could be compromised with CONFIG_DEBUG_SLAB due to redzoning, however SLAB disables redzoning for caches with alignment larger than unsigned long long. Practically on at least x86 this includes kmalloc caches as they use cache line alignment, which is larger than that. Still, this patch ensures alignment on all arches and cache sizes.
* SLUB layout is also unchanged unless redzoning is enabled through CONFIG_SLUB_DEBUG and boot parameter for the particular kmalloc cache. With this patch, explicit alignment is guaranteed with redzoning as well. This will result in more memory being wasted, but that should be acceptable in a debugging scenario.
* SLOB has no implicit alignment so this patch adds it explicitly for kmalloc(). The potential downside is increased fragmentation. While pathological allocation scenarios are certainly possible, in my testing, after booting an x86_64 kernel+userspace with virtme, around 16MB memory was consumed by slab pages both before and after the patch, with the difference in the noise.
[1] https://lore.kernel.org/linux-btrfs/c3157c8e8e0e7588312b40c853f65c02fe6c957a.1566399731.git.christophe.leroy@c-s.fr/ [2] https://lore.kernel.org/linux-fsdevel/[email protected]/ [3] https://lwn.net/Articles/787740/
[[email protected]: documentation fixlet, per Matthew]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Ming Lei <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: "Darrick J. Wong" <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Bottomley <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

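A brief illustration of the guarantee (editor's sketch, not from the patch): any power-of-two kmalloc() size is now naturally aligned regardless of allocator and debug configuration:

    void *buf = kmalloc(4096, GFP_KERNEL); /* page-sized, therefore page-aligned */

    if (buf)
        WARN_ON(!IS_ALIGNED((unsigned long)buf, 4096));
    kfree(buf);
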
Revision tags: v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5
cd7198fc | 31-Jan-2019 | Tobin C. Harding <[email protected]>

docs: Use underscore not hyphen in label
sphinx emits warning
WARNING: undefined label: memory-allocation ...
This seems to be caused by the use of a hyphen in the label name instead of an underscore. Using an underscore for the label name and the reference clears the warning.
Use underscore not hyphen in label and reference.
Signed-off-by: Tobin C. Harding <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

Revision tags: v5.0-rc4, v5.0-rc3
98e5f349 | 14-Jan-2019 | Mike Rapoport <[email protected]>

docs/core-api: memory-allocation: add mention of kmem_cache_create_usercopy
Mention that when a part of a slab cache might be exported to userspace, the cache should be created using kmem_cache_create_usercopy().
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

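A hedged sketch of the recommended API (struct session and its whitelisted name[] field are hypothetical): only the declared useroffset/usersize window may be copied to or from userspace:

    struct session {
        spinlock_t lock;
        char name[64];  /* the only region exposed to userspace */
    };

    static struct kmem_cache *session_cache;

    static int __init session_cache_init(void)
    {
        session_cache = kmem_cache_create_usercopy("session",
                sizeof(struct session), 0, SLAB_HWCACHE_ALIGN,
                offsetof(struct session, name),     /* useroffset */
                sizeof_field(struct session, name), /* usersize */
                NULL);
        return session_cache ? 0 : -ENOMEM;
    }
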
Revision tags: v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2
01598ba6 | 11-Nov-2018 | Mike Rapoport <[email protected]>

docs/mm: update kmalloc kernel-doc description
Add references to GFP documentation and the memory-allocation.rst and remove GFP_USER, GFP_DMA and GFP_NOIO descriptions.
While at it, slightly change the formatting so that the list of GFP flags will be rendered as "description" in the generated html.
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

acf0f57a | 19-Nov-2018 | Matthew Wilcox <[email protected]>

Link the memory allocation guide from the MM docs
I just went looking for the memory allocation guide in the MM docs instead of in the core API. For the benefit of the next person who makes that mistake, link to it from the MM docs.
Signed-off-by: Matthew Wilcox <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>

Revision tags: v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4
52272c92 | 14-Sep-2018 | Mike Rapoport <[email protected]>

docs: core-api: add memory allocation guide
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>