Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7

# 3f6bc9e3 | 07-Jan-2025 | David Howells <[email protected]>

netfs: Fix kernel async DIO
Netfslib needs to be able to handle kernel-initiated asynchronous DIO that is supplied with a bio_vec[] array. Currently, because of the async flag, this gets passed to netfs_extract_user_iter() which throws a warning and fails because it only handles IOVEC and UBUF iterators. This can be triggered through a combination of cifs and a loopback blockdev with something like:
mount //my/cifs/share /foo
dd if=/dev/zero of=/foo/m0 bs=4K count=1K
losetup --sector-size 4096 --direct-io=on /dev/loop2046 /foo/m0
echo hello >/dev/loop2046
This causes the following to appear in syslog:
WARNING: CPU: 2 PID: 109 at fs/netfs/iterator.c:50 netfs_extract_user_iter+0x170/0x250 [netfs]
and the write to fail.
Fix this by removing the check in netfs_unbuffered_write_iter_locked() that causes async kernel DIO writes to be handled as userspace writes. Note that this change relies on the kernel caller maintaining the existence of the bio_vec array (or kvec[] or folio_queue) until the op is complete.
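
A minimal sketch of the distinction involved, assuming illustrative names (user_backed_iter() is the real iov_iter helper; the function body models the description above and is not the actual diff):

    static ssize_t sketch_dio_setup(struct netfs_io_request *wreq,
                                    struct iov_iter *iter)
    {
            /*
             * Only user-backed iterators (ITER_UBUF/ITER_IOVEC) need
             * their pages extracted and pinned; kernel-backed bio_vec[],
             * kvec[] or folio_queue iterators are used as-is, with the
             * kernel caller keeping the memory alive until completion.
             * The bug was selecting the user path by the async flag
             * rather than by the iterator type.
             */
            if (user_backed_iter(iter))
                    return sketch_extract_user_pages(wreq, iter); /* hypothetical helper */
            return 0; /* nothing to extract for kernel-backed iterators */
    }
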
Fixes: 153a9961b551 ("netfs: Implement unbuffered/DIO write support")
Reported-by: Nicolas Baranger <[email protected]>
Closes: https://lore.kernel.org/r/[email protected]/
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Nicolas Baranger <[email protected]>
Acked-by: Paulo Alcantara (Red Hat) <[email protected]>
cc: Steve French <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.13-rc6, v6.13-rc5, v6.13-rc4

# 06fa229c | 16-Dec-2024 | David Howells <[email protected]>

netfs: Abstract out a rolling folio buffer implementation
A rolling buffer is a series of folios held in a list of folio_queues. New folios and folio_queue structs may be inserted at the head simultaneously with spent ones being removed from the tail without the need for locking.
The rolling buffer includes an iov_iter, and this has to be managed carefully as the list of folio_queues is extended, so that an oops isn't incurred because the iterator was pointing to the end of a folio_queue segment that got appended to and then removed.
We need to use the mechanism twice, once for read and once for write, and, in future patches, we will use a second rolling buffer to handle bounce buffering for content encryption.
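
As a rough structural sketch, with assumed field names (the real rolling buffer layout may differ):

    struct rolling_buffer_sketch {
            struct folio_queue *head; /* producers append new segments here */
            struct folio_queue *tail; /* spent segments are retired here */
            struct iov_iter iter;     /* must never be left pointing into a
                                       * segment the tail side may free */
    };

Since insertion touches only the head and removal only the tail, the two ends can be operated on simultaneously without locking, as described above.
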
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.13-rc3

# f4d3cde4 | 13-Dec-2024 | Zilin Guan <[email protected]>

netfs: Remove redundant use of smp_rmb()
The function netfs_unbuffered_write_iter_locked() in fs/netfs/direct_write.c contains an unnecessary smp_rmb() call after wait_on_bit(). Since wait_on_bit() already incorporates a memory barrier that ensures the flag update is visible before the function returns, the smp_rmb() provides no additional benefit and incurs unnecessary overhead.
This patch removes the redundant barrier to simplify and optimize the code.
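
The pattern at issue, sketched with approximate names (wait_on_bit() and smp_rmb() are the real APIs; the surrounding lines are illustrative, not the exact diff):

    /* wait_on_bit() implies a full memory barrier on return, so the
     * read of ->error below cannot observe a stale, pre-flag value. */
    wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE);
    /* smp_rmb();   redundant: already implied, hence removed */
    ret = wreq->error;
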
Signed-off-by: Zilin Guan <[email protected]>
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]/
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Akira Yokosawa <[email protected]>
cc: Akira Yokosawa <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1

# a9d47a50 | 18-Jul-2024 | David Howells <[email protected]>

netfs: Revert "netfs: Switch debug logging to pr_debug()"
Revert commit 163eae0fb0d4c610c59a8de38040f8e12f89fd43 to get back the original operation of the debugging macros.
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
cc: Uwe Kleine-König <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1

# d98b7d7d | 20-May-2024 | David Howells <[email protected]>

netfs: Fix io_uring based write-through
[This was included in v2 of 9b038d004ce95551cb35381c49fe896c5bc11ffe, but v1 got pushed instead]
Fix netfs_unbuffered_write_iter_locked() to set the total request length in the netfs_io_request struct rather than leaving it as zero.
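
The fix presumably amounts to something of this shape (illustrative only; the actual diff is in the commit):

    /* Record the total requested length up front instead of leaving
     * wreq->len at zero, so completion reporting sees the right size. */
    wreq->len = iov_iter_count(iter);
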
Fixes: 288ace2f57c9 ("netfs: New writeback implementation")
Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: Steve French <[email protected]>
cc: Enzo Matsumiya <[email protected]>
cc: Christian Brauner <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christian Brauner <[email protected]>

# 163eae0f | 08-Jun-2024 | Uwe Kleine-König <[email protected]>

netfs: Switch debug logging to pr_debug()
Instead of inventing a custom way to conditionally enable debugging, just make use of pr_debug(), which also has dynamic debugging facilities and is more likely to be familiar to someone hunting a problem in the netfs code. Also drop the module parameter netfs_debug, which had no effect without further source changes. (The variable netfs_debug was only used in #ifdef blocks for cpp vars that don't exist; note that CONFIG_NETFS_DEBUG isn't settable via kconfig, a variable with that name never existed in mainline, and it was probably just taken over (and renamed) from similar custom debug logging implementations.)
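
For reference, a typical before/after at a call site (the custom macro name and message are illustrative):

    _debug("--> netfs_unbuffered_write_iter()");     /* custom, home-grown knob */
    pr_debug("--> netfs_unbuffered_write_iter()\n"); /* standard replacement */

With CONFIG_DYNAMIC_DEBUG enabled, such sites can then be toggled at runtime, e.g. via echo 'file fs/netfs/* +p' > /sys/kernel/debug/dynamic_debug/control.
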
Signed-off-by: Uwe Kleine-König <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christian Brauner <[email protected]>

# 9b038d00 | 21-May-2024 | David Howells <[email protected]>

netfs: Fix io_uring based write-through
This can be triggered by mounting a cifs filesystem with a cache=strict mount option and then, using the fsx program from xfstests, doing:
ltp/fsx -A -d -N 1000 -S 11463 -P /tmp /cifs-mount/foo \
    --replay-ops=gen112-fsxops
Where gen112-fsxops holds:
fallocate 0x6be7 0x8fc5 0x377d3
copy_range 0x9c71 0x77e8 0x2edaf 0x377d3
write 0x2776d 0x8f65 0x377d3
The problem is that netfs_io_request::len is being used for two purposes and ends up getting set to the amount of data we transferred, not the amount of data the caller asked to be transferred (for various reasons, such as mmap'd writes, we might end up rounding out the data written to the server to include the entire folio at each end).
Fix this by keeping the amount we were asked to write in ->len and using ->submitted to track what we issued ops for. Then, when we come to calling ->ki_complete(), ->len is the right size.
This also required netfs_cleanup_dio_write() to change since we're no longer advancing wreq->len. Use wreq->transferred instead as we might have done a short read.
With this, the generic/112 xfstest passes if cifs is forced to put all non-DIO opens into write-through mode.
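
A model of the bookkeeping described above, using the field names from the message (the layout is illustrative, not the real struct netfs_io_request):

    struct sketch_io_request {
            size_t len;         /* what the caller asked to write; no longer
                                 * overwritten with folio-rounded totals */
            size_t submitted;   /* how far ops have been issued, possibly
                                 * rounded out to whole folios at each end */
            size_t transferred; /* what actually completed; used by the DIO
                                 * cleanup path instead of advancing ->len */
    };

At ->ki_complete() time the caller-visible count is derived from ->len, while issuing and cleanup consult ->submitted and ->transferred respectively.
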
Fixes: 288ace2f57c9 ("netfs: New writeback implementation")
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
cc: Jeff Layton <[email protected]>
cc: Steve French <[email protected]>
cc: Enzo Matsumiya <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

# 16e00683 | 15-May-2024 | Steve French <[email protected]>

smb3: reenable swapfiles over SMB3 mounts
With the changes to folios/netfs it is now easier to reenable swapfile support over SMB3 which fixes various xfstests
Reviewed-by: David Howells <[email protected]>
Suggested-by: David Howells <[email protected]>
Fixes: e1209d3a7a67 ("mm: introduce ->swap_rw and use it for reads from SWP_FS_OPS swap-space")
Signed-off-by: Steve French <[email protected]>

Revision tags: v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8

# 2df86547 | 08-Mar-2024 | David Howells <[email protected]>

netfs: Cut over to using new writeback code
Cut over to using the new writeback code. The old code is #ifdef'd out or otherwise removed from compilation to avoid conflicts and will be removed in a future patch.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Eric Van Hensbergen <[email protected]>
cc: Latchesar Ionkov <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Christian Schoenebeck <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]

# 4824e591 | 26-Mar-2024 | David Howells <[email protected]>

netfs: Add some write-side stats and clean up some stat names
Add some write-side stats to count buffered writes, buffered writethrough, and writepages calls.
Whilst we're at it, clean up the naming on some of the existing stats counters and organise the output into two sets.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]

# 74e797d7 | 27-Mar-2024 | David Howells <[email protected]>

mm: Provide a means of invalidation without using launder_folio
Implement a replacement for launder_folio. The key feature of invalidate_inode_pages2() is that it locks each folio individually, unmaps it to prevent mmap'd accesses interfering and calls the ->launder_folio() address_space op to flush it. This has problems: firstly, each folio is written individually as one or more small writes; secondly, adjacent folios cannot be added so easily into the laundry; thirdly, it's yet another op to implement.
Instead, use the invalidate lock to cause anyone wanting to add a folio to the inode to wait, then unmap all the folios if we have mmaps, then, conditionally, use ->writepages() to flush any dirty data back and then discard all pages.
The invalidate lock prevents ->read_iter(), ->write_iter() and faulting through mmap all from adding pages for the duration.
This is then used from netfslib to handle the flushing in unbuffered and direct writes.
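
A hedged sketch of that sequence using standard mm APIs (the actual helper introduced by this patch may be shaped differently):

    filemap_invalidate_lock(mapping);              /* block new folio additions */
    if (mapping_mapped(mapping))
            unmap_mapping_range(mapping, 0, 0, 0); /* zap mmap'd PTEs */
    if (flush)
            filemap_write_and_wait(mapping);       /* drives ->writepages() */
    truncate_inode_pages(mapping, 0);              /* discard the pagecache */
    filemap_invalidate_unlock(mapping);
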
Signed-off-by: David Howells <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Miklos Szeredi <[email protected]>
cc: Trond Myklebust <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Alexander Viro <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]

Revision tags: v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3

# ca9ca1a5 | 29-Jan-2024 | David Howells <[email protected]>

netfs: Fix missing zero-length check in unbuffered write
Fix netfs_unbuffered_write_iter() to return immediately if generic_write_checks() returns 0, indicating there's nothing to write. Note that netfs_file_write_iter() already does this.
Also, whilst we're at it, put in checks for the size being zero before we even take the locks. Note that generic_write_checks() can still reduce the size to zero, so we still need that check.
Without this, a warning similar to the following is logged to dmesg:
netfs: Zero-sized write [R=1b6da]
and the syscall fails with EIO, e.g.:
/sbin/ldconfig.real: Writing of cache extension data failed: Input/output error
This can be reproduced on 9p by:
xfs_io -f -c 'pwrite 0 0' /xfstest.test/foo
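
A sketch of the intended shape (assumed, not the exact diff): bail out cheaply before taking any locks, and again if generic_write_checks() reduces the size to zero:

    static ssize_t sketch_unbuffered_write_iter(struct kiocb *iocb,
                                                struct iov_iter *from)
    {
            ssize_t ret;

            if (!iov_iter_count(from))  /* nothing to write: skip the locks */
                    return 0;

            /* ... take inode and invalidation locks ... */

            ret = generic_write_checks(iocb, from);
            if (ret <= 0)               /* 0: size was reduced to nothing */
                    goto out;           /* don't issue a zero-sized op */

            /* ... perform the unbuffered/DIO write ... */
    out:
            /* ... drop locks ... */
            return ret;
    }
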
Fixes: 153a9961b551 ("netfs: Implement unbuffered/DIO write support")
Reported-by: Eric Van Hensbergen <[email protected]>
Link: https://lore.kernel.org/r/ZbQUU6QKmIftKsmo@FV7GG9FTHL/
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Dominique Martinet <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Jeff Layton <[email protected]>
cc: <[email protected]>
cc: <[email protected]>
cc: <[email protected]>
cc: <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.8-rc2, v6.8-rc1, v6.7

# 4088e389 | 05-Jan-2024 | David Howells <[email protected]>

netfs: Count DIO writes
Provide a counter for DIO writes to match that for DIO reads.
Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

# 0e4d464c | 05-Jan-2024 | David Howells <[email protected]>

netfs: Mark netfs_unbuffered_write_iter_locked() static
Mark netfs_unbuffered_write_iter_locked() static as it's only called from the file in which it is defined.
Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

Revision tags: v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3

# 100ccd18 | 24-Nov-2023 | David Howells <[email protected]>

netfs: Optimise away reads above the point at which there can be no data
Track the file position above which the server is not expected to have any data (the "zero point") and preemptively assume that we can satisfy requests by filling them with zeroes locally rather than attempting to download them if they're over that line - even if we've written data back to the server. Assume that any data that was written back above that position is held in the local cache. Note that we have to split requests that straddle the line.
Make use of this to optimise away some reads from the server. We need to set the zero point in the following circumstances:
(1) When we see an extant remote inode and have no cache for it, we set the zero_point to i_size.
(2) On local inode creation, we set zero_point to 0.
(3) On local truncation down, we reduce zero_point to the new i_size if the new i_size is lower.
(4) On local truncation up, we don't change zero_point.
(5) On local modification, we don't change zero_point.
(6) On remote invalidation, we set zero_point to the new i_size.
(7) If stored data is discarded from the pagecache or culled from fscache, we must set zero_point above that if the data also got written to the server.
(8) If dirty data is written back to the server, but not fscache, we must set zero_point above that.
(9) If a direct I/O write is made, set zero_point above that.
Assuming the above, any read from the server at or above the zero_point position will return all zeroes.
The zero_point value can be stored in the cache, provided the above rules are applied to it by any code that culls part of the local cache.
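
A small model of these rules (field names from the message; the logic paraphrases the rules rather than reproducing the netfs code):

    struct sketch_netfs_inode {
            loff_t i_size;
            loff_t zero_point; /* server holds no data at/above this offset */
    };

    /* Rules (3) and (4): truncating down drags zero_point down with the
     * new size; truncating up leaves it where it was. */
    static void sketch_truncate(struct sketch_netfs_inode *ni, loff_t new_size)
    {
            if (new_size < ni->zero_point)
                    ni->zero_point = new_size;
            ni->i_size = new_size;
    }

    /* A read wholly at or above zero_point can be filled with zeroes
     * locally; a request straddling the line must be split. */
    static bool sketch_read_all_zeroes(const struct sketch_netfs_inode *ni,
                                       loff_t pos)
    {
            return pos >= ni->zero_point;
    }
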
Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

Revision tags: v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6

# 153a9961 | 21-Feb-2022 | David Howells <[email protected]>

netfs: Implement unbuffered/DIO write support
Implement support for unbuffered writes and direct I/O writes. If the write is misaligned with respect to the fscrypt block size, then RMW cycles are performed if necessary. DIO writes are a special case of unbuffered writes with extra restrictions imposed, such as block size alignment requirements.
Also provide a field that can tell the code to add some extra space onto the bounce buffer for use by the filesystem in the case of a content-encrypted file.
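
An illustrative alignment test for the RMW decision, assuming a power-of-two crypto block size (not the actual netfs code):

    /* A write that does not start and end on block boundaries must
     * read-modify-write the partial blocks at either end; for DIO
     * proper, the same condition presumably rejects the write instead,
     * since the alignment is a hard restriction there. */
    static bool sketch_needs_rmw(loff_t pos, size_t len, unsigned int bsize)
    {
            return ((pos | len) & (bsize - 1)) != 0;
    }
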
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]