Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4

# eddf51f2 | 16-Dec-2024 | David Howells <[email protected]>
afs: Make {Y,}FS.FetchData an asynchronous operation
Make FS.FetchData and YFS.FetchData asynchronous operations: the request is queued in AF_RXRPC and we return to the caller rather than waiting. Processing of the returning packets is then done inline if it's a synchronous VFS/VM call (readdir, read_folio, sync DIO, prep for write) or offloaded to a workqueue for asynchronous VM calls (e.g. readahead, async DIO).
This reduces the chain of workqueues invoking workqueues and cuts out some of the overhead, driving rxrpc data extraction and netfslib read collection from a thread that's going to block to completion anyway if possible.
The ->done() call op is also split with ->immediate_cancel() handling the cancellation on failure to begin the call and ->done() handling the rest. This means that the AFS async FetchData code doesn't try to terminate the netfs subrequest twice.
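To illustrate the split, here is a rough sketch of the dispatch policy described above; all names in it (example_call, caller_will_wait, example_collect_reply) are hypothetical, not the kernel's actual identifiers:

    #include <linux/workqueue.h>

    struct example_call {
            struct work_struct work;        /* offloaded reply collection */
            bool caller_will_wait;          /* synchronous VFS/VM caller? */
    };

    void example_collect_reply(struct example_call *call);

    /* Hypothetical sketch: decide where reply processing runs. */
    static void example_reply_ready(struct example_call *call)
    {
            if (call->caller_will_wait)
                    /* readdir, read_folio, sync DIO, prep for write: the
                     * issuing thread is going to block anyway, so collect
                     * the reply inline in that thread. */
                    example_collect_reply(call);
            else
                    /* readahead, async DIO: offload to a workqueue. */
                    queue_work(system_unbound_wq, &call->work);
    }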
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
cc: Marc Dionne <[email protected]>
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

# 6dd80936 | 16-Dec-2024 | David Howells <[email protected]>
afs: Use netfslib for directories
In the AFS ecosystem, directories are just a special type of file that is downloaded and parsed locally. Download is done by the same mechanism as ordinary files and the data can be cached. There is one important semantic restriction on directories over files: the client must download the entire directory in one go because, for example, the server could fabricate the contents of the blob on the fly with each download and give a different image each time.
So that we can cache the directory download, switch AFS directory support over to using the netfslib single-object API, thereby allowing directory content to be stored in the local cache.
To make this work, the following changes are made:
(1) A directory's contents are now stored in a folio_queue chain attached to the afs_vnode (inode) struct rather than its associated pagecache, though multipage folios are still used to hold the data. The folio queue is discarded when the directory inode is evicted.
This also helps with the phasing out of ITER_XARRAY.
(2) Various directory operations are made to use and unuse the cache cookie.
(3) The content checking, content dumping and content iteration are now performed with a standard iov_iter iterator over the contents of the folio queue (see the sketch after the notes below).
(4) Iteration and modification must be done with the vnode's validate_lock held. In conjunction with (1), this means that the iteration can be done without the need to lock pages or take extra refs on them, unlike when accessing ->i_pages.
(5) Convert to using netfs_read_single() to read data.
(6) Provide a ->writepages() to call netfs_writeback_single() to save the data to the cache according to the VM's scheduling whilst holding the validate_lock read-locked as (4).
(7) Change local directory image editing functions:
(a) Provide a function to get a specific block by number from the folio_queue as we can no longer use the i_pages xarray to locate folios by index. This uses a cursor to remember the current position as we need to iterate through the directory contents. The block is kmapped before being returned.
(b) Make the function in (a) extend the directory by an extra folio if we run out of space.
(c) Raise the check of the block free space counter, for those blocks that have one, higher in the function to eliminate a call to get a block.
(d) Remove the page unlocking and putting done during the editing loops. This is no longer necessary as the folio_queue holds the references and the pages are no longer in the pagecache.
(e) Mark the inode dirty and pin the cache usage till writeback at the end of a successful edit.
(8) Don't set the large_folios flag on the inode as we do the allocation ourselves rather than the VM doing it automatically.
(9) Mark the inode as being a single object that isn't uploaded to the server.
(10) Enable caching on directories.
(11) Only set the upload key for writeback for regular files.
Notes:
(*) We keep the ->release_folio(), ->invalidate_folio() and ->migrate_folio() ops as we set the mapping pointer on the folio.
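As a minimal sketch of points (3)-(5), iterating the stored image might look like this. The directory field name on afs_vnode is an assumption for illustration; iov_iter_folio_queue() is the iov_iter constructor for folio_queue buffers:

    #include <linux/uio.h>

    /* Sketch, assuming the image hangs off the vnode as a folio_queue. */
    static int example_walk_dir_image(struct afs_vnode *vnode, size_t dir_size)
    {
            struct iov_iter iter;

            /* Per (4), the caller holds vnode->validate_lock, so no page
             * locks or extra folio refs are needed during the walk. */
            iov_iter_folio_queue(&iter, ITER_SOURCE, vnode->directory,
                                 0, 0, dir_size);

            /* ... copy_from_iter() each 2KiB directory block and check it ... */
            return 0;
    }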
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
cc: Marc Dionne <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.13-rc3

# 4acb665c | 13-Dec-2024 | David Howells <[email protected]>
netfs: Work around recursion by abandoning retry if nothing read
syzkaller reported recursion with a loop of three calls (netfs_rreq_assess, netfs_retry_reads and netfs_rreq_terminated) hitting the limit of the stack during an unbuffered or direct I/O read.
There are a number of issues:
(1) There is no limit on the number of retries.
(2) A subrequest is supposed to be abandoned if it does not transfer anything (NETFS_SREQ_NO_PROGRESS), but that isn't checked under all circumstances.
(3) The actual root cause, which is this:
        if (atomic_dec_and_test(&rreq->nr_outstanding))
                netfs_rreq_terminated(rreq, ...);
When we do a retry, we bump the rreq->nr_outstanding counter to prevent the final cleanup phase running before we've finished dispatching the retries. The problem is if we hit 0, we have to do the cleanup phase - but we're in the cleanup phase and end up repeating the retry cycle, hence the recursion.
Work around the problem by limiting the number of retries. This is based on Lizhi Xu's patch[1], and makes the following changes:
(1) Replace NETFS_SREQ_NO_PROGRESS with NETFS_SREQ_MADE_PROGRESS and make the filesystem set it if it managed to read or write at least one byte of data. Clear this bit before issuing a subrequest.
(2) Add a ->retry_count member to the subrequest and increment it any time we do a retry.
(3) Remove the NETFS_SREQ_RETRYING flag as it is superfluous with ->retry_count. If the latter is non-zero, we're doing a retry.
(4) Abandon a subrequest if retry_count is non-zero and we made no progress (see the sketch below).
(5) Use ->retry_count on both the write side and the read side.
[?] Question: Should I set a hard limit on retry_count in both read and write? Say it hits 50, we always abandon it. The problem is that these changes only mitigate the issue. As long as it made at least one byte of progress, the recursion is still an issue. This patch mitigates the problem, but does not fix the underlying cause. I have patches that will do that, but it's an intrusive fix that's currently pending for the next merge window.
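Sketched below is the combined shape of changes (1), (2) and (4); this paraphrases the logic rather than quoting the kernel code:

    #include <linux/netfs.h>

    /* Paraphrased: decide whether a failed subrequest may be reissued. */
    static bool example_may_retry(struct netfs_io_subrequest *subreq)
    {
            /* (4): a prior attempt that moved no data would just loop. */
            if (subreq->retry_count &&
                !test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))
                    return false;                   /* abandon it */

            subreq->retry_count++;                  /* (2) */
            /* (1): the filesystem sets this again only if it moves
             * at least one byte. */
            __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
            return true;                            /* reissue it */
    }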
The oops generated by KASAN looks something like:
BUG: TASK stack guard page was hit at ffffc9000482ff48 (stack is ffffc90004830000..ffffc90004838000)
Oops: stack guard page: 0000 [#1] PREEMPT SMP KASAN NOPTI
...
RIP: 0010:mark_lock+0x25/0xc60 kernel/locking/lockdep.c:4686
...
 mark_usage kernel/locking/lockdep.c:4646 [inline]
 __lock_acquire+0x906/0x3ce0 kernel/locking/lockdep.c:5156
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5825
 local_lock_acquire include/linux/local_lock_internal.h:29 [inline]
 ___slab_alloc+0x123/0x1880 mm/slub.c:3695
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3908
 __slab_alloc_node mm/slub.c:3961 [inline]
 slab_alloc_node mm/slub.c:4122 [inline]
 kmem_cache_alloc_noprof+0x2a7/0x2f0 mm/slub.c:4141
 radix_tree_node_alloc.constprop.0+0x1e8/0x350 lib/radix-tree.c:253
 idr_get_free+0x528/0xa40 lib/radix-tree.c:1506
 idr_alloc_u32+0x191/0x2f0 lib/idr.c:46
 idr_alloc+0xc1/0x130 lib/idr.c:87
 p9_tag_alloc+0x394/0x870 net/9p/client.c:321
 p9_client_prepare_req+0x19f/0x4d0 net/9p/client.c:644
 p9_client_zc_rpc.constprop.0+0x105/0x880 net/9p/client.c:793
 p9_client_read_once+0x443/0x820 net/9p/client.c:1570
 p9_client_read+0x13f/0x1b0 net/9p/client.c:1534
 v9fs_issue_read+0x115/0x310 fs/9p/vfs_addr.c:74
 netfs_retry_read_subrequests fs/netfs/read_retry.c:60 [inline]
 netfs_retry_reads+0x153a/0x1d00 fs/netfs/read_retry.c:232
 netfs_rreq_assess+0x5d3/0x870 fs/netfs/read_collect.c:371
 netfs_rreq_terminated+0xe5/0x110 fs/netfs/read_collect.c:407
 netfs_retry_reads+0x155e/0x1d00 fs/netfs/read_retry.c:235
 netfs_rreq_assess+0x5d3/0x870 fs/netfs/read_collect.c:371
 netfs_rreq_terminated+0xe5/0x110 fs/netfs/read_collect.c:407
 netfs_retry_reads+0x155e/0x1d00 fs/netfs/read_retry.c:235
 netfs_rreq_assess+0x5d3/0x870 fs/netfs/read_collect.c:371
 ...
 netfs_rreq_terminated+0xe5/0x110 fs/netfs/read_collect.c:407
 netfs_retry_reads+0x155e/0x1d00 fs/netfs/read_retry.c:235
 netfs_rreq_assess+0x5d3/0x870 fs/netfs/read_collect.c:371
 netfs_rreq_terminated+0xe5/0x110 fs/netfs/read_collect.c:407
 netfs_retry_reads+0x155e/0x1d00 fs/netfs/read_retry.c:235
 netfs_rreq_assess+0x5d3/0x870 fs/netfs/read_collect.c:371
 netfs_rreq_terminated+0xe5/0x110 fs/netfs/read_collect.c:407
 netfs_dispatch_unbuffered_reads fs/netfs/direct_read.c:103 [inline]
 netfs_unbuffered_read fs/netfs/direct_read.c:127 [inline]
 netfs_unbuffered_read_iter_locked+0x12f6/0x19b0 fs/netfs/direct_read.c:221
 netfs_unbuffered_read_iter+0xc5/0x100 fs/netfs/direct_read.c:256
 v9fs_file_read_iter+0xbf/0x100 fs/9p/vfs_file.c:361
 do_iter_readv_writev+0x614/0x7f0 fs/read_write.c:832
 vfs_readv+0x4cf/0x890 fs/read_write.c:1025
 do_preadv fs/read_write.c:1142 [inline]
 __do_sys_preadv fs/read_write.c:1192 [inline]
 __se_sys_preadv fs/read_write.c:1187 [inline]
 __x64_sys_preadv+0x22d/0x310 fs/read_write.c:1187
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading")
Closes: https://syzkaller.appspot.com/bug?extid=1fc6f64c40a9d143cfb6
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]/ [1]
Link: https://lore.kernel.org/r/[email protected]
Tested-by: [email protected]
Suggested-by: Lizhi Xu <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Reported-by: [email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3

# 52d55922 | 07-Jun-2024 | David Howells <[email protected]>
netfs: Move max_len/max_nr_segs from netfs_io_subrequest to netfs_io_stream
Move max_len/max_nr_segs from struct netfs_io_subrequest to struct netfs_io_stream as we only issue one subreq at a time and then don't need these values again for that subreq unless and until we have to retry it - in which case we want to renegotiate them.
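For orientation, a fragment suggesting where the fields land after the move; the sreq_* names follow what mainline calls them, but treat types and details as illustrative:

    struct netfs_io_stream {
            /* ... */
            unsigned long long      sreq_max_len;   /* Max size for a subrequest */
            unsigned int            sreq_max_segs;  /* Max segments permitted */
            /* ... */
    };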
Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/ # v2
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1

# 1ecb146f | 15-Mar-2024 | David Howells <[email protected]>
netfs, afs: Use writeback retry to deal with alternate keys
Use a hook in the new writeback code's retry algorithm to rotate the keys once all the outstanding subreqs have failed rather than doing it separately on each subreq.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

Revision tags: v6.8

# c2458685 | 08-Mar-2024 | David Howells <[email protected]>
netfs: Remove the old writeback code
Remove the old writeback code.
Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: Eric Van Hensbergen <[email protected]>
cc: Latchesar Ionkov <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Christian Schoenebeck <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]

# 2df86547 | 08-Mar-2024 | David Howells <[email protected]>
netfs: Cut over to using new writeback code
Cut over to using the new writeback code. The old code is #ifdef'd out or otherwise removed from compilation to avoid conflicts and will be removed in a future patch.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Eric Van Hensbergen <[email protected]>
cc: Latchesar Ionkov <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Christian Schoenebeck <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]

# ed22e1db | 18-Mar-2024 | David Howells <[email protected]>
netfs, afs: Implement helpers for new write code
Implement the helpers for the new write code in afs. There's now an optional ->prepare_write() that allows the filesystem to set the parameters for the next write, such as maximum size and maximum segment count, and an ->issue_write() that is called to initiate an (asynchronous) write operation.
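A minimal sketch of the two ops under the division of labour described above; the numeric limits are invented for illustration and are not afs's actual values:

    #include <linux/netfs.h>

    /* Sketch: renegotiate parameters before each (re)issue of a write. */
    static void example_prepare_write(struct netfs_io_subrequest *subreq)
    {
            subreq->max_len = 256 * 1024;           /* illustrative cap */
            subreq->max_nr_segs = 16;               /* illustrative cap */
    }

    /* Sketch: kick off the asynchronous write RPC for this subrequest. */
    static void example_issue_write(struct netfs_io_subrequest *subreq)
    {
            /* Dispatch subreq->len bytes at subreq->start; completion is
             * reported back to netfslib asynchronously. */
    }

    static const struct netfs_request_ops example_ops = {
            .prepare_write  = example_prepare_write,
            .issue_write    = example_issue_write,
    };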
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

# d73065e6 | 27-Mar-2024 | David Howells <[email protected]>
afs: Use alternative invalidation to using launder_folio
Use writepages-based flushing invalidation instead of invalidate_inode_pages2() and ->launder_folio(). This will allow ->launder_folio() to be removed eventually.
Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

Revision tags: v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5

# 3560358a | 15-Feb-2022 | David Howells <[email protected]>
afs: Use the netfs write helpers
Make afs use the netfs write helpers.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]

# 2de5599f | 26-Oct-2023 | David Howells <[email protected]>
afs: Wrap most op->error accesses with inline funcs
Wrap most op->error accesses with inline funcs which will make it easier for a subsequent patch to replace op->error with something else. Two functions are added to this end:
(1) afs_op_error() - Get the error code.
(2) afs_op_set_error() - Set the error code.
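The pair is essentially a thin accessor layer over op->error; a sketch consistent with the description above:

    /* Sketch of the two helpers wrapping op->error. */
    static inline int afs_op_error(const struct afs_operation *op)
    {
            return op->error;
    }

    static inline int afs_op_set_error(struct afs_operation *op, int error)
    {
            return op->error = error;
    }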
Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]

# a34847d4 | 02-Dec-2022 | David Howells <[email protected]>
afs: Don't use folio->private to record partial modification
AFS currently uses folio->private to store the range of bytes within a folio that have been modified - the idea being that if we have, say, a 2MiB folio and someone writes a single byte, we only have to write back that single page and not the whole 2MiB folio - thereby saving on network bandwidth.
Remove this, at least for now, and accept the extra network load (which doesn't matter in the common case of writing a whole file at a time from beginning to end).
This makes folio->private available for netfslib to use.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]

# c9c4ff12 | 27-Nov-2023 | David Howells <[email protected]>
netfs: Move pinning-for-writeback from fscache to netfs
Move the resource pinning-for-writeback from fscache code to netfslib code. This is used to keep a cache backing object pinned whilst we have dirty pages on the netfs inode in the pagecache such that VM writeback will be able to reach it.
Whilst we're at it, switch the parameters of netfs_unpin_writeback() to match ->write_inode() so that it can be used for that directly.
Note that this mechanism could be more generically useful than that for network filesystems. Quite often they have to keep around other resources (e.g. authentication tokens or network connections) until the writeback is complete.
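With the signature now matching ->write_inode(), the helper can be wired straight into a filesystem's super_operations; a sketch of that use:

    #include <linux/fs.h>
    #include <linux/netfs.h>

    static const struct super_operations example_super_ops = {
            /* ... */
            .write_inode    = netfs_unpin_writeback,
            /* ... */
    };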
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]

# af7628d6 | 17-Nov-2023 | Matthew Wilcox (Oracle) <[email protected]>
fs: convert error_remove_page to error_remove_folio
There were already assertions that we were not passing a tail page to error_remove_page(), so make the compiler enforce that by converting everything to pass and use a folio.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

# 8525d598 | 08-Nov-2023 | Matthew Wilcox (Oracle) <[email protected]>
afs: do not test the return value of folio_start_writeback()
In preparation for removing the return value entirely, stop testing it in afs.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: David Howells <[email protected]>
Cc: Steve French <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

# 562ce1f7 | 04-Oct-2023 | Jeff Layton <[email protected]>
afs: convert to new timestamp accessors
Convert to using the new inode timestamp accessor functions.
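The typical conversion replaces direct stores to inode->i_mtime / i_ctime with the accessors; for example:

    #include <linux/fs.h>

    /* Sketch: stamp ctime and mirror it into mtime via the accessors. */
    static void example_touch(struct inode *inode)
    {
            inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
    }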
Signed-off-by: Jeff Layton <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christian Brauner <[email protected]>

# 03275585 | 04-Jul-2023 | David Howells <[email protected]>
afs: Fix accidental truncation when storing data
When an AFS FS.StoreData RPC call is made, amongst other things it is given the file size that should result. On the server, this is processed by truncating the file to the new size and then writing the data.
Now, kafs has a lock (vnode->io_lock) that serves to serialise operations against a specific vnode (ie. inode), but the parameters for the op are set before the lock is taken. This allows two writebacks (say sync and kswapd) to race - and if writes are ongoing the writeback for a later write could occur before the writeback for an earlier one if the latter gets interrupted.
Note that afs_writepages() cannot take i_mutex and only takes a shared lock on vnode->validate_lock.
Also note that the server does the truncation and the write inside a lock, so there's no problem at that end.
Fix this by moving the calculation for the proposed new i_size inside the vnode->io_lock. Also reset the iterator (which we might have read from) and update the mtime setting there.
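A paraphrased sketch of the resulting shape (locking and fields simplified for illustration; not the exact kernel code):

    /* Sketch: compute the proposed size only once io_lock is held. */
    static void example_store_data(struct afs_vnode *vnode, loff_t pos,
                                   size_t size)
    {
            loff_t i_size;

            mutex_lock(&vnode->io_lock);    /* serialises ops on this vnode */
            i_size = i_size_read(&vnode->netfs.inode);
            if (pos + size > i_size)
                    i_size = pos + size;    /* size the server will set */
            /* ... reset the iterator, update mtime, issue FS.StoreData ... */
            mutex_unlock(&vnode->io_lock);
    }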
Fixes: bd80d8a80e12 ("afs: Use ITER_XARRAY for writing")
Reported-by: Marc Dionne <[email protected]>
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeffrey Altman <[email protected]>
Reviewed-by: Marc Dionne <[email protected]>
cc: [email protected]
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/
Signed-off-by: Linus Torvalds <[email protected]>

# f5f288a0 | 21-Jun-2023 | Matthew Wilcox (Oracle) <[email protected]>
afs: convert pagevec to folio_batch in afs_extend_writeback()
Patch series "Remove pagevecs".
Removes a folio->page->folio conversion for each folio that's involved. More importantly, removes one of the last few uses of a pagevec.
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

# 819da022 | 07-Jun-2023 | Vishal Moola (Oracle) <[email protected]>
afs: Fix waiting for writeback then skipping folio
Commit acc8d8588cb7 converted afs_writepages_region() to write back a folio batch. The function waits for writeback of a folio, but then proceeds to the rest of the batch without trying to write that folio again. This patch fixes that by having it attempt to write the folio again.
[DH: Also remove an 'else' that adding a goto makes redundant]
Fixes: acc8d8588cb7 ("afs: convert afs_writepages_region() to use filemap_get_folios_tag()")
Signed-off-by: Vishal Moola (Oracle) <[email protected]>
Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/

# a2b6f2ab | 07-Jun-2023 | Vishal Moola (Oracle) <[email protected]>
afs: Fix dangling folio ref counts in writeback
Commit acc8d8588cb7 converted afs_writepages_region() to write back a folio batch. If writeback needs rescheduling, the function exits without dropping the references to the folios in fbatch. This patch fixes that.
[DH: Moved the added line before the _leave()]
Fixes: acc8d8588cb7 ("afs: convert afs_writepages_region() to use filemap_get_folios_tag()")
Signed-off-by: Vishal Moola (Oracle) <[email protected]>
Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/

# 66dabbb6 | 07-Mar-2023 | Christoph Hellwig <[email protected]>
mm: return an ERR_PTR from __filemap_get_folio
Instead of returning NULL for all errors, distinguish between:
- no entry found and not asked to allocate (-ENOENT)
- failed to allocate memory (-ENOMEM)
- would block (-EAGAIN)
so that callers don't have to guess the error based on the passed in flags.
Also pass the error through the direct callers: filemap_get_folio, filemap_lock_folio, filemap_grab_folio and filemap_get_incore_folio.
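Callers consequently switch from NULL checks to IS_ERR() handling, along these lines:

    #include <linux/pagemap.h>

    /* Sketch of the new caller pattern. */
    static int example_lookup(struct address_space *mapping, pgoff_t index)
    {
            struct folio *folio;

            folio = filemap_get_folio(mapping, index);
            if (IS_ERR(folio))
                    return PTR_ERR(folio);  /* e.g. -ENOENT: not cached */

            /* ... use the folio ... */
            folio_put(folio);
            return 0;
    }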
[[email protected]: fix null-pointer deref]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/20230310043137.GA1624890@u2004
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Ryusuke Konishi <[email protected]> [nilfs2]
Cc: Andreas Gruenbacher <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

# a8173be1 | 03-Feb-2023 | Christoph Hellwig <[email protected]>
afs: use bvec_set_folio to initialize a bvec
Use the bvec_set_folio helper to initialize a bvec.
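The helper replaces open-coded assignment of bv_page/bv_len/bv_offset; for instance:

    #include <linux/bvec.h>
    #include <linux/mm.h>

    /* Sketch: point a bio_vec at an entire folio. */
    static void example_fill_bvec(struct bio_vec *bv, struct folio *folio)
    {
            bvec_set_folio(bv, folio, folio_size(folio), 0);
    }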
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>

# acc8d858 | 04-Jan-2023 | Vishal Moola (Oracle) <[email protected]>
afs: convert afs_writepages_region() to use filemap_get_folios_tag()
Convert to use folios throughout. This is in preparation for the removal of find_get_pages_range_tag().
Also modify this function to write the folios in each batch one at a time, rather than fetching a new batch for every single write.
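The batched walk that results looks roughly like this (simplified; locking and writeback of each folio omitted):

    #include <linux/pagemap.h>
    #include <linux/pagevec.h>

    /* Sketch: visit every dirty folio in [start, end], batch by batch. */
    static void example_write_region(struct address_space *mapping,
                                     pgoff_t start, pgoff_t end)
    {
            struct folio_batch fbatch;
            unsigned int i, nr;

            folio_batch_init(&fbatch);
            while ((nr = filemap_get_folios_tag(mapping, &start, end,
                                                PAGECACHE_TAG_DIRTY,
                                                &fbatch))) {
                    for (i = 0; i < nr; i++) {
                            struct folio *folio = fbatch.folios[i];
                            /* ... lock, write back and unlock folio ... */
                    }
                    folio_batch_release(&fbatch);   /* drop the batch's refs */
                    cond_resched();
            }
    }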
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Vishal Moola (Oracle) <[email protected]>
Tested-by: David Howells <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

# a9eb558a | 18-Nov-2022 | David Howells <[email protected]>
afs: Stop implementing ->writepage()
We're trying to get rid of the ->writepage() hook[1]. Stop afs from using it by unlocking the page and calling afs_writepages_region() rather than folio_write_one().
A flag is passed to afs_writepages_region() to indicate that it should only write a single region so that we don't flush the entire file in ->write_begin(), but do add other dirty data to the region being written to try and reduce the number of RPC ops.
This requires ->migrate_folio() to be implemented, so point that at filemap_migrate_folio() for files and also for symlinks and directories.
This can be tested by turning on the afs_folio_dirty tracepoint and then doing something like:
xfs_io -c "w 2223 7000" -c "w 15000 22222" -c "w 23 7" /afs/my/test/foo
and then looking in the trace to see if the write at position 15000 gets stored before page 0 gets dirtied for the write at position 23.
Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/ [1]
Link: https://lore.kernel.org/r/166876785552.222254.4403222906022558715.stgit@warthog.procyon.org.uk/ # v1

# de4eda9d | 16-Sep-2022 | Al Viro <[email protected]>
use less confusing names for iov_iter direction initializers
READ/WRITE proved to be actively confusing - the meanings are "data destination, as used with read(2)" and "data source, as used with write(2)", but people keep interpreting those as "we read data from it" and "we write data to it", i.e. exactly the wrong way.
Call them ITER_DEST and ITER_SOURCE - at least that is harder to misinterpret...
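So an iterator describing a buffer that a read will land in is initialised with ITER_DEST, for example:

    #include <linux/uio.h>

    /* Sketch: buf receives data from a read, so it is the destination. */
    static void example_read_iter(struct iov_iter *iter, struct kvec *kv,
                                  void *buf, size_t len)
    {
            kv->iov_base = buf;
            kv->iov_len = len;
            iov_iter_kvec(iter, ITER_DEST, kv, 1, len);
    }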
Signed-off-by: Al Viro <[email protected]>