|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6 |
|
| #
146acfd0 |
| 08-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring: rely on io_prep_reg_vec for iovec placement
All vectored reg buffer users should use io_import_reg_vec() for iovec imports, since iovec placement is the function's responsibility and callers shouldn't know much about it. Drop the offset parameter from io_prep_reg_vec() and calculate it inside.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/08ed87ca4bbc06724373b6ce06f36b703fe60c4e.1741457480.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
d291fb65 |
| 08-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring: introduce io_prep_reg_iovec()
iovecs that are turned into registered buffers are imported in a special way with an offset, so that later we can do an in-place translation. Add a helper function taking care of it.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/7de2ecb9ed5efc3c5cf320232236966da5ad4ccc.1741457480.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
5027d024 |
| 08-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring: unify STOP_MULTISHOT with IOU_OK
IOU_OK means that the request ownership is now handed back to core io_uring and it has to complete it using the result provided in req->cqe. Same is true for multishot and IOU_STOP_MULTISHOT.
Rename it to IOU_COMPLETE to avoid confusion and use it for both modes.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/e6a5b2edb0eb9558acb1c8f1db38ac45fee95491.1741453534.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
7a9dcb05 |
| 08-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring: return -EAGAIN to continue multishot
Multishot errors can be mapped 1:1 to normal errors, but they are not identical. That leads to a peculiar situation where every multishot request has to check in what context it's run and return different codes.
Unify them, starting with the EAGAIN / IOU_ISSUE_SKIP_COMPLETE (EIOCBQUEUED) pair, which means that core io_uring still owns the request and it should be retried. In the multishot case it naturally just continues to poll; otherwise it might poll, use iowq or do any other kind of allowed blocking. Introduce IOU_RETRY, aliased to -EAGAIN, for that.
Apart from obvious upsides, multishot can now also check for misuse of IOU_ISSUE_SKIP_COMPLETE.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/da117b79ce72ecc3ab488c744e29fae9ba54e23b.1741453534.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
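A rough sketch of the unified issue-return convention described above; the exact definitions are an assumption based on this commit message, not a copy of the kernel header:

	#include <linux/errno.h>

	/*
	 * Sketch only: IOU_COMPLETE hands the request back to core io_uring
	 * with the result in req->cqe; IOU_ISSUE_SKIP_COMPLETE and IOU_RETRY
	 * mean core io_uring still owns the request and should retry it
	 * (poll again, punt to iowq, etc.).
	 */
	enum {
		IOU_COMPLETE		= 0,
		IOU_ISSUE_SKIP_COMPLETE	= -EIOCBQUEUED,
		IOU_RETRY		= -EAGAIN,
	};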
|
| #
0396ad37 |
| 07-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring: cap cached iovec/bvec size
Bvecs can be large, so put an arbitrary limit on the maximum vector size io_uring will cache.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/823055fa6628daa24bbc9cd77c2da87e9a1e1e32.1741362889.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
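The policy amounts to a simple bound check when deciding whether to keep the vector cached. A minimal sketch, where IO_VEC_CACHE_SOFT_CAP, struct iou_vec and io_vec_free() are illustrative names rather than the kernel's actual ones:

	/* Illustrative cap; the real value is an arbitrary limit chosen in the patch. */
	#define IO_VEC_CACHE_SOFT_CAP	256

	static void io_vec_maybe_recycle(struct iou_vec *iv)
	{
		/* Large bvec/iovec allocations are freed instead of being cached. */
		if (iv->nr > IO_VEC_CACHE_SOFT_CAP) {
			io_vec_free(iv);
			return;
		}
		/* Small enough: leave the allocation in place for the next request. */
	}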
|
| #
835c4bdf |
| 07-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: defer reg buf vec import
Import registered buffers for vectored reads and writes later at issue time as we now do for other fixed ops.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/e8491c976e4ab83a4e3dc428e9fe7555e59583b8.1741362889.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
bdabba04 |
| 07-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: implement vectored registered rw
Implement registered buffer vectored reads and writes with the new opcodes IORING_OP_WRITEV_FIXED and IORING_OP_READV_FIXED.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/d7c89eb481e870f598edc91cc66ff4d1e4ae3788.1741362889.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
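A user-space sketch of how such a request could be built on a raw SQE, assuming the new opcodes follow the usual fixed-buffer layout (iovec array in sqe->addr, vector count in sqe->len, registered buffer index in sqe->buf_index); verify against the final uapi before relying on it:

	#include <string.h>
	#include <liburing.h>

	/* Sketch: prepare a vectored read against a registered buffer. */
	static void prep_readv_fixed(struct io_uring_sqe *sqe, int fd,
				     const struct iovec *iov, unsigned nr_vecs,
				     __u64 offset, int buf_index)
	{
		memset(sqe, 0, sizeof(*sqe));
		sqe->opcode = IORING_OP_READV_FIXED;	/* new opcode from this series */
		sqe->fd = fd;
		sqe->addr = (unsigned long)iov;		/* iovec array to translate */
		sqe->len = nr_vecs;			/* number of iovec entries */
		sqe->off = offset;			/* file offset */
		sqe->buf_index = buf_index;		/* which registered buffer to use */
	}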
|
| #
e1d49959 |
| 07-Mar-2025 |
Pavel Begunkov <[email protected]> |
io_uring: introduce struct iou_vec
I need a convenient way to pass around and work with an iovec+size pair, so put them into a structure and make use of it in rw.c.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/d39fadafc9e9047b0a292e5be6db3cf2f48bb1f7.1741362889.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
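A minimal sketch of what such an iovec+size pair might look like; the union with bio_vec reflects the later bvec caching patches, and all field names here are assumptions rather than the exact io_uring definition:

	#include <linux/uio.h>
	#include <linux/bvec.h>

	/* Sketch only: a bundled vector pointer plus entry count. */
	struct iou_vec {
		union {
			struct iovec	*iovec;
			struct bio_vec	*bvec;
		};
		unsigned		nr;	/* number of entries in the array */
	};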
|
| #
bcb0fda3 |
| 05-Mar-2025 |
Jens Axboe <[email protected]> |
io_uring/rw: ensure reissue path is correctly handled for IOPOLL
The IOPOLL path posts CQEs when the io_kiocb is marked as completed, so it cannot rely on the usual retry that non-IOPOLL requests do for read/write requests.
If -EAGAIN is received and the request should be retried, go through the normal completion path and let the normal flush logic catch it and reissue it, like what is done for !IOPOLL reads or writes.
Fixes: d803d123948f ("io_uring/rw: handle -EAGAIN retry at IO completion time") Reported-by: John Garry <[email protected]> Link: https://lore.kernel.org/io-uring/[email protected]/ Signed-off-by: Jens Axboe <[email protected]>
|
|
Revision tags: v6.14-rc5 |
|
| #
27cb27b6 |
| 27-Feb-2025 |
Keith Busch <[email protected]> |
io_uring: add support for kernel registered bvecs
Provide an interface for the kernel to leverage the existing pre-registered buffers that io_uring provides. User space can reference these later to achieve zero-copy IO.
User space must register an empty fixed buffer table with io_uring in order for the kernel to make use of it.
Signed-off-by: Keith Busch <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
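From user space, the "empty fixed buffer table" can be set up with sparse buffer registration; a minimal liburing sketch, assuming eight slots are enough for the consumer:

	#include <stdio.h>
	#include <liburing.h>

	int main(void)
	{
		struct io_uring ring;
		int ret;

		ret = io_uring_queue_init(8, &ring, 0);
		if (ret < 0)
			return 1;

		/*
		 * Register an empty (sparse) fixed buffer table with 8 slots; the
		 * kernel side can then install its own bvecs into unused slots.
		 */
		ret = io_uring_register_buffers_sparse(&ring, 8);
		if (ret < 0)
			fprintf(stderr, "sparse buffer registration failed: %d\n", ret);

		io_uring_queue_exit(&ring);
		return ret < 0;
	}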
|
| #
ff92d824 |
| 27-Feb-2025 |
Keith Busch <[email protected]> |
io_uring/rw: move fixed buffer import to issue path
Registered buffers may depend on a linked command, which makes the prep path too early to import. Move to the issue path when the node is actually needed like all the other users of fixed buffers.
Signed-off-by: Keith Busch <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
|
| #
2a61e638 |
| 27-Feb-2025 |
Keith Busch <[email protected]> |
io_uring/rw: move buffer_select outside generic prep
Cleans up the generic rw prep to not require the do_import flag. Use a different prep function for callers that might need buffer select.
Based-on-a-patch-by: Jens Axboe <[email protected]> Signed-off-by: Keith Busch <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
|
| #
5d309914 |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring: combine buffer lookup and import
Registered buffers are currently imported in two steps: first we look up an rsrc node, then use it to set up the iterator. The first part is usually done at the prep stage, and the import happens whenever it's needed. As we want to defer binding to a node so that it works with linked requests, combine both steps into a single helper.
Reviewed-by: Keith Busch <[email protected]> Signed-off-by: Pavel Begunkov <[email protected]> Reviewed-by: Ming Lei <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
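A hypothetical shape of the combined helper: look up the rsrc node and build the iterator from it in one call, so both can be deferred to issue time. Every name below carries a _sketch suffix because it is illustrative, not the actual kernel helper:

	/* Sketch: the two former steps (node lookup, then import) fused together. */
	static int io_import_reg_buf_sketch(struct io_kiocb *req, struct iov_iter *iter,
					    u64 buf_addr, size_t len, int ddir,
					    unsigned issue_flags)
	{
		struct io_rsrc_node *node;

		node = io_find_buf_node_sketch(req, issue_flags);	/* step 1: lookup */
		if (!node)
			return -EFAULT;
		return io_import_fixed_sketch(ddir, iter, node, buf_addr, len);	/* step 2: import */
	}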
|
| #
7a9b0d69 |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: open code io_prep_rw_setup()
Open code io_prep_rw_setup() into its only caller; it doesn't provide any meaningful abstraction anymore.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/61ba72e2d46119db71f27ab908018e6a6cd6c064.1740425922.git.asml.silence@gmail.com [axboe: fold in 'ret' being unused fix] Signed-off-by: Jens Axboe <[email protected]>
|
| #
99fab047 |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: extract helper for iovec import
Split a helper out of __io_import_rw_buffer() that handles vectored buffers. I'll need it for registered vectored buffers, but it also looks cleaner, especially with parameters being properly named.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/075470cfb24be38709d946815f35ec846d966f41.1740425922.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
74c94249 |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: rename io_import_iovec()
io_import_iovec() is not limited to iovecs but also imports buffers for normal reads and selected buffers, rename it for clarity.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/91cea59340b61a8f52dc7b8e720274577a25188c.1740425922.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
c72282dd |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: allocate async data in io_prep_rw()
rw always allocates async_data, so instead of doing that deeper in the prep call chain inside io_prep_rw_setup(), be a bit more explicit and do it early on in io_prep_rw().
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/5ead621051bc3374d1e8d96f816454906a6afd71.1740425922.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
52524b28 |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: shrink io_iov_compat_buffer_select_prep
Compat performance is not important and simplicity is more appreciated. Let's not be smart about it and use a simpler copy_from_user() instead of an access + __get_user pair.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/b334a3a5040efa424ded58e4d8a6ef2554324266.1740400452.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
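The simplification boils down to one bulk copy of the compat iovec instead of an access check plus per-field __get_user() calls; a sketch, with the surrounding function invented for illustration:

	#include <linux/uaccess.h>
	#include <linux/compat.h>

	/* Sketch: fetch a single compat iovec with one copy_from_user(). */
	static int fetch_compat_iovec(const struct compat_iovec __user *uvec,
				      struct compat_iovec *civ)
	{
		if (copy_from_user(civ, uvec, sizeof(*civ)))
			return -EFAULT;
		return 0;
	}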
|
| #
82d187d3 |
| 24-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: compile out compat param passing
Even when COMPAT is compiled out, we still have to pass ctx->compat to __import_iovec(). Replace the read through an indirection with a constant when the kernel doesn't support compat.
Signed-off-by: Pavel Begunkov <[email protected]> Reviewed-by: Anuj Gupta <[email protected]> Link: https://lore.kernel.org/r/2819df9c8533c36b46d7baccbb317a0ec89da6cd.1740400452.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
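The idea is that the ctx->compat read collapses to a compile-time constant when CONFIG_COMPAT is off, roughly as below; the helper name is an assumption:

	/*
	 * Sketch: with !CONFIG_COMPAT the compiler sees a constant false and
	 * can drop the compat branches entirely.
	 */
	static inline bool io_is_compat(struct io_ring_ctx *ctx)
	{
		return IS_ENABLED(CONFIG_COMPAT) && ctx->compat;
	}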
|
|
Revision tags: v6.14-rc4 |
|
| #
4614de74 |
| 19-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: clean up mshot forced sync mode
Move the code forcing synchronous execution of multishot read requests out of the more generic __io_read().
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/4ad7b928c776d1ad59addb9fff64ef2d1fc474d5.1739919038.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
74f3e875 |
| 19-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: move ki_complete init into prep
Initialise ki_complete during request prep stage, we'll depend on it not being reset during issue in the following patch.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/817624086bd5f0448b08c80623399919fda82f34.1739919038.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
4e43133c |
| 19-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: don't directly use ki_complete
We want to avoid checking ->ki_complete directly in the io_uring completion path. Fortunately we have only two callbacks, and the selection between them depends on the ring's constant flags, i.e. IOPOLL, so use that to infer the function.
Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/4eb4bdab8cbcf5bc87083f7047edc81e920ab83c.1739919038.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
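Roughly, the callback can be picked from the ring's constant setup flags instead of reading kiocb->ki_complete; the _stub names below are placeholders for the two real completion callbacks:

	/* Sketch: choose the rw completion handler from IORING_SETUP_IOPOLL. */
	static void io_req_rw_complete_sketch(struct io_kiocb *req, long res)
	{
		if (req->ctx->flags & IORING_SETUP_IOPOLL)
			io_complete_rw_iopoll_stub(req, res);
		else
			io_complete_rw_stub(req, res);
	}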
|
| #
67b0025d |
| 19-Feb-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: forbid multishot async reads
At the moment we can't sanely handle queuing an async request from a multishot context, so disable them. It shouldn't matter, as pollable files / sockets don't normally do async.
Patching it in __io_read() is not the cleanest way, but it's simpler than other options, so let's fix it there and clean up on top.
Cc: [email protected] Reported-by: chase xd <[email protected]> Fixes: fc68fcda04910 ("io_uring/rw: add support for IORING_OP_READ_MULTISHOT") Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/7d51732c125159d17db4fe16f51ec41b936973f8.1739919038.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|
| #
bcf8a029 |
| 17-Feb-2025 |
Caleb Sander Mateos <[email protected]> |
io_uring: introduce type alias for io_tw_state
In preparation for changing how io_tw_state is passed, introduce a type alias io_tw_token_t for struct io_tw_state *. This allows for changing the representation in one place, without having to update the many functions that just forward their struct io_tw_state * argument.
Also add a comment to struct io_tw_state to explain its purpose.
Signed-off-by: Caleb Sander Mateos <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
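The alias itself is a one-liner along these lines:

	/*
	 * Token standing in for struct io_tw_state *, so the representation can
	 * be changed in one place without touching every task-work callback
	 * signature.
	 */
	typedef struct io_tw_state *io_tw_token_t;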
|
|
Revision tags: v6.14-rc3, v6.14-rc2, v6.14-rc1 |
|
| #
d1fdab8c |
| 28-Jan-2025 |
Pavel Begunkov <[email protected]> |
io_uring/rw: simplify io_rw_recycle()
Instead of freeing iovecs in case of IO_URING_F_UNLOCKED in io_rw_recycle(), leave it be and rely on the core io_uring code to call io_readv_writev_cleanup() later. This way the iovec will get recycled and we can clean up io_rw_recycle() and kill io_rw_iovec_free().
Signed-off-by: Pavel Begunkov <[email protected]> Reviewed-by: Gabriel Krisman Bertazi <[email protected]> Link: https://lore.kernel.org/r/14f83b112eb40078bea18e15d77a4f99fc981a44.1738087204.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
|