Revision tags: v6.15
# 3a089881 | 22-May-2025 | Jens Axboe <[email protected]>
io_uring/net: only retry recv bundle for a full transfer
If a shorter than expected transfer was seen, a partial buffer will have been filled. For that case it isn't sane to attempt to fill more into the bundle before posting a completion, as that will cause a gap in the received data.
Check if the iterator has hit zero, and only allow the bundle operation to continue if that is the case.
Also ensure that when putting finished buffers, only the current transfer is accounted for. Otherwise too many buffers may be put for a short transfer.
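A minimal sketch of that check (illustrative only, not the actual diff; the helper name is made up, iov_iter_count() is the stock iterator accessor, and the msghdr naming is assumed from context):

  static bool io_recv_bundle_done(struct msghdr *msg)
  {
          /* only keep the bundle going if the last receive consumed the
           * whole iterator; a partial fill must be completed now, or later
           * data would land after a gap in the buffers */
          return iov_iter_count(&msg->msg_iter) == 0;
  }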
Link: https://github.com/axboe/liburing/issues/1409
Cc: [email protected]
Fixes: 7c71a0af81ba ("io_uring/net: improve recv bundles")
Signed-off-by: Jens Axboe <[email protected]>
Revision tags: v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1
# 81ed1801 | 31-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: avoid import_ubuf for regvec send
With registered buffers we set up iterators in helpers like io_import_fixed(), and there is no need for an import_ubuf() before that. It was fine while we used real pointers for offset calculation, but that's no longer the case since the introduction of ublk kernel buffers.
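Roughly, the setup path becomes something like the following (a sketch under assumed names sr/kmsg; IORING_RECVSEND_FIXED_BUF is the existing uapi flag for registered send buffers):

  /* sketch: only map a user pointer when the buffer really is one;
   * registered buffers get their iterator from the fixed-import helper,
   * and a ublk kernel buffer "address" must not go through import_ubuf() */
  if (!(sr->flags & IORING_RECVSEND_FIXED_BUF))
          ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
                            &kmsg->msg.msg_iter);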
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/9b2de1a50844f848f62c8de609b494971033a6b9.1743437358.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# fbe1a30c | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: import zc ubuf earlier
io_send_setup() already sets up the iterator for IORING_OP_SEND_ZC, so we don't need to repeat that at issue time. Move it all together with the mem accounting at prep time, which is more consistent with how the non-zc version does it.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/eb54f007c493ad9f4ca89aa8e715baf30d83fb88.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# ad3f6cc4 | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: set sg_from_iter in advance
In preparation for the next patch, set the ->sg_from_iter callback at request prep time.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/5fe2972701df3bacdb3d760bce195fa640bee201.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 49dbce56 | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: clusterise send vs msghdr branches
We have multiple branches at prep time for send vs sendmsg handling; put them together so that the variant handling is more localised.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/33abf666d9ded74cba4da2f0d9fe58e88520dffe.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 63b16e4f | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: unify sendmsg setup with zc
io_sendmsg_zc_setup() duplicates parts of io_sendmsg_setup(), and the only difference between them is that the former supports vectored registered buffers, with nothing zerocopy-specific about it. Merge them together; we want regular sendmsg to eventually support fixed buffers either way.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/7e5ec40f9dc93355399dc6fa0cbc8b31f0b20ac5.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# c55e2845 | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: combine sendzc flags writes
Save an instruction / a trip to the cache by assigning some of the sendzc flags together.
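The shape of the change, with placeholder names rather than the real flag values:

  /* before (two stores to the same field): */
  zc->flags = flags_from_sqe;
  /* ... later in the same function ... */
  zc->flags |= extra_flag;

  /* after (one combined store): */
  zc->flags = flags_from_sqe | extra_flag;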
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/c519d6f406776c3be3ef988a8339a88e45d1ffd9.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 5f364117 | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: open code io_net_vec_assign()
Get rid of io_net_vec_assign() by open coding it into its only caller.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/19191c34b5cfe1161f7eeefa6e785418ea9ad56d.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# a20b8631 | 28-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: open code io_sendmsg_copy_hdr()
io_sendmsg_setup() is trivial and io_sendmsg_copy_hdr() doesn't add any good abstraction; open code one into the other.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/565318ce585665e88053663eeee5178d2c15692f.1743202294.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 04491732 | 27-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: account memory for zc sendmsg
Account pinned pages for IORING_OP_SENDMSG_ZC, just as we do for IORING_OP_SEND_ZC and as net/ does for MSG_ZEROCOPY.
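A hedged sketch of what that accounting looks like (assuming the same io_notif_account_mem() helper the SEND_ZC prep path uses; the exact call site and the length accounted in the real patch may differ):

  /* charge the pinned user memory against the memlock limit for
   * SENDMSG_ZC as well, mirroring the SEND_ZC prep path */
  ret = io_notif_account_mem(sr->notif, sr->len);
  if (unlikely(ret))
          return ret;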
Fixes: 493108d95f146 ("io_uring/net: zerocopy sendmsg")
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/4f00f67ca6ac8e8ed62343ae92b5816b1e0c9c4b.1743086313.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 6889ae1b | 27-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: fix io_req_post_cqe abuse by send bundle
[ 114.987980][ T5313] WARNING: CPU: 6 PID: 5313 at io_uring/io_uring.c:872 io_req_post_cqe+0x12e/0x4f0
[ 114.991597][ T5313] RIP: 0010:io_req_post_cqe+0x12e/0x4f0
[ 115.001880][ T5313] Call Trace:
[ 115.002222][ T5313]  <TASK>
[ 115.007813][ T5313]  io_send+0x4fe/0x10f0
[ 115.009317][ T5313]  io_issue_sqe+0x1a6/0x1740
[ 115.012094][ T5313]  io_wq_submit_work+0x38b/0xed0
[ 115.013223][ T5313]  io_worker_handle_work+0x62a/0x1600
[ 115.013876][ T5313]  io_wq_worker+0x34f/0xdf0
As the comment states, io_req_post_cqe() should only be used by multishot requests, i.e. REQ_F_APOLL_MULTISHOT, which bundled sends are not. Add a flag signifying whether a request wants to post multiple CQEs. Eventually REQ_F_APOLL_MULTISHOT should imply the new flag, but that's left out for simplicity.
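Conceptually, the guard looks like this (a sketch; the flag name here stands in for the one added by the patch):

  /* io_req_post_cqe() may only be used by requests that are flagged to
   * post multiple CQEs; bundled sends now set such a flag explicitly
   * instead of relying on REQ_F_APOLL_MULTISHOT */
  if (WARN_ON_ONCE(!(req->flags & REQ_F_MULTISHOT)))
          return false;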
Cc: [email protected]
Fixes: a05d1f625c7aa ("io_uring/net: support bundles for send")
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/8b611dbb54d1cd47a88681f5d38c84d0c02bc563.1743067183.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 73b6dacb | 25-Mar-2025 | Caleb Sander Mateos <[email protected]>
io_uring/net: use REQ_F_IMPORT_BUFFER for send_zc
Instead of a bool field in struct io_sr_msg, use REQ_F_IMPORT_BUFFER to track whether io_send_zc() has already imported the buffer. This flag already serves a similar purpose for sendmsg_zc and {read,write}v_fixed.
Signed-off-by: Caleb Sander Mateos <[email protected]>
Suggested-by: Pavel Begunkov <[email protected]>
Reviewed-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
Revision tags: v6.14
# 67c007d6 | 22-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: fix sendzc double notif flush
refcount_t: underflow; use-after-free.
WARNING: CPU: 0 PID: 5823 at lib/refcount.c:28 refcount_warn_saturate+0x15a/0x1d0 lib/refcount.c:28
RIP: 0010:refcount_warn_saturate+0x15a/0x1d0 lib/refcount.c:28
Call Trace:
 <TASK>
 io_notif_flush io_uring/notif.h:40 [inline]
 io_send_zc_cleanup+0x121/0x170 io_uring/net.c:1222
 io_clean_op+0x58c/0x9a0 io_uring/io_uring.c:406
 io_free_batch_list io_uring/io_uring.c:1429 [inline]
 __io_submit_flush_completions+0xc16/0xd20 io_uring/io_uring.c:1470
 io_submit_flush_completions io_uring/io_uring.h:159 [inline]
Before the blamed commit, sendzc relied on io_req_msg_cleanup() to clear REQ_F_NEED_CLEANUP, so after the following snippet the request will never hit the core io_uring cleanup path.
  io_notif_flush();
  io_req_msg_cleanup();
The easiest fix is to null the notification. io_send_zc_cleanup() can still be called afterwards, but that is tolerated.
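A minimal sketch of that fix (zc stands for the request's send-zc state):

  /* flush once and drop the reference; a later io_send_zc_cleanup()
   * then sees no notification left to flush */
  io_notif_flush(zc->notif);
  zc->notif = NULL;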
Reported-by: [email protected] Tested-by: [email protected] Fixes: cc34d8330e036 ("io_uring/net: don't clear REQ_F_NEED_CLEANUP unconditionally") Signed-off-by: Pavel Begunkov <[email protected]> Link: https://lore.kernel.org/r/e1306007458b8891c88c4f20c966a17595f766b0.1742643795.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <[email protected]>
# 8e3100fc | 21-Mar-2025 | Caleb Sander Mateos <[email protected]>
io_uring/net: only import send_zc buffer once
io_send_zc() guards its call to io_send_zc_import() with if (!done_io) in an attempt to avoid calling it redundantly on the same req. However, if the initial non-blocking issue returns -EAGAIN, done_io will stay 0. This causes the subsequent issue to unnecessarily re-import the buffer.
Add an explicit flag "imported" to io_sr_msg to track if its buffer has already been imported. Clear the flag in io_send_zc_prep(). Call io_send_zc_import() and set the flag in io_send_zc() if it is unset.
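In outline, the pattern is (a sketch rather than a verbatim diff):

  /* io_send_zc_prep(): every new submission starts un-imported */
  sr->imported = false;

  /* io_send_zc(): import exactly once, even across -EAGAIN retries */
  if (!sr->imported) {
          ret = io_send_zc_import(req, issue_flags);
          if (unlikely(ret))
                  return ret;
          sr->imported = true;
  }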
Signed-off-by: Caleb Sander Mateos <[email protected]>
Fixes: 54cdcca05abd ("io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr")
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
# cc34d833 | 20-Mar-2025 | Jens Axboe <[email protected]>
io_uring/net: don't clear REQ_F_NEED_CLEANUP unconditionally
io_req_msg_cleanup() relies on the fact that io_netmsg_recycle() will always fully recycle, but that may not be the case if the msg cache was already full. To ensure that normal cleanup always gets run, let io_netmsg_recycle() deal with clearing the relevant cleanup flags, as it knows exactly when that should be done.
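Roughly, the recycle path becomes responsible for the flags (a sketch assuming the existing alloc-cache helpers):

  /* only drop the cleanup/async-data flags if the msghdr actually made
   * it back into the cache; otherwise keep REQ_F_NEED_CLEANUP set so
   * the normal cleanup path still runs */
  if (io_alloc_cache_put(&req->ctx->netmsg_cache, kmsg))
          req->flags &= ~(REQ_F_ASYNC_DATA | REQ_F_NEED_CLEANUP);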
Cc: [email protected]
Reported-by: David Wei <[email protected]>
Fixes: 75191341785e ("io_uring/net: add iovec recycling")
Signed-off-by: Jens Axboe <[email protected]>
Revision tags: v6.14-rc7, v6.14-rc6
# 146acfd0 | 08-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring: rely on io_prep_reg_vec for iovec placement
All vectored reg buffer users should use io_import_reg_vec() for iovec imports, since iovec placement is that function's responsibility and callers shouldn't know much about it. Drop the offset parameter from io_prep_reg_vec() and calculate it inside.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/08ed87ca4bbc06724373b6ce06f36b703fe60c4e.1741457480.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# d291fb65 | 08-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring: introduce io_prep_reg_iovec()
iovecs that are turned into registered buffers are imported in a special way, with an offset, so that later we can do an in-place translation. Add a helper function that takes care of it.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/7de2ecb9ed5efc3c5cf320232236966da5ad4ccc.1741457480.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 5027d024 | 08-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring: unify STOP_MULTISHOT with IOU_OK
IOU_OK means that the request ownership is now handed back to core io_uring and it has to complete it using the result provided in req->cqe. Same is true for multishot and IOU_STOP_MULTISHOT.
Rename it to IOU_COMPLETE to avoid confusion, and use it for both modes.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/e6a5b2edb0eb9558acb1c8f1db38ac45fee95491.1741453534.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 7a9dcb05 | 08-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring: return -EAGAIN to continue multishot
Multishot errors can be mapped 1:1 to normal errors, but they are not identical. This leads to a peculiar situation where every multishot request has to check in what context it's run and return different codes.
Unify them, starting with the EAGAIN / IOU_ISSUE_SKIP_COMPLETE (EIOCBQUEUED) pair, which means that core io_uring still owns the request and it should be retried. In the multishot case it naturally just continues to poll; otherwise it might poll, use io-wq, or do any other kind of allowed blocking. Introduce IOU_RETRY, aliased to -EAGAIN, for that.
Apart from the obvious upsides, multishot can now also check for misuse of IOU_ISSUE_SKIP_COMPLETE.
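The new code is essentially an alias (sketch of the definition):

  enum {
          /* core io_uring still owns the request and it should be retried:
           * multishot keeps polling, everything else may block or go to
           * io-wq */
          IOU_RETRY       = -EAGAIN,
  };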
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/da117b79ce72ecc3ab488c744e29fae9ba54e23b.1741453534.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 0396ad37 | 07-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring: cap cached iovec/bvec size
Bvecs can be large; put an arbitrary limit on the max vector size the cache can hold.
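The gist of it (sketch; the constant name and value are assumptions, and io_vec_free() is the series' vec-freeing helper):

  /* don't keep arbitrarily large iovec/bvec allocations in the cache */
  if (iv->nr > IO_VEC_CACHE_SOFT_CAP)    /* assumed cap, e.g. 256 entries */
          io_vec_free(iv);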
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/823055fa6628daa24bbc9cd77c2da87e9a1e1e32.1741362889.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 23371eac | 07-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: implement vectored reg bufs for zctx
Add support for vectored registered buffers for send zc.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/e484052875f862d2dca99f0f8c04407c1d51a1c1.1741362889.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# be7052a4 | 07-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: convert to struct iou_vec
Convert net.c to use struct iou_vec.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/6437b57dabed44eca708c02e390529c7ed211c78.1741362889.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 9fcb349f | 07-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: pull vec alloc out of msghdr import
I'll need more control over iovec management; move io_net_import_vec() out of io_msg_copy_hdr().
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/9600ea6300f620e65d39da481c22605ddc898850.1741362889.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
# 17523a82 | 07-Mar-2025 | Pavel Begunkov <[email protected]>
io_uring/net: combine msghdr copy
Call the compat version from inside of io_msg_copy_hdr() and don't duplicate it in callers.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/25795660f7b31f9273911c99f495d9c2b169ecda.1741362889.git.asml.silence@gmail.com
[axboe: fixup msg pointer vs variable braino in io_msg_copy_hdr()]
Signed-off-by: Jens Axboe <[email protected]>
Revision tags: v6.14-rc5
# 4afc332b | 27-Feb-2025 | Arnd Bergmann <[email protected]>
io_uring/net: fix build warning for !CONFIG_COMPAT
A code rework resulted in an uninitialized return code when COMPAT mode is disabled:
io_uring/net.c:722:6: error: variable 'ret' is used uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
  722 |         if (io_is_compat(req->ctx)) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
io_uring/net.c:736:15: note: uninitialized use occurs here
  736 |         if (unlikely(ret))
      |                      ^~~
Since io_is_compat() turns into a compile-time 'false', the #ifdef here is completely unnecessary, and removing it avoids the warning.
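Illustration of the pattern being relied on (simplified, not the kernel source; the copy helpers are made-up names): io_is_compat() is compile-time false without COMPAT, presumably via an IS_ENABLED(CONFIG_COMPAT) check, so the compiler discards the branch and 'ret' is assigned on every path that survives:

  if (io_is_compat(req->ctx)) {
          /* compiled out entirely when CONFIG_COMPAT=n -- no #ifdef
           * needed, and the "ret used uninitialized" path disappears */
          ret = copy_compat_hdr(req);    /* illustrative call */
  } else {
          ret = copy_native_hdr(req);    /* illustrative call */
  }
  if (unlikely(ret))
          return ret;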
Fixes: 51e158d40589 ("io_uring/net: unify *mshot_prep calls with compat")
Signed-off-by: Arnd Bergmann <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>