|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1 |
|
| #
8fa7292f |
| 05-Apr-2025 |
Thomas Gleixner <[email protected]> |
treewide: Switch/rename to timer_delete[_sync]()
timer_delete[_sync]() replaces del_timer[_sync](). Convert the whole tree over and remove the historical wrapper inlines.
Conversion was done with coccinelle plus manual fixups where necessary.
Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
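As a hedged illustration of the rename (the driver context here is hypothetical, only the timer API names come from the patch):

    #include <linux/timer.h>

    static struct timer_list example_timer;    /* hypothetical example timer */

    static void example_teardown(void)
    {
            /* Before: del_timer(&example_timer);      */
            /* Before: del_timer_sync(&example_timer); */
            timer_delete(&example_timer);       /* may return while the handler still runs */
            timer_delete_sync(&example_timer);  /* waits for a running handler to finish */
    }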
|
|
Revision tags: v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2 |
|
| #
41b996ce |
| 04-Feb-2025 |
David Howells <[email protected]> |
rxrpc: Fix call state set to not include the SERVER_SECURING state
The RXRPC_CALL_SERVER_SECURING state doesn't really belong with the other states in the call's state set as the other states govern the call's Rx/Tx phase transition and govern when packets can and can't be received or transmitted. The "Securing" state doesn't actually govern the reception of packets and would need to be split depending on whether or not we've received the last packet yet (to mirror RECV_REQUEST/ACK_REQUEST).
The "Securing" state is more about whether or not we can start forwarding packets to the application as recvmsg will need to decode them and the decoding can't take place until the challenge/response exchange has completed.
Fix this by removing the RXRPC_CALL_SERVER_SECURING state from the state set and, instead, using a flag, RXRPC_CALL_CONN_CHALLENGING, to track whether or not we can queue the call for reception by recvmsg() or notify the kernel app that data is ready. In the event that we've already received all the packets, the connection event handler will poke the app layer in the appropriate manner.
Also there's a race whereby the app layer sees the last packet before rxrpc has managed to end the rx phase and change the state to one amenable to allowing a reply. Fix this by queuing the packet after calling rxrpc_end_rx_phase().
Fixes: 17926a79320a ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both") Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: Simon Horman <[email protected]> cc: [email protected] Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
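A minimal sketch of the gating described above, assuming rxrpc's internal struct rxrpc_call with its flags word; the helper name is illustrative, only RXRPC_CALL_CONN_CHALLENGING comes from the patch:

    static void example_notify_app(struct rxrpc_call *call)
    {
            /* Data may arrive and be buffered during the challenge/response
             * exchange, but recvmsg() must not see it until it can be decoded. */
            if (test_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags))
                    return;

            example_queue_for_recvmsg(call);    /* hypothetical helper */
    }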
|
|
Revision tags: v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2 |
|
| #
7c482665 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Implement RACK/TLP to deal with transmission stalls [RFC8985]
When an rxrpc call is in its transmission phase and is sending a lot of packets, stalls occasionally occur that cause severe performance degradation (eg. increasing the transmission time for a 256MiB payload from 0.7s to 2.5s over a 10G link).
rxrpc already implements TCP-style congestion control [RFC5681] and this helps mitigate the effects, but occasionally we're missing a time event that deals with a missing ACK, leading to a stall until the RTO expires.
Fix this by implementing RACK/TLP in rxrpc.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Signed-off-by: Jakub Kicinski <[email protected]>
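For orientation only, the probe timeout (PTO) that drives a tail-loss probe in RFC 8985 is roughly the following; units, constants and names are illustrative, not rxrpc's actual code:

    #include <linux/types.h>
    #include <linux/time64.h>

    /* PTO = 2 * SRTT, plus a worst-case delayed-ACK allowance when only
     * one packet is in flight (RFC 8985 s7.2). */
    static u64 example_tlp_pto_us(u64 srtt_us, bool one_packet_in_flight)
    {
            u64 pto_us = 2 * srtt_us;

            if (one_packet_in_flight)
                    pto_us += 200 * USEC_PER_MSEC;  /* illustrative WCDelAckT */
            return pto_us;
    }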
|
| #
b40ef2b8 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Manage RTT per-call rather than per-peer
Manage the determination of RTT on a per-call (ie. per-RPC op) basis rather than on a per-peer basis, averaging across all calls going to that peer. The problem is that the RTT measurements from the initial packets on a call may be off because the server may do some setting up (such as getting a lock on a file) before accepting the rest of the data in the RPC and, further, the RTT may be affected by server-side file operations, for instance if a large amount of data is being written or read.
Note: When handling the FS.StoreData-type RPCs, for example, the server uses the userStatus field in the header of ACK packets as supplementary flow control to aid in managing this. AF_RXRPC does not yet support this, but it should be added.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Signed-off-by: Jakub Kicinski <[email protected]>
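A minimal sketch of a per-call smoothed-RTT tracker in the style of RFC 6298; the struct and field names are illustrative, not rxrpc's:

    #include <linux/types.h>

    struct example_call_rtt {
            u32 srtt_us;        /* smoothed RTT in microseconds */
            u32 rttvar_us;      /* RTT variance */
    };

    static void example_rtt_sample(struct example_call_rtt *r, u32 sample_us)
    {
            u32 diff;

            if (!r->srtt_us) {              /* first sample on this call */
                    r->srtt_us = sample_us;
                    r->rttvar_us = sample_us / 2;
                    return;
            }
            diff = r->srtt_us > sample_us ? r->srtt_us - sample_us
                                          : sample_us - r->srtt_us;
            r->rttvar_us = (3 * r->rttvar_us + diff) / 4;    /* 3/4 old + 1/4 |err| */
            r->srtt_us   = (7 * r->srtt_us + sample_us) / 8; /* 7/8 old + 1/8 new */
    }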
|
| #
a2ea9a90 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Use irq-disabling spinlocks between app and I/O thread
Where a spinlock is used by both the application thread and the I/O thread, use irq-disabling locking so that an interrupt taken on the app thread doesn't also slow down the I/O thread.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Signed-off-by: Jakub Kicinski <[email protected]>
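The pattern, shown on a hypothetical lock:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);   /* shared by app and I/O thread */

    static void example_app_path(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&example_lock, flags);    /* was spin_lock() */
            /* ... manipulate state shared with the I/O thread ... */
            spin_unlock_irqrestore(&example_lock, flags);
    }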
|
| #
fe24a549 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Send jumbo DATA packets
Send jumbo DATA packets if the path-MTU probing using padded PING ACK packets shows up sufficient capacity to do so. This allows larger chunks of data to be sent without reducing the retryability as the subpackets in a jumbo packet can also be retransmitted individually.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Signed-off-by: Jakub Kicinski <[email protected]>
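A back-of-envelope sketch of the sizing decision; all constants and names are illustrative, not the wire format:

    #define EXAMPLE_SUBPKT_PAYLOAD  1412    /* illustrative subpacket payload */
    #define EXAMPLE_SUBPKT_OVERHEAD 32      /* illustrative per-subpacket header */

    /* How many subpackets fit in one jumbo DATA packet for this path MTU? */
    static unsigned int example_jumbo_subpackets(unsigned int path_mtu)
    {
            unsigned int n = path_mtu /
                    (EXAMPLE_SUBPKT_PAYLOAD + EXAMPLE_SUBPKT_OVERHEAD);

            return n ? n : 1;       /* never less than a single subpacket */
    }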
|
| #
9b052c6b |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Use the new rxrpc_tx_queue struct to more efficiently process ACKs
With the change in the structure of the transmission buffer to store buffers in bunches of 32 or 64 (BITS_PER_LONG) we can place sets of per-buffer flags into the rxrpc_tx_queue struct rather than storing them in rxrpc_tx_buf, thereby vastly increasing efficiency when assessing the SACK table in an ACK packet.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
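A sketch of why word-wide flag sets pay off when walking a SACK table; the struct and names are illustrative:

    #include <linux/bitops.h>

    struct example_txqueue {
            unsigned long acked;    /* one bit per buffer in this queue */
    };

    static unsigned int example_note_sacks(struct example_txqueue *tq,
                                           unsigned long sack_mask)
    {
            /* One AND/NOT plus a popcount covers up to BITS_PER_LONG buffers,
             * instead of a separate test per tx buffer. */
            unsigned long newly_acked = sack_mask & ~tq->acked;

            tq->acked |= newly_acked;
            return hweight_long(newly_acked);
    }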
|
| #
b341a026 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Implement progressive transmission queue struct
We need to scan the buffers in the transmission queue occasionally when processing ACKs, but the transmission queue is currently a linked list of transmission buffers which, when we eventually expand the Tx window to 8192 packets, will be very slow to walk.
Instead, pull the fields we need to examine a lot (last sent time, retransmitted flag) into a new struct rxrpc_txqueue and make each one hold an array of 32 or 64 packets.
The transmission queue is then a list of these structs, each pointing to a contiguous set of packets. Scanning is then a lot faster as the flags and timestamps are concentrated in the CPU dcache.
The transmission timestamps are stored as a number of microseconds from a base ktime to reduce memory requirements. This should be fine provided we manage to transmit an entire buffer within an hour.
This will make implementing RACK-TLP [RFC8985] easier as it will be less costly to scan the transmission buffers.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
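Condensed to its shape, the structure described above might look like this sketch (the real struct rxrpc_txqueue carries more fields than shown):

    #include <linux/types.h>
    #include <linux/bits.h>

    struct example_txqueue {
            struct example_txqueue  *next;                  /* next batch in the queue */
            unsigned int            qbase;                  /* seq number of slot 0 */
            unsigned long           retransmitted;          /* per-buffer flag bits */
            u32                     sent_at[BITS_PER_LONG]; /* us offsets from a base ktime */
            struct rxrpc_txbuf      *bufs[BITS_PER_LONG];   /* the actual buffers */
    };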
|
| #
9e3cccd1 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Fix CPU time starvation in I/O thread
Starvation can happen in the rxrpc I/O thread because it goes back to the top of the I/O loop after it does any one thing without trying to give any other connection or call CPU time. Also, because it processes one call packet at a time, it tries to do the retransmission loop after each ACK without checking to see if there are other ACKs already in the queue that can update the SACK state.
Fix this by:
(1) Add a received-packet queue on each call.
(2) Distribute packets from the master Rx queue to the individual call, conn and error queues, 'poking' calls to add them to the attend queue, as the first thing done in the I/O thread.
(3) Go through all the attention-seeking connections and calls before going back to the top of the I/O thread. Each queue is extracted as a whole and then worked through, so that new additions insert themselves into the now-empty queue for the next pass (see the sketch below).
(4) Make the call event handler go through all the packets currently on the call's rx_queue before transmitting and retransmitting DATA packets.
(5) Drop the skb argument from the call event handler as this is now replaced with the rx_queue. Instead, keep track of whether we received a packet or an ACK for the tests that used to rely on that.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
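The fairness pattern in (3) might be sketched as follows, with illustrative names throughout — detach the queue whole, service everything on it, only then loop:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct example_work {
            struct list_head link;
    };

    void example_service(struct example_work *w);   /* hypothetical handler */

    static void example_io_thread_pass(struct list_head *attend_queue,
                                       spinlock_t *lock)
    {
            LIST_HEAD(batch);

            spin_lock_irq(lock);
            list_splice_init(attend_queue, &batch); /* take the whole queue at once */
            spin_unlock_irq(lock);

            while (!list_empty(&batch)) {
                    struct example_work *w =
                            list_first_entry(&batch, struct example_work, link);

                    list_del_init(&w->link);
                    example_service(w);
            }
            /* Anything queued meanwhile waits for the next pass, so no one
             * item can monopolise the I/O thread. */
    }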
|
| #
29e03ec7 |
| 04-Dec-2024 |
David Howells <[email protected]> |
rxrpc: Use umin() and umax() rather than min_t()/max_t() where possible
Use umin() and umax() rather than min_t()/max_t() where the type specified is an unsigned type.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
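The substitution in miniature, on hypothetical window fields:

    #include <linux/minmax.h>
    #include <linux/types.h>

    static u32 example_tx_window(u32 rwind, u32 cwnd)
    {
            /* was: return min_t(u32, rwind, cwnd); */
            return umin(rwind, cwnd);   /* no explicit type argument needed */
    }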
|
|
Revision tags: v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7 |
|
| #
ba4e1038 |
| 03-May-2024 |
David Howells <[email protected]> |
rxrpc: Fix congestion control algorithm
Make the following fixes to the congestion control algorithm:
(1) Don't vary the cwnd starting value by the size of RXRPC_TX_SMSS since that's currently held constant - set to the size of a jumbo subpacket payload so that we can create jumbo packets on the fly. The current code invariably picks 3 as the starting value.
Further, the starting cwnd needs to be an even number because we ack every other packet, so set it to 4.
(2) Don't cut ssthresh when we see an ACK come from the peer with a receive window (rwind) less than ssthresh. ssthresh keeps track of characteristics of the connection whereas rwind may be reduced by the peer for any reason - and may be reduced to 0.
Fixes: 1fc4fa2ac93d ("rxrpc: Fix congestion management") Fixes: 0851115090a3 ("rxrpc: Reduce ssthresh to peer's receive window") Signed-off-by: David Howells <[email protected]> Suggested-by: Simon Wilkinson <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected] Reviewed-by: Jeffrey Altman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
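A sketch of the two rules, with illustrative struct and names:

    #include <linux/minmax.h>

    #define EXAMPLE_INITIAL_CWND 4  /* even, because every other packet is ACK'd */

    struct example_cong {
            unsigned int cwnd;
            unsigned int ssthresh;
            unsigned int tx_allowed;
    };

    static void example_on_ack_rwind(struct example_cong *c, unsigned int rwind)
    {
            /* Clamp transmission by the peer's receive window, but leave
             * ssthresh alone: rwind can shrink (even to 0) for reasons that
             * say nothing about path congestion. */
            c->tx_allowed = umin(c->cwnd, rwind);
    }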
|
|
Revision tags: v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3 |
|
| #
153f90a0 |
| 30-Jan-2024 |
David Howells <[email protected]> |
rxrpc: Use ktimes for call timeout tracking and set the timer lazily
Track the call timeouts as ktimes rather than jiffies as the latter's granularity is too high and only set the timer at the end of the event handling function.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: "David S. Miller" <[email protected]> cc: Eric Dumazet <[email protected]> cc: Jakub Kicinski <[email protected]> cc: Paolo Abeni <[email protected]> cc: [email protected] cc: [email protected]
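The lazy-arming idea, sketched with illustrative fields — pick the soonest ktime deadline once at the end of the event handler and arm a single timer for it:

    #include <linux/ktime.h>
    #include <linux/timer.h>
    #include <linux/jiffies.h>

    struct example_call {
            ktime_t           ack_at;         /* example deadline fields */
            ktime_t           expect_rx_by;
            struct timer_list timer;
    };

    static void example_set_timer(struct example_call *call, ktime_t now)
    {
            ktime_t next = call->expect_rx_by;

            if (ktime_before(call->ack_at, next))
                    next = call->ack_at;
            if (ktime_after(next, now))
                    mod_timer(&call->timer,
                              jiffies + usecs_to_jiffies(ktime_us_delta(next, now)));
    }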
|
| #
41b7fa15 |
| 02-Feb-2024 |
David Howells <[email protected]> |
rxrpc: Fix counting of new acks and nacks
Fix the counting of new acks and nacks when parsing a packet - something that is used in congestion control.
As the code stands, it merely notes if there are any nacks whereas what we really should do is compare the previous SACK table to the new one, assuming we get two successive ACK packets with nacks in them. However, we really don't want to do that if we can avoid it as the tables might not correspond directly as one may be shifted from the other - something that will only get harder to deal with once extended ACK tables come into full use (with a capacity of up to 8192).
Instead, count the number of nacks shifted out of the old SACK, the number of nacks retained in the portion still active and the number of new acks and nacks in the new table then calculate what we need.
Note this ends up a bit of an estimate as the Rx protocol allows acks to be withdrawn by the receiver and packets requested to be retransmitted.
Fixes: d57a3a151660 ("rxrpc: Save last ACK's SACK table rather than marking txbufs") Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: "David S. Miller" <[email protected]> cc: Eric Dumazet <[email protected]> cc: Jakub Kicinski <[email protected]> cc: Paolo Abeni <[email protected]> cc: [email protected] cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
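As a toy model of the overlapping-window comparison (one bit per packet, 1 = acked, LSB = lowest sequence; purely illustrative, not the rxrpc code):

    #include <linux/bitops.h>

    static void example_count_transitions(unsigned long old_sacks,
                                          unsigned long new_sacks,
                                          unsigned int shift,
                                          unsigned int *newly_acked,
                                          unsigned int *withdrawn_acks)
    {
            /* Align the old table with the new one, then count 0->1
             * transitions (fresh acks) and 1->0 transitions (acks the
             * receiver withdrew, which the Rx protocol permits). */
            unsigned long overlap = old_sacks >> shift;

            *newly_acked    = hweight_long(new_sacks & ~overlap);
            *withdrawn_acks = hweight_long(~new_sacks & overlap);
    }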
|
|
Revision tags: v6.8-rc2, v6.8-rc1, v6.7 |
|
| #
4fc68c4c |
| 05-Jan-2024 |
David Howells <[email protected]> |
rxrpc: Fix skbuff cleanup of call's recvmsg_queue and rx_oos_queue
Fix rxrpc_cleanup_ring() to use rxrpc_purge_queue() rather than skb_queue_purge() so that the count of outstanding skbuffs is correctly updated when a failed call is cleaned up.
Without this, rmmod may hang waiting for rxrpc_n_rx_skbs to become zero.
Fixes: 5d7edbc9231e ("rxrpc: Get rid of the Rx ring") Reported-by: Marc Dionne <[email protected]> Signed-off-by: David Howells <[email protected]> cc: "David S. Miller" <[email protected]> cc: Eric Dumazet <[email protected]> cc: Jakub Kicinski <[email protected]> cc: Paolo Abeni <[email protected]> cc: [email protected] cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
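The distinction, sketched; the counter parameter is merely illustrative of rxrpc_n_rx_skbs-style accounting:

    #include <linux/skbuff.h>

    static void example_purge_queue(struct sk_buff_head *queue, atomic_t *n_rx_skbs)
    {
            struct sk_buff *skb;

            /* skb_queue_purge() frees the skbs but bypasses the module's
             * outstanding-skb counter; dequeue-and-account instead so the
             * rmmod-time drain check can reach zero. */
            while ((skb = skb_dequeue(queue)) != NULL) {
                    atomic_dec(n_rx_skbs);
                    kfree_skb(skb);
            }
    }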
|
|
Revision tags: v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7 |
|
| #
72904d7b |
| 19-Oct-2023 |
David Howells <[email protected]> |
rxrpc, afs: Allow afs to pin rxrpc_peer objects
Change rxrpc's API such that:
(1) A new function, rxrpc_kernel_lookup_peer(), is provided to look up an rxrpc_peer record for a remote address and a corresponding function, rxrpc_kernel_put_peer(), is provided to dispose of it again.
(2) When setting up a call, the rxrpc_peer object used during a call is now passed in rather than being set up by rxrpc_connect_call(). For afs, this meant passing it to rxrpc_kernel_begin_call() rather than the full address (the service ID then has to be passed in as a separate parameter).
(3) A new function, rxrpc_kernel_remote_addr(), is added so that afs can get a pointer to the transport address for display purposes, and another, rxrpc_kernel_remote_srx(), to gain a pointer to the full rxrpc address.
(4) The function to retrieve the RTT from a call, rxrpc_kernel_get_srtt(), is then altered to take a peer. This now returns the RTT or -1 if there are insufficient samples.
(5) Rename rxrpc_kernel_get_peer() to rxrpc_kernel_call_get_peer().
(6) Provide a new function, rxrpc_kernel_get_peer(), to get a ref on a peer the caller already has.
This allows the afs filesystem to pin the rxrpc_peer records that it is using, allowing faster lookups and pointer comparisons rather than comparing sockaddr_rxrpc contents. It also makes it easier to get hold of the RTT. The following changes are made to afs:
(1) The addr_list struct's addrs[] elements now hold a peer struct pointer and a service ID rather than a sockaddr_rxrpc.
(2) When displaying the transport address, rxrpc_kernel_remote_addr() is used.
(3) The port arg is removed from afs_alloc_addrlist() since it's always overridden.
(4) afs_merge_fs_addr4() and afs_merge_fs_addr6() do peer lookup and may now return an error that must be handled.
(5) afs_find_server() now takes a peer pointer to specify the address.
(6) afs_find_server(), afs_compare_fs_alists() and afs_merge_fs_addr[46]{} now do peer pointer comparison rather than address comparison.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
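A hedged usage sketch of the reworked kernel API from afs's point of view — the lookup signature is assumed and argument lists are abbreviated, so treat this as shape, not gospel:

    #include <net/af_rxrpc.h>

    static int example_pin_server(struct socket *sock,
                                  struct sockaddr_rxrpc *srx,
                                  struct rxrpc_peer **_peer)
    {
            struct rxrpc_peer *peer;

            peer = rxrpc_kernel_lookup_peer(sock, srx, GFP_KERNEL); /* (1): pin */
            if (!peer)
                    return -ENOMEM;

            /* (2): calls are then started against the pinned peer, e.g.
             *      rxrpc_kernel_begin_call(sock, peer, ...);
             * servers compare by peer pointer, not sockaddr contents. */
            *_peer = peer;
            return 0;
    }

    /* ... and on address-list release: rxrpc_kernel_put_peer(peer); */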
|
|
Revision tags: v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1 |
|
| #
db099c62 |
| 28-Apr-2023 |
David Howells <[email protected]> |
rxrpc: Fix timeout of a call that hasn't yet been granted a channel
afs_make_call() calls rxrpc_kernel_begin_call() to begin a call (which may get stalled in the background waiting for a connection to become available); it then calls rxrpc_kernel_set_max_life() to set the timeouts - but that starts the call timer so the call timer might then expire before we get a connection assigned - leading to the following oops if the call stalled:
BUG: kernel NULL pointer dereference, address: 0000000000000000 ... CPU: 1 PID: 5111 Comm: krxrpcio/0 Not tainted 6.3.0-rc7-build3+ #701 RIP: 0010:rxrpc_alloc_txbuf+0xc0/0x157 ... Call Trace: <TASK> rxrpc_send_ACK+0x50/0x13b rxrpc_input_call_event+0x16a/0x67d rxrpc_io_thread+0x1b6/0x45f ? _raw_spin_unlock_irqrestore+0x1f/0x35 ? rxrpc_input_packet+0x519/0x519 kthread+0xe7/0xef ? kthread_complete_and_exit+0x1b/0x1b ret_from_fork+0x22/0x30
Fix this by noting the timeouts in struct rxrpc_call when the call is created. The timer will be started when the first packet is transmitted.
It shouldn't be possible to trigger this directly from userspace through AF_RXRPC as sendmsg() will return EBUSY if the call is in the waiting-for-conn state after it dropped out of the wait due to a signal.
Fixes: 9d35d880e0e4 ("rxrpc: Move client call connection to the I/O thread") Reported-by: Marc Dionne <[email protected]> Signed-off-by: David Howells <[email protected]> cc: "David S. Miller" <[email protected]> cc: Eric Dumazet <[email protected]> cc: Jakub Kicinski <[email protected]> cc: Paolo Abeni <[email protected]> cc: [email protected] cc: [email protected] cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
|
|
Revision tags: v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5 |
|
| #
48380368 |
| 29-Mar-2023 |
Peter Zijlstra <[email protected]> |
Change DEFINE_SEMAPHORE() to take a number argument
Fundamentally semaphores are a counted primitive, but DEFINE_SEMAPHORE() does not expose this and explicitly creates a binary semaphore.
Change DEFINE_SEMAPHORE() to take a number argument and use that in the few places that open-coded it using __SEMAPHORE_INITIALIZER().
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> [mcgrof: add some tribal knowledge about why some folks prefer binary semaphores over mutexes] Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Davidlohr Bueso <[email protected]> Signed-off-by: Luis Chamberlain <[email protected]>
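After the change, the count is spelled out at the definition site; the names below are hypothetical:

    #include <linux/semaphore.h>

    static DEFINE_SEMAPHORE(example_binary_sem, 1);  /* what the old macro implied */
    static DEFINE_SEMAPHORE(example_counted_sem, 4); /* up to four concurrent holders */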
|
|
Revision tags: v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8 |
|
| #
a33395ab |
| 29-Nov-2022 |
David Howells <[email protected]> |
rxrpc: Fix overwaking on call poking
If an rxrpc call is given a poke, it will get woken up unconditionally, even if there's already a poke pending (for which there will have been a wake) or if the call refcount has gone to 0.
Fix this by only waking the call if it is still referenced and if it doesn't already have a poke pending.
Fixes: 15f661dc95da ("rxrpc: Implement a mechanism to send an event notification to a call") Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
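The guarded wake, reduced to a sketch (the real fix also takes a conditional ref on the call first; the names here are illustrative):

    #include <linux/bitops.h>
    #include <linux/sched.h>

    #define EXAMPLE_POKE_PENDING 0  /* illustrative flag bit */

    struct example_call {
            unsigned long       flags;
            struct task_struct  *io_thread;
    };

    static void example_poke_call(struct example_call *call)
    {
            /* Only the first poke since the last service pass needs a wake;
             * repeat pokes would just re-wake the I/O thread for nothing. */
            if (!test_and_set_bit(EXAMPLE_POKE_PENDING, &call->flags))
                    wake_up_process(call->io_thread);
    }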
|
|
Revision tags: v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2 |
|
| #
5bbf9533 |
| 17-Oct-2022 |
David Howells <[email protected]> |
rxrpc: De-atomic call->ackr_window and call->ackr_nr_unacked
call->ackr_window doesn't need to be atomic as ACK generation and ACK transmission are now done in the same thread, so drop the atomic64 handling and split it into two separate members.
Similarly, call->ackr_nr_unacked doesn't need to be atomic now either.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
|
|
Revision tags: v6.1-rc1 |
|
| #
223f5901 |
| 12-Oct-2022 |
David Howells <[email protected]> |
rxrpc: Convert call->recvmsg_lock to a spinlock
Convert call->recvmsg_lock to a spinlock as it's only ever write-locked.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
|
| #
01644a1f |
| 11-Jan-2023 |
David Howells <[email protected]> |
rxrpc: Fix wrong error return in rxrpc_connect_call()
Fix rxrpc_connect_call() to return -ENOMEM rather than 0 if it fails to look up a peer.
This generated a smatch warning: net/rxrpc/call_object.c:303 rxrpc_connect_call() warn: missing error code 'ret'
I think this also fixes a syzbot-found bug:
rxrpc: Assertion failed - 1(0x1) == 11(0xb) is false ------------[ cut here ]------------ kernel BUG at net/rxrpc/call_object.c:645!
where the call being put is in the wrong state - as would be the case if we failed to clear up correctly after the error in rxrpc_connect_call().
Fixes: 9d35d880e0e4 ("rxrpc: Move client call connection to the I/O thread") Reported-by: kernel test robot <[email protected]> Reported-by: Dan Carpenter <[email protected]> Reported-and-tested-by: [email protected] Signed-off-by: David Howells <[email protected]> Link: https://lore.kernel.org/r/[email protected]/ Reviewed-by: Alexander Duyck <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
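The shape of the fix, as an illustrative excerpt rather than the actual function body:

    #include <linux/errno.h>

    static int example_connect(struct example_call *call)
    {
            struct rxrpc_peer *peer;
            int ret = 0;

            peer = example_lookup_peer(call);   /* hypothetical lookup */
            if (!peer) {
                    ret = -ENOMEM;  /* was left at 0, so the caller saw success */
                    goto error;
            }
            /* ... */
    error:
            return ret;
    }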
|
| #
9d35d880 |
| 19-Oct-2022 |
David Howells <[email protected]> |
rxrpc: Move client call connection to the I/O thread
Move the connection setup of client calls to the I/O thread so that a whole load of locking and barrierage can be eliminated. This necessitates the app thread waiting for connection to complete before it can begin encrypting data.
This also completes the fix for a race that exists between call connection and call disconnection whereby the data transmission code adds the call to the peer error distribution list after the call has been disconnected (say by the rxrpc socket getting closed).
The fix is to complete the process of moving call connection, data transmission and call disconnection into the I/O thread and thus forcibly serialising them.
Note that the issue may predate the overhaul to an I/O thread model that was included in the merge window for v6.2, but the timing is very much changed by the change given below.
Fixes: cf37b5987508 ("rxrpc: Move DATA transmission into call processor work item") Reported-by: [email protected] Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
|
| #
96b4059f |
| 27-Oct-2022 |
David Howells <[email protected]> |
rxrpc: Remove call->state_lock
All the setters of call->state are now in the I/O thread and thus the state lock is now unnecessary.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
|
| #
1bab27af |
| 21-Oct-2022 |
David Howells <[email protected]> |
rxrpc: Set up a connection bundle from a call, not rxrpc_conn_parameters
Use the information now stored in struct rxrpc_call to configure the connection bundle and thence the connection, rather than using the rxrpc_conn_parameters struct.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
|
| #
57af281e |
| 06-Oct-2022 |
David Howells <[email protected]> |
rxrpc: Tidy up abort generation infrastructure
Tidy up the abort generation infrastructure in the following ways:
(1) Create an enum and string mapping table to list the reasons an abort might be generated in tracing.
(2) Replace the 3-char string with the values from (1) in the places that use that to log the abort source. This gets rid of a memcpy() in the tracepoint.
(3) Subsume the rxrpc_rx_eproto tracepoint with the rxrpc_abort tracepoint and use values from (1) to indicate the trace reason.
(4) Always make a call to an abort function at the point of the abort rather than stashing the values into variables and using goto to get to a place where it is reported. The C optimiser will collapse the calls together as appropriate. The abort functions return a value that can be returned directly if appropriate.
Note that this extends into afs also at the points where that generates an abort. To aid with this, the afs sources need to #define RXRPC_TRACE_ONLY_DEFINE_ENUMS before including the rxrpc tracing header because they don't have access to the rxrpc internal structures that some of the tracepoints make use of.
Signed-off-by: David Howells <[email protected]> cc: Marc Dionne <[email protected]> cc: [email protected]
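The enum-plus-string-table pattern from items (1) and (2), in the tracepoint style rxrpc uses elsewhere; the values here are invented for illustration:

    enum example_abort_reason {
            example_abort_bad_checksum,
            example_abort_unsupported_security,
    };

    #define EXAMPLE_ABORT_REASONS                                   \
            EM(example_abort_bad_checksum,          "bad-csum")     \
            E_(example_abort_unsupported_security,  "unsup-sec")

    /* The EM()/E_() macros expand once to build the enum<->string map for
     * tracing, so there's no 3-char string (and no memcpy()) in the
     * tracepoint itself. */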
|