| dd4e429c | 18-Oct-2021 |
Ferruh Yigit <[email protected]> |
ethdev: move jumbo frame offload check to library
Setting an MTU larger than RTE_ETHER_MTU requires jumbo frame support, and the application should enable the jumbo frame offload for it.
When the jumbo frame offload is not enabled by the application but an MTU larger than RTE_ETHER_MTU is requested, there are two options: either fail or enable the jumbo frame offload implicitly.
Enabling the jumbo frame offload implicitly is what many drivers chose, since setting a large MTU value already implies it, and this improves usability.
This patch moves this logic from the drivers to the library, both to reduce duplicated code in the drivers and to make the behaviour more visible.
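As a rough illustration of the behaviour described above, a minimal sketch of the library-side check follows; the helper name is hypothetical and the handling is simplified relative to the real ethdev code.

    /*
     * Hypothetical helper sketching the check described above: if the
     * requested MTU exceeds RTE_ETHER_MTU, turn on the jumbo frame Rx
     * offload implicitly instead of failing; clear it otherwise.
     */
    static void
    eth_dev_apply_mtu(struct rte_eth_dev *dev, uint16_t mtu)
    {
        if (mtu > RTE_ETHER_MTU)
            dev->data->dev_conf.rxmode.offloads |=
                DEV_RX_OFFLOAD_JUMBO_FRAME;
        else
            dev->data->dev_conf.rxmode.offloads &=
                ~DEV_RX_OFFLOAD_JUMBO_FRAME;

        dev->data->mtu = mtu;
    }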
Signed-off-by: Ferruh Yigit <[email protected]>
Reviewed-by: Andrew Rybchenko <[email protected]>
Reviewed-by: Rosen Xu <[email protected]>
Acked-by: Ajit Khaparde <[email protected]>
Acked-by: Somnath Kotur <[email protected]>
Acked-by: Konstantin Ananyev <[email protected]>
Acked-by: Huisong Li <[email protected]>
|
| 1bb4a528 | 18-Oct-2021 |
Ferruh Yigit <[email protected]> |
ethdev: fix max Rx packet length
There is confusion about setting the max Rx packet length; this patch aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the 'uint32_t max_rx_pkt_len' field of the config struct 'struct rte_eth_conf'.
Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way: they store the set values in different variables, which makes it hard to figure out which one to use, and having two different methods for related functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet frame, while 'max_rx_pkt_len' is the size of the whole Ethernet frame. The difference is the Ethernet frame overhead, which may vary from device to device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo frames, which adds further confusion, and some APIs and PMDs already disregard this documented behaviour.
* For the jumbo-frame-enabled case, 'max_rx_pkt_len' is a mandatory field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as a parameter, and both save the result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this, 'max_rx_pkt_len' is renamed to 'mtu', and it is always valid, independent of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user request and should be used only within the configure function; the result should be stored in '(struct rte_eth_dev)->data->mtu'. After that point both the application and the PMD use the MTU from this variable.
When the application does not provide an MTU during 'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
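For context, a hedged application-side usage sketch, assuming the post-patch 'rxmode.mtu' field; the port ID, queue counts and the 9000-byte value are illustrative only.

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_port_with_mtu(uint16_t port_id)
    {
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));
        /* Request the desired MTU directly; when left at 0 the library
         * falls back to RTE_ETHER_MTU as described above. */
        port_conf.rxmode.mtu = 9000;

        /* One Rx queue and one Tx queue, purely for illustration. */
        return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    }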
Additional clarification is done on the scattered Rx configuration, in relation to the MTU and the Rx buffer size. The MTU is used to configure the device for the physical Rx/Tx size limitation; the Rx buffer is where Rx packets are stored, and many PMDs use the mbuf data buffer size as the Rx buffer size. PMDs compare the MTU against the Rx buffer size to decide whether to enable scattered Rx. If scattered Rx is not supported by the device, an MTU bigger than the Rx buffer size should fail.
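A hedged sketch of the PMD-side decision just described; the function and variable names are placeholders rather than any specific driver's code.

    /*
     * Placeholder sketch: decide whether scattered Rx is needed by
     * comparing the full frame size implied by the MTU with the Rx
     * buffer size, failing if the device cannot scatter.
     */
    static int
    decide_scattered_rx(struct rte_eth_dev *dev, uint32_t rx_buf_size,
                        uint64_t rx_offload_capa)
    {
        uint32_t frame_size = dev->data->mtu +
            RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        if (frame_size <= rx_buf_size) {
            dev->data->scattered_rx = 0;
            return 0;
        }

        /* The frame cannot fit into a single Rx buffer, so scattered Rx
         * is required; fail if the device does not support it. */
        if ((rx_offload_capa & DEV_RX_OFFLOAD_SCATTER) == 0)
            return -EINVAL;

        dev->data->scattered_rx = 1;
        return 0;
    }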
Signed-off-by: Ferruh Yigit <[email protected]>
Acked-by: Ajit Khaparde <[email protected]>
Acked-by: Somnath Kotur <[email protected]>
Acked-by: Huisong Li <[email protected]>
Acked-by: Andrew Rybchenko <[email protected]>
Acked-by: Konstantin Ananyev <[email protected]>
Acked-by: Rosen Xu <[email protected]>
Acked-by: Hyong Youb Kim <[email protected]>
|
| 6507e67a | 28-Sep-2021 |
Julien Meunier <[email protected]> |
net/ixgbe: fix queue release
In the vector implementation, during tear-down, the mbufs not drained from the RxQ and TxQ are freed by an algorithm which assumed that the number of descriptors is a power of 2 (max_desc). Based on this hypothesis, the algorithm uses a bitmask to detect an index overflow during the iteration and restart the loop from 0.
However, ixgbe has no such power-of-2 requirement for the number of descriptors in the RxQ / TxQ. The only requirement is that the number be correctly aligned.
If a user configured a number of descriptors which is not a power of 2, then during tear-down it was possible to end up in an infinite loop and never reach the loop exit condition.
By removing the bitmask and changing the loop method, we avoid this issue and allow the user to configure an RxQ / TxQ size which is not a power of 2.
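A minimal sketch of the loop style described above (illustrative, not the exact ixgbe code): the index wraps by comparing against the real descriptor count instead of masking, so any ring size works.

    /*
     * Illustrative ring walk: free every mbuf still left in the software
     * ring, wrapping the index explicitly so the descriptor count does
     * not have to be a power of two.
     */
    static void
    free_undrained_mbufs(struct rte_mbuf **sw_ring, uint16_t nb_desc,
                         uint16_t start)
    {
        uint16_t i = start;
        uint16_t count;

        for (count = 0; count < nb_desc; count++) {
            if (sw_ring[i] != NULL) {
                rte_pktmbuf_free_seg(sw_ring[i]);
                sw_ring[i] = NULL;
            }
            /* Explicit wrap-around instead of '& (nb_desc - 1)'. */
            if (++i == nb_desc)
                i = 0;
        }
    }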
Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx") Cc: [email protected]
Signed-off-by: Julien Meunier <[email protected]>
Acked-by: Haiyue Wang <[email protected]>
|
| ff4e52ef | 11-Oct-2021 |
Viacheslav Galaktionov <[email protected]> |
ethdev: fix representor port ID search by name
The patch is required for all PMDs which do not provide representor info on the representor itself.
The function rte_eth_representor_id_get() is used in eth_representor_cmp(), which is required by the ethdev class iterator to search for an ethdev port ID by name (the representor case). Before this patch the function was called on the representor itself and tried to get representor info from it to match.
Searching for a port ID by name is used after hotplug to find the port ID of the just-plugged device.
Getting a list of representors from a representor does not make sense. Instead, a backer device should be used.
To this end, extend the rte_eth_dev_data structure to include the port ID of the backing device for representors.
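A hedged sketch of how a PMD might record this, assuming the new 'backer_port_id' field described above; the helper name is hypothetical.

    /*
     * Hypothetical helper: when initializing a representor, remember the
     * backing device's port ID so representor info can later be fetched
     * from the backer rather than from the representor itself.
     */
    static void
    representor_set_backer(struct rte_eth_dev *repr_dev,
                           const struct rte_eth_dev *backer_dev,
                           uint16_t representor_id)
    {
        repr_dev->data->representor_id = representor_id;
        repr_dev->data->backer_port_id = backer_dev->data->port_id;
    }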
Signed-off-by: Viacheslav Galaktionov <[email protected]>
Signed-off-by: Andrew Rybchenko <[email protected]>
Acked-by: Haiyue Wang <[email protected]>
Acked-by: Beilei Xing <[email protected]>
Reviewed-by: Xueming Li <[email protected]>
Acked-by: Viacheslav Ovsiienko <[email protected]>
|
| a4ae7f51 | 22-Sep-2021 |
Yunjian Wang <[email protected]> |
net/ixgbe: fix memzone leak on queue re-configure
Normally, when closing the device, the queue memzone should be freed. But the memzone is not freed when device setup ops run like:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ixgbe_dev_rx_queue_release
rte_eth_dev_close
-->ixgbe_dev_close
---->ixgbe_dev_free_queues
------>ixgbe_dev_rx_queue_release (not called because nb_rx_queues and nb_tx_queues are 0)
And when the queue number is changed to a smaller size, the memzones of the higher queue indexes are lost. This leads to a memory leak. So we should release the memzone when releasing queues.
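A hedged sketch of the idea described above; 'example_rx_queue' and the helper name are placeholders, not the ixgbe structures.

    /*
     * Placeholder queue structure and release helper: keep the descriptor
     * ring memzone pointer in the queue and free it on release, so that
     * re-configuring with fewer queues does not leak the ring memory.
     */
    struct example_rx_queue {
        const struct rte_memzone *mz;   /* HW descriptor ring memzone */
        struct rte_mbuf **sw_ring;      /* per-descriptor mbuf table */
    };

    static void
    example_rx_queue_release(struct example_rx_queue *rxq)
    {
        if (rxq == NULL)
            return;

        rte_free(rxq->sw_ring);
        rte_memzone_free(rxq->mz);  /* previously leaked on re-configure */
        rte_free(rxq);
    }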
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues") Cc: [email protected]
Signed-off-by: Yunjian Wang <[email protected]>
Acked-by: Haiyue Wang <[email protected]>
|