Revision tags: v22.03, v22.03-rc4, v22.03-rc3, v22.03-rc2

# 13cd6d5c | 23-Feb-2022 | Alexander Kozyrev <[email protected]>
ethdev: bring in async indirect actions operations
Queue-based flow rules management mechanism is suitable not only for flow rules creation/destruction, but also for speeding up other types of Flow API management. Indirect action object operations may be executed asynchronously as well. Provide async versions for all indirect action operations, namely: rte_flow_async_action_handle_create, rte_flow_async_action_handle_destroy and rte_flow_async_action_handle_update.
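A minimal usage sketch, assuming port_id refers to a port already configured with one flow queue via rte_flow_configure(); the counter action and attribute values are illustrative:

#include <rte_flow.h>

uint16_t port_id = 0;                                 /* illustrative port */
struct rte_flow_op_attr op_attr = { .postpone = 0 };  /* do not batch, submit now */
struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
struct rte_flow_action_count count_conf = { 0 };
struct rte_flow_action action = {
	.type = RTE_FLOW_ACTION_TYPE_COUNT,
	.conf = &count_conf,
};
struct rte_flow_error error;
struct rte_flow_action_handle *handle;

/* Enqueue the indirect action creation on flow queue 0. */
handle = rte_flow_async_action_handle_create(port_id, 0, &op_attr,
					      &indir_conf, &action, NULL, &error);
/* The completion is later retrieved with rte_flow_pull() on the same queue;
 * rte_flow_async_action_handle_destroy()/update() follow the same pattern. */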
Signed-off-by: Alexander Kozyrev <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

# 197e820c | 23-Feb-2022 | Alexander Kozyrev <[email protected]>
ethdev: bring in async queue-based flow rules operations
A new, faster, queue-based flow rules management mechanism is needed for applications offloading rules inside the datapath. This asynchronous and lockless mechanism frees the CPU for further packet processing and reduces the performance impact of the flow rules creation/destruction on the datapath. Note that queues are not thread-safe and the queue should be accessed from the same thread for all queue operations. It is the responsibility of the app to sync the queue functions in case of multi-threaded access to the same queue.
The rte_flow_async_create() function enqueues a flow creation to the requested queue. It benefits from already configured resources and sets unique values on top of item and action templates. A flow rule is enqueued on the specified flow queue and offloaded asynchronously to the hardware. The function returns immediately to spare CPU for further packet processing. The application must invoke the rte_flow_pull() function to complete the flow rule operation offloading, to clear the queue, and to receive the operation status. The rte_flow_async_destroy() function enqueues a flow destruction to the requested queue.
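A rough sketch of the enqueue/poll flow, assuming port_id, the table, pattern[] and actions[] were prepared with the template API from the related patches; queue and index values are illustrative:

struct rte_flow_op_attr op_attr = { .postpone = 0 };
struct rte_flow_error error;
struct rte_flow *flow;

/* Enqueue a rule creation on flow queue 0; returns immediately. */
flow = rte_flow_async_create(port_id, 0, &op_attr, table,
			     pattern, 0 /* pattern template index */,
			     actions, 0 /* actions template index */,
			     NULL /* user_data */, &error);

/* Poll the queue to learn whether the enqueued operations succeeded. */
struct rte_flow_op_result results[32];
int n_done = rte_flow_pull(port_id, 0, results, RTE_DIM(results), &error);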
Signed-off-by: Alexander Kozyrev <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

# f076bcfb | 23-Feb-2022 | Alexander Kozyrev <[email protected]>
ethdev: add flow item/action templates
Treating every single flow rule as a completely independent and separate entity negatively impacts the flow rules insertion rate. Oftentimes in an application, many flow rules share a common structure (the same item mask and/or action list) so they can be grouped and classified together. This knowledge may be used as a source of optimization by a PMD/HW.
The pattern template defines common matching fields (the item mask) without values. The actions template holds a list of action types that will be used together in the same rule. The specific values for items and actions will be given only during the rule creation.
A table combines pattern and actions templates along with shared flow rule attributes (group ID, priority and traffic direction). This way a PMD/HW can prepare all the resources needed for efficient flow rules creation in the datapath. To avoid any hiccups due to memory reallocation, the maximum number of flow rules is defined at the table creation time.
The flow rule creation is done by selecting a table, a pattern template and an actions template (which are bound to the table), and setting unique values for the items and actions.
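A condensed sketch of the three-step setup described above (pattern template, actions template, table), assuming a configured port_id; sizes, group numbers and the chosen item/action types are illustrative:

struct rte_flow_error error;

/* Pattern template: match on ETH / IPv4 destination address (mask only). */
struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
struct rte_flow_item pt_items[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .mask = &(struct rte_flow_item_ipv4){ .hdr.dst_addr = RTE_BE32(0xffffffff) } },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_pattern_template *pt =
	rte_flow_pattern_template_create(port_id, &pt_attr, pt_items, &error);

/* Actions template: QUEUE action whose queue index is given per rule (zero mask). */
struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
struct rte_flow_action at_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_action at_masks[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_actions_template *at =
	rte_flow_actions_template_create(port_id, &at_attr, at_actions, at_masks, &error);

/* Table: binds the templates, fixes the rule capacity and the flow attributes. */
struct rte_flow_template_table_attr tbl_attr = {
	.flow_attr = { .group = 1, .ingress = 1 },
	.nb_flows = 1 << 16,
};
struct rte_flow_template_table *tbl =
	rte_flow_template_table_create(port_id, &tbl_attr, &pt, 1, &at, 1, &error);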
Signed-off-by: Alexander Kozyrev <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

# 4ff58b73 | 23-Feb-2022 | Alexander Kozyrev <[email protected]>
ethdev: introduce flow engine configuration
The flow rules creation/destruction at a large scale incurs a performance penalty and may negatively impact the packet processing when used as part of the datapath logic. This is mainly because software/hardware resources are allocated and prepared during the flow rule creation.
In order to optimize the insertion rate, the PMD may use hints provided by the application at the initialization phase. The rte_flow_configure() function allows the application to pre-allocate all the needed resources beforehand. These resources can then be used at a later stage without costly allocations. Every PMD may use only a subset of the hints and ignore the unused ones, or fail if the requested configuration is not supported.
The rte_flow_info_get() function retrieves information about the supported pre-configurable resources. Both of these functions must be called before any other usage of the flow API engine.
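A short sketch of the expected call order, assuming port_id is stopped and not yet using the flow engine; the resource counts are illustrative and must stay within the limits reported by rte_flow_info_get():

struct rte_flow_port_info port_info;
struct rte_flow_queue_info queue_info;
struct rte_flow_error error;

if (rte_flow_info_get(port_id, &port_info, &queue_info, &error) == 0) {
	struct rte_flow_port_attr port_attr = {
		.nb_counters = RTE_MIN(1024u, port_info.max_nb_counters),
	};
	struct rte_flow_queue_attr queue_attr = {
		.size = RTE_MIN(64u, queue_info.max_size),
	};
	const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };

	/* One flow queue, pre-allocated before any other flow API usage. */
	rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &error);
}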
Signed-off-by: Alexander Kozyrev <[email protected]> Acked-by: Ori Kam <[email protected]> Reviewed-by: Andrew Rybchenko <[email protected]>

Revision tags: v22.03-rc1

# f61490bd | 11-Feb-2022 | Sean Zhang <[email protected]>
ethdev: support GRE optional fields
Add flow pattern items and a header format for matching the optional fields (checksum/key/sequence) in the GRE header. The flags in the GRE item should be set correspondingly with the newly added items.
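A hedged sketch of a pattern using the new item; the struct and field names (rte_flow_item_gre_opt and the rte_gre_hdr_opt_* sub-headers) follow this patch and should be treated as illustrative:

/* Match GRE packets carrying key 0x1234; the K bit in the GRE item flags
 * must be set consistently with the presence of the key option. */
struct rte_flow_item_gre gre_spec = { .c_rsvd0_ver = RTE_BE16(0x2000) }; /* K bit */
struct rte_flow_item_gre gre_mask = { .c_rsvd0_ver = RTE_BE16(0x2000) };
struct rte_flow_item_gre_opt opt_spec = { .key.key = RTE_BE32(0x1234) };
struct rte_flow_item_gre_opt opt_mask = { .key.key = RTE_BE32(0xffffffff) };

struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_GRE, .spec = &gre_spec, .mask = &gre_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_GRE_OPTION, .spec = &opt_spec, .mask = &opt_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};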
Signed-off-by: Sean Zhang <[email protected]> Acked-by: Ori Kam <[email protected]>

Revision tags: v21.11, v21.11-rc4, v21.11-rc3, v21.11-rc2

# de39080b | 04-Nov-2021 | Gregory Etelson <[email protected]>
ethdev: fix variable length flow elements support
RTE flow API defines two flow element types - common and PMD private. Common RTE flow types are defined in rte_flow.h, while PMD private types exist inside a specific PMD only. An application can create a flow rule with PMD private items or actions. The RTE flow API restricts private PMD types to negative values.
The current implementation tried to use a negative PMD private item type value as an index into the rte_flow_desc_item[] array.
The patch restricts access to the rte_flow_desc_item[] and rte_flow_desc_action[] arrays to non-private PMD types only.
Fixes: 6cf72047332b ("ethdev: support flow elements with variable length")
Signed-off-by: Gregory Etelson <[email protected]> Reviewed-by: Ferruh Yigit <[email protected]>

Revision tags: v21.11-rc1

# 3a929df1 | 21-Oct-2021 | Jie Wang <[email protected]>
ethdev: support L2TPv2 and PPP protocols
Added flow pattern items and header formats of L2TPv2 and PPP.
Signed-off-by: Wenjun Wu <[email protected]> Signed-off-by: Jie Wang <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]> Reviewed-by: Ferruh Yigit <[email protected]>

# e1823e08 | 20-Oct-2021 | Thomas Monjalon <[email protected]>
ethdev: replace bit shifts with macros
The macros RTE_BIT32 and RTE_BIT64 are used to replace bit shifts. The macro UINT64_C is also used to replace remaining occurrences of ULL.
The bit shifts of ETH_RSS_LEVEL_* are kept for aesthetic reasons.
The API of rte_mtr and rte_tm uses enums for 64-bit variables. As they are enums, the unsigned bit macros cannot be used.
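For illustration, the conversion pattern looks like this (values are examples, not the actual ethdev definitions):

#include <rte_bitops.h>

#define EXAMPLE_RX_OFFLOAD_OLD  (1ULL << 3)   /* before this patch */
#define EXAMPLE_RX_OFFLOAD_NEW  RTE_BIT64(3)  /* after: expands to UINT64_C(1) << 3 */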
Signed-off-by: Thomas Monjalon <[email protected]> Reviewed-by: Andrew Rybchenko <[email protected]>

# dc4d860e | 20-Oct-2021 | Viacheslav Ovsiienko <[email protected]>
ethdev: introduce configurable flexible item
1. Introduction and Retrospective
Nowadays networks are evolving fast and wide, network structures are getting more and more complicated, and new application areas are emerging. To address these challenges, new network protocols are continuously being developed, considered by technical communities, adopted by industry and, eventually, implemented in hardware and software. The DPDK framework follows these common trends, and a glance at the RTE Flow API header shows that multiple new items have been introduced over the years since the initial release.
The new protocol adoption and implementation process is not straightforward and takes time; a new protocol passes development, consideration, adoption, and implementation phases. The industry tries to anticipate forthcoming network protocols; for example, many hardware vendors are implementing flexible and configurable network protocol parsers. As DPDK developers, could we anticipate the near future in the same fashion and introduce similar flexibility in the RTE Flow API?
Let's check what we already have merged in our project: there is the raw item (rte_flow_item_raw). At first glance it looks suitable, and we could try to implement flow matching on the header of some relatively new tunnel protocol, say the GENEVE header with variable length options. However, under further consideration, we run into the raw item limitations:
- only a fixed size network header can be represented
- the entire network header pattern of fixed format (header field offsets are fixed) must be provided
- the search for patterns is not robust (the wrong matches might be triggered), and actually is not supported by existing PMDs
- no explicitly specified relations with preceding and following items
- no tunnel hint support
As a result, implementing support for tunnel protocols like the aforementioned GENEVE with variable extra protocol options using the raw flow item becomes very complicated and would require multiple flows and multiple raw items chained in the same flow (by the way, no support for chained raw items was found in the implemented drivers).
This RFC introduces the dedicated flex item (rte_flow_item_flex) to handle matches with existing and new network protocol headers in a unified fashion.
2. Flex Item Life Cycle
Let's assume there are requirements to support a new network protocol with RTE Flows. What is given within the protocol specification:
- header format
- header length (can be variable, depending on options)
- potential presence of extra options following or included in the header
- the relations with preceding protocols; for example, GENEVE follows UDP, eCPRI can follow either UDP or an L2 header
- the relations with following protocols; for example, the next layer after a tunnel header can be L2 or L3
- whether the new protocol is a tunnel and the header is a splitting point between outer and inner layers
The supposed way to operate with flex item:
- application defines the header structures according to protocol specification
- application calls rte_flow_flex_item_create() with the desired configuration according to the protocol specification; it creates the flex item object over the specified ethernet device and prepares the PMD and underlying hardware to handle the flex item. On the creation call, the PMD backing the specified ethernet device returns an opaque handle identifying the object that has been created (see the signature sketch after this list)
- application uses the rte_flow_item_flex with obtained handle in the flows, the values/masks to match with fields in the header are specified in the flex item per flow as for regular items (except that pattern buffer combines all fields)
- flows with flex items match with packets in a regular fashion, the values and masks for the new protocol header match are taken from the flex items in the flows
- application destroys flows with flex items
- application calls rte_flow_flex_item_release() as part of the ethernet device API; it destroys the flex item object in the PMD and releases the engaged hardware resources
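A signature sketch of the two life-cycle calls above, assuming a valid port_id (error handling omitted; the configuration contents are covered in section 4):

struct rte_flow_item_flex_conf conf = { 0 };  /* filled per section 4 */
struct rte_flow_error error;
struct rte_flow_item_flex_handle *handle;

handle = rte_flow_flex_item_create(port_id, &conf, &error);
/* ... the handle is then referenced from rte_flow_item_flex entries in flow patterns ... */
rte_flow_flex_item_release(port_id, handle, &error);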
3. Flex Item Structure
The flex item structure is intended to be used as part of the flow pattern like regular RTE flow items and provides the mask and value to match with fields of the protocol the item was configured for.
struct rte_flow_item_flex {
	void *handle;
	uint32_t length;
	const uint8_t *pattern;
};
The handle is an opaque object maintained on a per-device basis by the underlying driver.
The protocol header fields are considered as bit fields; all offsets and widths are expressed in bits. The pattern is the buffer containing the bit concatenation of all the fields presented at item configuration time, in the same order and the same amount. If byte boundary alignment is needed, an application can use a dummy type field; this is just a kind of gap filler.
The length field specifies the pattern buffer length in bytes and is needed to allow rte_flow_copy() operations. The approach of multiple pattern pointers and lengths (per field) was considered and found clumsy - it seems much more suitable for the application to maintain a single structure within a single pattern buffer.
4. Flex Item Configuration
The flex item configuration consists of the following parts:
- header field descriptors:
  - next header
  - next protocol
  - sample to match
- input link descriptors
- output link descriptors
The field descriptors tell the driver and hardware what data should be extracted from the packet and then control the packet handling in the flow engine. Besides this, sample fields can be presented to match with patterns in the flows. Each field is a bit pattern. It has width, offset from the header beginning, mode of offset calculation, and offset related parameters.
The next header field is special: no data are actually taken from the packet, but its offset is used as a pointer to the next header in the packet; in other words, the next header offset specifies the size of the header being parsed by the flex item.
There is one more special field - next protocol. It specifies where the next protocol identifier is contained, and the packet data sampled from this field will be used to determine the next protocol header type to continue packet parsing. The next protocol field is like the eth_type field in the MAC (L2) header, or the proto field in IPv4/v6 headers.
The sample fields are used to represent the data to be sampled from the packet and then matched with established flows.
There are several methods supposed to calculate field offset in runtime depending on configuration and packet content:
- FIELD_MODE_FIXED - fixed offset. The bit offset from header beginning is permanent and defined by field_base configuration parameter.
- FIELD_MODE_OFFSET - the field bit offset is extracted from another header field (indirect offset field). The resulting field offset to match is calculated as:
field_base + (*offset_base & offset_mask) << offset_shift
This mode is useful to sample some extra options following the main header with field containing main header length. Also, this mode can be used to calculate offset to the next protocol header, for example - IPv4 header contains the 4-bit field with IPv4 header length expressed in dwords. One more example - this mode would allow us to skip GENEVE header variable length options.
- FIELD_MODE_BITMASK - the field bit offset is extracted from another header field (indirect offset field); the latter is considered as a bitmask containing some number of one bits. The resulting field offset to match is calculated as:
field_base + bitcount(*offset_base & offset_mask) << offset_shift
This mode would be useful to skip the GTP header and its extra options with specified flags.
- FIELD_MODE_DUMMY - dummy field, optionally used for byte boundary alignment in pattern. Pattern mask and data are ignored in the match. All configuration parameters besides field size and offset are ignored.
Note: "*" - means the indirect field offset is calculated and actual data are extracted from the packet by this offset (like data are fetched by pointer *p from memory).
The offset mode list can be extended by vendors according to hardware supported options.
The input link configuration section tells the driver after what protocols and at what conditions the flex item can follow. The input link specifies the preceding header pattern; for example, for GENEVE it can be a UDP item specifying a match on destination port with value 6081. The flex item can follow multiple header types, so multiple input links should be specified. At flow creation time, the item with one of the input link types should precede the flex item, and the driver will select the correct flex item settings depending on the actual flow pattern.
The output link configuration section tells the driver how to continue packet parsing after the flex item protocol. If multiple protocols can follow the flex item header the flex item should contain the field with the next protocol identifier and the parsing will be continued depending on the data contained in this field in the actual packet.
The flex item fields can participate in RSS hash calculation, the dedicated flag is present in the field description to specify what fields should be provided for hashing.
5. Flex Item Chaining
If there are multiple protocols supposed to be supported with flex items in a chained fashion - two or more flex items within the same flow, possibly neighbors in the pattern - it means the flex items are mutually referencing. In this case, the item that occurs first should be created with an empty output link list or with a list including existing items, and then the second flex item should be created referencing the first flex item as an input arc; drivers should adjust the item configuration.
Also, the hardware resources used by flex items to handle the packet can be limited. If there are multiple flex items that are supposed to be used within the same flow it would be nice to provide some hint for the driver that these two or more flex items are intended for simultaneous usage. The fields of items should be assigned with hint indices and these indices from two or more flex items supposed to be provided within the same flow should be the same as well. In other words, the field hint index specifies the group of fields that can be matched simultaneously within a single flow. If hint indices are specified, the driver will try to engage not overlapping hardware resources and provide independent handling of the field groups with unique indices. If the hint index is zero the driver assigns resources on its own.
6. Example of New Protocol Handling
Let's suppose we have the requirements to handle the new tunnel protocol that follows UDP header with destination port 0xFADE and is followed by MAC header. Let the new protocol header format be like this:
struct new_protocol_header {
	rte_be32 header_length; /* length in dwords, including options */
	rte_be32 specific0;     /* some protocol data, no intention */
	rte_be32 specific1;     /* to match in flows on these fields */
	rte_be32 crucial;       /* data of interest, match is needed */
	rte_be32 options[0];    /* optional protocol data, variable length */
};
The supposed flex item configuration:
struct rte_flow_item_flex_field field0 = {
	.field_mode = FIELD_MODE_DUMMY, /* Affects match pattern only */
	.field_size = 96,               /* three dwords from the beginning */
};
struct rte_flow_item_flex_field field1 = {
	.field_mode = FIELD_MODE_FIXED,
	.field_size = 32,  /* Field size is one dword */
	.field_base = 96,  /* Skip three dwords from the beginning */
};
struct rte_flow_item_udp spec0 = {
	.hdr = {
		.dst_port = RTE_BE16(0xFADE),
	}
};
struct rte_flow_item_udp mask0 = {
	.hdr = {
		.dst_port = RTE_BE16(0xFFFF),
	}
};
struct rte_flow_item_flex_link link0 = {
	.item = {
		.type = RTE_FLOW_ITEM_TYPE_UDP,
		.spec = &spec0,
		.mask = &mask0,
	},
};
struct rte_flow_item_flex_conf conf = {
	.next_header = {
		.tunnel = FLEX_TUNNEL_MODE_SINGLE,
		.field_mode = FIELD_MODE_OFFSET,
		.field_base = 0,
		.offset_base = 0,
		.offset_mask = 0xFFFFFFFF,
		.offset_shift = 2, /* Expressed in dwords, shift left by 2 */
	},
	.sample = {
		&field0,
		&field1,
	},
	.nb_samples = 2,
	.input_link[0] = &link0,
	.nb_inputs = 1,
};
Let's suppose we have created the flex item successfully, and PMD returned the handle 0x123456789A. We can use the following item pattern to match the crucial field in the packet with value 0x00112233:
struct new_protocol_header spec_pattern = {
	.crucial = RTE_BE32(0x00112233),
};
struct new_protocol_header mask_pattern = {
	.crucial = RTE_BE32(0xFFFFFFFF),
};
struct rte_flow_item_flex spec_flex = {
	.handle = 0x123456789A,
	.length = sizeof(struct new_protocol_header),
	.pattern = &spec_pattern,
};
struct rte_flow_item_flex mask_flex = {
	.length = sizeof(struct new_protocol_header),
	.pattern = &mask_pattern,
};
struct rte_flow_item item_to_match = {
	.type = RTE_FLOW_ITEM_TYPE_FLEX,
	.spec = &spec_flex,
	.mask = &mask_flex,
};
Signed-off-by: Viacheslav Ovsiienko <[email protected]> Acked-by: Ori Kam <[email protected]>

# 6cf72047 | 20-Oct-2021 | Gregory Etelson <[email protected]>
ethdev: support flow elements with variable length
Flow API provides RAW item type for packet patterns of variable length. The RAW item structure has fixed size members that describe the variable pattern length and methods to process it.
There is a new flow item with variable length coming - the flex item. In order to handle this item (and potentially other new ones with variable pattern length) in the flow copy and conversion routines, a helper function is introduced.
Signed-off-by: Gregory Etelson <[email protected]> Reviewed-by: Viacheslav Ovsiienko <[email protected]> Acked-by: Ori Kam <[email protected]>

# 1179f05c | 14-Oct-2021 | Ivan Malov <[email protected]>
ethdev: query proxy port to manage transfer flows
Not all DPDK ports in a given switching domain may have the privilege to manage "transfer" flows. Add an API to find a port with sufficient privileges by any port in the domain.
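A minimal sketch of the lookup, assuming port_id belongs to the switching domain of interest:

uint16_t proxy_port_id;
struct rte_flow_error error;

if (rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, &error) == 0) {
	/* "transfer" flow rules on behalf of port_id must be created
	 * through proxy_port_id. */
}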
Signed-off-by: Ivan Malov <[email protected]> Reviewed-by: Andrew Rybchenko <[email protected]> Acked-by: Ori Kam <[email protected]>

# 88caad25 | 13-Oct-2021 | Ivan Malov <[email protected]>
ethdev: add represented port action to flow API
For use in "transfer" flows. Supposed to send matching traffic to the entity represented by the given ethdev, at embedded switch level. Such an entity can be a network (via a network port), a guest machine (via a VF) or another ethdev in the same application.
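A small illustrative sketch of the action in a transfer rule; the target ethdev port number is arbitrary:

struct rte_flow_attr attr = { .transfer = 1 };
struct rte_flow_action_ethdev target = { .port_id = 1 }; /* e.g. a VF representor */
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &target },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};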
Signed-off-by: Ivan Malov <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

# 8edb6bc0 | 13-Oct-2021 | Ivan Malov <[email protected]>
ethdev: add port representor action to flow API
For use in "transfer" flows. Supposed to send matching traffic to the given ethdev (to the application), at embedded switch level.
Signed-off-by: Ivan Malov <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

# 49863ae2 | 13-Oct-2021 | Ivan Malov <[email protected]>
ethdev: add represented port item to flow API
For use in "transfer" flows. Supposed to match traffic entering the embedded switch from the entity represented by the given ethdev. Such an entity can be a network (via a network port), a guest machine (via a VF) or another ethdev in the same application.
Must not be combined with direction attributes.
Signed-off-by: Ivan Malov <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

# 081e42da | 13-Oct-2021 | Ivan Malov <[email protected]>
ethdev: add port representor item to flow API
For use in "transfer" flows. Supposed to match traffic entering the embedded switch from the given ethdev.
Must not be combined with direction attributes.
Signed-off-by: Ivan Malov <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Andrew Rybchenko <[email protected]>

Revision tags: v21.08, v21.08-rc4, v21.08-rc3, v21.08-rc2, v21.08-rc1, v21.05, v21.05-rc4, v21.05-rc3, v21.05-rc2

# 1d0b9c7d | 29-Apr-2021 | Gregory Etelson <[email protected]>
ethdev: fix integrity flow item
Add the integrity item definition to the rte_flow_desc_item array. The new entry allows building an RTE flow item from data stored in the rte_flow_item_integrity type.
Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")
Signed-off-by: Gregory Etelson <[email protected]> Acked-by: Viacheslav Ovsiienko <[email protected]> Acked-by: Ajit Khaparde <[email protected]> Acked-by: Ori Kam <[email protected]>

Revision tags: v21.05-rc1

# 9847fd12 | 19-Apr-2021 | Bing Zhao <[email protected]>
ethdev: introduce conntrack flow action and item
This commit introduces the conntrack action and item.
Usually the HW offloading is stateless. For some stateful offloading like a TCP connection, the HW module can provide full offloading without SW participation after the connection has been established.
The basic usage is that in the first flow rule the application should add the conntrack action and jump to the next flow table. In the following flow rule(s) of the next table, the application should use the conntrack item to match on the result.
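A rough sketch of that two-table usage; the conntrack context fields (struct rte_flow_action_conntrack) are elided because they are filled from the observed TCP handshake, and the group numbers are illustrative:

/* Group 0: pass connection traffic through the conntrack module, then jump. */
struct rte_flow_action_conntrack ct_conf = { 0 };  /* peer port, direction, TCP state elided */
struct rte_flow_action_jump jump = { .group = 1 };
struct rte_flow_action ct_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_CONNTRACK, .conf = &ct_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Group 1: match only packets the conntrack module declared valid. */
struct rte_flow_item_conntrack ct_ok = { .flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID };
struct rte_flow_item ct_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_CONNTRACK, .spec = &ct_ok, .mask = &ct_ok },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};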
A TCP connection has traffic in two directions. To set a conntrack action context correctly, the information of packets from both directions is required.
The conntrack action should be created on one ethdev port and supply the peer ethdev port as a parameter to the action. After the context is created, it can only be used between these two ethdev ports (dual-port mode) or on a single port. The application should modify the action via the API "rte_flow_action_handle_update" only before using it to create a flow rule with conntrack for the opposite direction. This will help the driver to recognize the direction of the flow to be created, especially in single-port mode, in which case the traffic from both directions will go through the same ethdev port if the application works as a "forwarding engine" rather than an end point. There is no need to call the update interface if the subsequent flow rules have nothing to be changed.
Query will be supported via the "rte_flow_action_handle_query" interface, about the current packet information and connection status. The field query capabilities depend on the HW.
For the packets received during the conntrack setup, it is suggested to re-inject the packets in order to make sure the conntrack module works correctly without missing any packet. Only the valid packets should pass the conntrack, packets with invalid TCP information, like out of window, or with invalid header, like malformed, should not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference: https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Thomas Monjalon <[email protected]>

# 4b61b877 | 19-Apr-2021 | Bing Zhao <[email protected]>
ethdev: introduce indirect flow action
Right now, rte_flow_shared_action_* APIs are used for some shared actions, like RSS, count. The shared action should be created before using it inside a flow. These shared actions sometimes are not really shared but just some indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. The direct (normal) actions that can be created and stored within a flow rule. Such an action is tied to its flow rule and cannot be reused.
2. The indirect action, in the past named shared action. It is created from a direct action, like count or RSS, and then used in the flow rules with an object handle. The PMD will take care of resolving the indirect action to the direct action when it is referenced.
The indirect action is accessed (update / query) w/o any flow rule, just via the action object handle. For example, when querying or resetting a counter, it could be done out of any flow using this counter, but only the handle of the counter action object is required. The indirect action object could be shared by different flows or used by a single flow, depending on the direct action type and the real-life requirements. The handle of an indirect action object is opaque and defined in each driver and possibly different per direct action type.
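A brief sketch of the handle-based usage, assuming a valid port_id and a counter as the illustrative direct action:

struct rte_flow_indir_action_conf conf = { .ingress = 1 };
struct rte_flow_action_count count = { 0 };
struct rte_flow_action count_action = {
	.type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count,
};
struct rte_flow_error error;

/* Create the indirect action object once, outside of any flow rule. */
struct rte_flow_action_handle *h =
	rte_flow_action_handle_create(port_id, &conf, &count_action, &error);

/* Reference it from flow rules through the INDIRECT action type. */
struct rte_flow_action rule_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = h },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Query (and update) go through the handle only, without any flow rule. */
struct rte_flow_query_count stats;
rte_flow_action_handle_query(port_id, h, &stats, &error);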
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new "rte_flow_action_handle*", the testpmd application code and command line interfaces also need to be updated to adapt. The testpmd application user guide is also updated. All the "shared action" related parts are replaced with "indirect action" to have a correct explanation.
The parameter of the "update" interface is also changed. A generic pointer replaces the rte_flow_action struct pointer due to the following facts:
1. Some actions may not support field updates. In the example of a counter, the only "update" supported should be the reset. So passing a rte_flow_action struct pointer is meaningless and there is not even a corresponding action struct. What's more, if more than one operation should be supported, for some other action such a pointer parameter may not meet the need.
2. Some actions may need conditional or partial updates; the current parameter does not provide the ability to indicate which part(s) to update.
For different types of indirect action objects, the pointer could either be the same rte_flow_action* struct - in order not to break the current driver implementation - or some wrapper structure with bits as masks to indicate which parts to be updated, depending on the real needs of the corresponding direct action. For different direct actions, the structures for updating indirect action objects will be different.
All the underlayer PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to break the ABI. All the implementations are changed by using RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new "rte_flow_action_handle*" and the "update" interface's third input parameter is changed to a generic pointer, the mlx5 PMD that uses these APIs needs to adapt to the new APIs as well.
Signed-off-by: Bing Zhao <[email protected]> Acked-by: Andrey Vesnovaty <[email protected]> Acked-by: Ori Kam <[email protected]> Acked-by: Ajit Khaparde <[email protected]> Acked-by: Thomas Monjalon <[email protected]>

# 99a2dd95 | 20-Apr-2021 | Bruce Richardson <[email protected]>
lib: remove librte_ prefix from directory names
There is no reason for the DPDK libraries to all have the 'librte_' prefix on the directory names. This prefix makes the directory names longer and also makes it awkward to add features referring to individual libraries in the build - should the lib names be specified with or without the prefix? Therefore, we can just remove the library prefix and use the library's unique name as the directory name, i.e. 'eal' rather than 'librte_eal'.
Signed-off-by: Bruce Richardson <[email protected]>