| /dpdk/doc/guides/prog_guide/ |
| switch_representation.rst | 23: layer 2 (L2) traffic (such as OVS) need to steer traffic themselves · 491, 500, 553, 567: fragments of the traffic steering diagram · 588: - Matches **F** in `traffic steering`_. · 595: - Targets **F** in `traffic steering`_. · 632: - Matches **A** in `traffic steering`_. · 641: - Targets **A** in `traffic steering`_. · 804: traffic. · [more matches]
|
| qos_framework.rst | 134: Each traffic class is the representation of a different traffic type with specific loss rate, · 392: …These fields are: port, subport, traffic class and queue within traffic class, and are typically s… · 443: …ength and the available credits (of current pipe, pipe traffic class, subport and subport traffic … · 480: which requires credit updates based on time (for example, subport and pipe traffic shaping, traffic… · 551: subport S traffic class TC, pipe P, pipe P traffic class TC. · 747: The traffic classes at the pipe and subport levels are not traffic shaped, · 749: The upper limit for the traffic classes at the subport and · 932: allocated for the same traffic class at the parent subport level. · 935: traffic class is solely the result of pipe and · 987: which is typically used for best effort traffic, · [more matches]
|
| power_man.rst | 128: to improved handling of bursts of traffic. · 131: frequency (including turbo) to do best effort for intensive traffic. This gives · 132: us more flexible and balanced traffic awareness over the standard l3fwd-power · 153: * MED: the frequency is used to process modest traffic workload. · 155: * HIGH: the frequency is used to process busy traffic workload. · 163: In this phase, the user must ensure that no traffic can enter the · 167: displayed, and normal mode resumes, and traffic can be allowed into · 209: there's new traffic. Support for this scheme may not be available on all · 219: scale the core frequency up/down depending on traffic volume. · 233: monitoring multiple Ethernet Rx queues for traffic will be supported.
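The power_man.rst matches above describe scaling a core's frequency with traffic volume. As a rough, hypothetical sketch only (thresholds and structure are placeholders, not taken from l3fwd-power, and the rte_power API layout can vary between DPDK releases), per-lcore scaling driven by Rx burst fill level might look like:

    #include <stdint.h>
    #include <rte_power.h>

    /* Hypothetical sketch (not l3fwd-power itself): nudge the current lcore's
     * frequency according to how full the last Rx burst was. Thresholds are
     * placeholders; rte_power_init(lcore_id) is assumed to have been called
     * once per lcore at startup. */
    static void
    adjust_freq_for_traffic(unsigned int lcore_id, uint16_t nb_rx, uint16_t burst_size)
    {
        if (nb_rx == burst_size)
            rte_power_freq_max(lcore_id);   /* saturated burst: max (turbo) frequency */
        else if (nb_rx > burst_size / 2)
            rte_power_freq_up(lcore_id);    /* busy: step the frequency up */
        else if (nb_rx == 0)
            rte_power_freq_down(lcore_id);  /* idle poll: step the frequency down */
    }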
|
| traffic_metering_and_policing.rst | 53: based on the previous traffic history reflected in the current state of the · 54: MTR object, according to the specific traffic metering algorithm. The · 55: traffic metering algorithm can typically work in color aware mode, in which · 79: the traffic meter and policing library. · 82: traffic meter and policing library.
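The traffic_metering_and_policing.rst matches refer to DPDK's MTR (meter) API. A minimal sketch, assuming the rte_mtr API of recent releases; the ids, rates and burst sizes below are illustrative placeholders, and error handling plus the meter-policy setup newer releases expect (rte_mtr_meter_policy_add()) are omitted:

    #include <stdint.h>
    #include <rte_mtr.h>

    /* Minimal sketch of attaching a srTCM meter to an ethdev port. */
    static int
    setup_srtcm_meter(uint16_t port_id, uint32_t profile_id, uint32_t mtr_id)
    {
        struct rte_mtr_error error;
        struct rte_mtr_meter_profile profile = {
            .alg = RTE_MTR_SRTCM_RFC2697,
            .srtcm_rfc2697 = {
                .cir = 1250000, /* committed rate, bytes/second (placeholder) */
                .cbs = 2048,    /* committed burst size, bytes (placeholder) */
                .ebs = 2048,    /* excess burst size, bytes (placeholder) */
            },
        };
        struct rte_mtr_params params = {
            .meter_profile_id = profile_id,
            .meter_enable = 1,
        };
        int ret;

        ret = rte_mtr_meter_profile_add(port_id, profile_id, &profile, &error);
        if (ret != 0)
            return ret;

        /* Last argument: 0 = the MTR object is not shared between flows. */
        return rte_mtr_create(port_id, mtr_id, &params, 0, &error);
    }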
|
| /dpdk/drivers/event/dlb2/ |
| dlb2_xstats.c | 109: val += port->stats.traffic.rx_ok; (in dlb2_device_traffic_stat_get()) · 112: val += port->stats.traffic.rx_drop; (in dlb2_device_traffic_stat_get()) · 121: val += port->stats.traffic.tx_ok; (in dlb2_device_traffic_stat_get()) · 124: val += port->stats.traffic.total_polls; (in dlb2_device_traffic_stat_get()) · 127: val += port->stats.traffic.zero_polls; (in dlb2_device_traffic_stat_get()) · 1146: p->stats.traffic.rx_ok); (in dlb2_eventdev_dump()) · 1149: p->stats.traffic.rx_drop); (in dlb2_eventdev_dump()) · 1152: p->stats.traffic.rx_interrupt_wait); (in dlb2_eventdev_dump()) · 1158: p->stats.traffic.tx_ok); (in dlb2_eventdev_dump()) · 1161: p->stats.traffic.total_polls); (in dlb2_eventdev_dump()) · [more matches]
|
| /dpdk/examples/ipsec-secgw/ |
| ipsec-secgw.c | 788: traffic->ipsec.saptr, traffic->ipsec.num); (in process_pkts_inbound()) · 852: &traffic->ip4, &traffic->ipsec, (in process_pkts_outbound()) · 856: &traffic->ip6, &traffic->ipsec, (in process_pkts_outbound()) · 862: traffic->ipsec.res, traffic->ipsec.num, (in process_pkts_outbound()) · 878: traffic->ipsec.saptr, traffic->ipsec.num); (in process_pkts_outbound()) · 908: traffic->ipsec.saptr, traffic->ipsec.num); (in process_pkts_inbound_nosp()) · 927: traffic->ipsec.pkts[n] = traffic->ip4.pkts[i]; (in process_pkts_outbound_nosp()) · 932: traffic->ipsec.pkts[n] = traffic->ip6.pkts[i]; (in process_pkts_outbound_nosp()) · 943: traffic->ipsec.res, traffic->ipsec.num, (in process_pkts_outbound_nosp()) · 952: traffic->ip4.pkts[i] = traffic->ipsec.pkts[i]; (in process_pkts_outbound_nosp()) · [more matches]
|
| /dpdk/doc/guides/howto/ |
| flow_bifurcation.rst | 8: to split traffic between Linux user space and kernel space. Since it is a · 12: movement during the traffic split. This can yield better performance with · 15: The Flow Bifurcation splits the incoming data traffic to user space · 17: the Linux kernel stack). It can direct some traffic, for example data plane · 18: traffic, to DPDK, while directing some other traffic, for example control · 19: plane traffic, to the traditional Linux networking stack. · 26: with physical functions (PF). The network adapter will direct traffic to a · 35: In this way the Linux networking stack can receive specific traffic through · 36: the kernel driver while a DPDK application can receive specific traffic
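The flow_bifurcation.rst matches describe steering selected traffic to DPDK while the rest stays on the kernel path. As one hedged illustration of "directing specific traffic", not the exact mechanism the howto configures, an rte_flow rule pinning UDP traffic with a given destination port to one Rx queue could look like this (port, queue and UDP port values are placeholders):

    #include <stdint.h>
    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Steer ingress UDP traffic with a given destination port to one Rx
     * queue; everything else keeps flowing on the default path. */
    static struct rte_flow *
    steer_udp_to_queue(uint16_t port_id, uint16_t rx_queue, uint16_t udp_dst_port)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_udp udp_spec = {
            .hdr.dst_port = rte_cpu_to_be_16(udp_dst_port),
        };
        struct rte_flow_item_udp udp_mask = {
            .hdr.dst_port = RTE_BE16(0xffff),
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec, .mask = &udp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = rx_queue };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }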
|
| packet_capture_framework.rst | 10: for those who want to monitor traffic on DPDK-controlled devices. · 39: to capture traffic from the first available DPDK port. · 98: #. Send traffic to dpdk_port0 from traffic generator.
|
| lm_bond_virtio_sriov.rst | 31: which is also connected to the traffic generator. · 33: The switch is configured to broadcast traffic on all the NIC ports. · 160: VF traffic is seen at P1 and P2. · 176: No VF traffic is seen at P0 and P2, VF MAC address still present. · 194: No VF traffic is seen at P0 and P2. · 354: VF traffic is seen at P1 (VF) and P2 (Bonded device). · 369: VF traffic is seen at P1 (VF) and P2 (Bonded device). · 443: # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM. · 546: # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM. · 624: The Intel switch is used to connect the traffic generator to the
|
| /dpdk/doc/guides/sample_app_ug/ |
| packet_ordering.rst | 15: * RX core (main core) receives traffic from the NIC ports and feeds Worker · 16: cores with traffic through SW queues. · 22: * TX Core (worker core) receives traffic from Worker cores through software queues, · 54: When setting more than 1 port, traffic would be forwarded in pairs. · 55: For example, if we enable 4 ports, traffic from port 0 to 1 and from 1 to 0, · 59: of traffic, which should help evaluate reordering performance impact.
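The packet_ordering.rst matches outline the RX/Worker/TX pipeline that restores packet order before transmission. A minimal sketch of the TX-side step, assuming the rte_reorder library and a per-mbuf sequence number already assigned on the RX side (function names and sizes are placeholders, not the sample's own code):

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_reorder.h>

    #define TX_BURST_SIZE 32

    /* Packets returned by the workers are inserted into a reorder buffer and
     * drained in sequence order before being transmitted. */
    static void
    tx_reorder_and_send(struct rte_reorder_buffer *buf, uint16_t port_id,
                        struct rte_mbuf **from_workers, unsigned int nb_pkts)
    {
        struct rte_mbuf *ordered[TX_BURST_SIZE];
        unsigned int i, nb_ordered;

        for (i = 0; i < nb_pkts; i++)
            (void)rte_reorder_insert(buf, from_workers[i]); /* out-of-window packets are reported via the return value */

        nb_ordered = rte_reorder_drain(buf, ordered, TX_BURST_SIZE);
        if (nb_ordered > 0)
            rte_eth_tx_burst(port_id, 0, ordered, nb_ordered);
    }

The reorder buffer itself would be created once up front, e.g. with rte_reorder_create(), sized to tolerate the worst-case skew between workers.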
|
| vmdq_dcb_forwarding.rst | 8: The application performs L2 forwarding using VMDQ and DCB to divide the incoming traffic into queue… · 9: The traffic splitting is performed in hardware by the VMDQ and DCB features of the Intel® 82599 and… · 15: uses VMDQ and DCB for traffic partitioning. · 17: The VMDQ and DCB filters work on MAC and VLAN traffic to divide the traffic into input queues on th… · 19: VMDQ filters split the traffic into 16 or 32 groups based on the Destination MAC and VLAN ID. · 22: All traffic is read from a single incoming port (port 0) and output on port 1, without any processi… · 23: With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each threa… · 27: The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 … · 106: (also referred to as traffic classes) within each pool. With Intel® 82599 NIC,
|
| qos_metering.rst | 67: :end-before: >8 End of traffic metering configuration. · 69: To simplify debugging (for example, by using the traffic generator RX side MAC address based packet… · 72: The traffic meter parameters are configured in the application source code with following default v… · 77: :end-before: >8 End of traffic meter parameters are configured in the application. · 79: Assuming the input traffic is generated at line rate and all packets are 64 bytes Ethernet frames (… · 80: and green, the expected output traffic should be marked as shown in the following table:
|
| vmdq_forwarding.rst | 8: The application performs L2 forwarding using VMDq to divide the incoming traffic into queues. · 9: The traffic splitting is performed in hardware by the VMDq feature of the Intel® 82599 and X710/XL7… · 15: uses VMDq for traffic partitioning. · 20: All traffic is read from a single incoming port and output on another port, without any processing … · 21: With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each threa… · 25: The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 …
|
| qos_scheduler.rst | 121: The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters · 145: The information is displayed in a table separating it in different traffic classes. · 168: * qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic class. · 172: * qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic clas… · 208: * A traffic class is the representation of a different traffic type with a specific loss rate, · 213: The traffic flows that need to be configured are application dependent. · 236: (table row fragment) …Queue | High Priority TC: 1, | Queue of lowest priority traffic | De…
|
| bbdev_app.rst | 93: To allow the bbdev sample app to do the loopback, an influx of traffic is required. · 94: This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and · 95: it will print the transmitted along with the looped-back traffic on Rx ports. · 96: Executing the command below will generate traffic on the two allowed ethernet
|
| test_pipeline.rst | 15: * Core A ("RX core") receives traffic from the NIC ports and feeds core B with traffic through SW… · 19: Core B receives traffic from core A through software queues, · 23: * Core C ("TX core") receives traffic from core B through software queues and sends it to the NIC… · 229: the same input traffic can be used to hit all table entries with uniform distribution, · 231: The profile for input traffic is TCP/IPv4 packets with:
|
| ipsec_secgw.rst | 44: The Path for IPsec Inbound traffic is: · 55: The Path for the IPsec Outbound traffic is: · 434: * The traffic direction · 440: * *in*: inbound traffic · 441: * *out*: outbound traffic · 538: * The traffic direction · 544: * *in*: inbound traffic · 545: * *out*: outbound traffic · 864: * The traffic output port id · 929: * The traffic input port id · [more matches]
|
| ip_pipeline.rst | 28: …te pipeline ports: memory pools, links (i.e. network interfaces), SW queues, traffic managers, etc. · 248: Add traffic manager subport profile :: · 257: Add traffic manager pipe profile :: · 268: Create traffic manager port :: · 278: Configure traffic manager subport :: · 284: Configure traffic manager pipe :: · 536: Update the dscp table for meter or traffic manager action for specific
|
| /dpdk/doc/guides/nics/ |
| softnic.rst | 31: such as: memory pools, SW queues, traffic manager, action profiles, pipelines, · 73: #. ``tm_n_queues``: number of traffic manager's scheduler queues. The traffic manager · 76: #. ``tm_qsize0``: size of scheduler queue 0 (traffic class 0) of the pipes/subscribers. · 79: #. ``tm_qsize1``: size of scheduler queue 1 (traffic class 1) of the pipes/subscribers. · 82: #. ``tm_qsize2``: size of scheduler queue 2 (traffic class 2) of the pipes/subscribers. · 85: #. ``tm_qsize3``: size of scheduler queue 3 (traffic class 3) of the pipes/subscribers. · 274: SoftNIC PMD implements ethdev traffic management APIs ``rte_tm.h`` that · 275: allow building and committing traffic manager hierarchy, configuring hierarchy · 277: library. Furthermore, APIs for run-time update to the traffic manager hierarchy · 280: SoftNIC PMD also implements ethdev traffic metering and policing APIs · [more matches]
|
| cnxk.rst | 170: traffic on this port should be higig2 traffic only. Supported switch header · 302: 1. Inbound encrypted traffic received by probed ipsec inline device while · 303: plain traffic post decryption is received by ethdev. · 305: 2. Both Inbound encrypted traffic and plain traffic post decryption are · 316: With the above configuration, inbound encrypted traffic from both the ports · 336: set with this custom mask, inbound encrypted traffic from all ports with · 356: between traffic from each SDP interface. The channel and mask combination · 442: - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet · 444: single rte security session to accept traffic from multiple ports. · 466: for inbound SA with min SPI of 6 for traffic aggregated on inline device. · [more matches]
|
| sfc_efx.rst | 195: - PORT_REPRESENTOR (cannot repeat; conflicts with other traffic source items) · 197: - REPRESENTED_PORT (cannot repeat; conflicts with other traffic source items) · 199: - PORT_ID (cannot repeat; conflicts with other traffic source items) · 201: - PHY_PORT (cannot repeat; conflicts with other traffic source items) · 203: - PF (cannot repeat; conflicts with other traffic source items) · 205: - VF (cannot repeat; conflicts with other traffic source items) · 215: traffic class, hop limit) · 395: to configure switching inside NIC to deliver traffic to physical (PF) and · 397: infrastructure for VFs, and traffic goes to/from VF by default in accordance · 399: In switchdev mode VF traffic goes via port representor (if any) on PF, and · [more matches]
|
| /dpdk/doc/guides/vdpadevs/ |
| mlx5.rst | 50: automatically adjusts its delays to the coming traffic rate. · 56: arms the CQ in order to get an interrupt event in the next traffic burst. · 71: A nonzero value defines the traffic off time, in polling cycle time units, · 72: that moves the driver to no-traffic mode. In this mode the polling is stopped · 73: and interrupts are configured to the device in order to notify traffic for the
|
| /dpdk/examples/qos_sched/ |
| profile.cfg | 8: ; - Each of the 13 traffic classes has rate set to 100% of port rate · 11: ; - Each of the 13 traffic classes has rate set to 100% of pipe rate · 12: ; - Within lowest priority traffic class (best-effort), the byte-level · 13: ; WRR weights for the 4 queues of best effort traffic class are set · 78: ; RED params per traffic class and color (Green / Yellow / Red)
|
| /dpdk/doc/guides/testpmd_app_ug/ |
| testpmd_funcs.rst | 2335: The traffic class should be 4 or 8. · 2736: show port traffic management capability · 2917: show port traffic management capability · 2993: Add port traffic management shared shaper · 3043: Add port traffic management WRED profile · 3189: Commit port traffic management hierarchy · 3220: Set port traffic management mark IP dscp · 3238: Set port traffic management mark IP ecn · 3980: - ``tc {unsigned}``: traffic class. · 4661: Ingress traffic on port [...] · [more matches]
|
| /dpdk/doc/guides/tools/ |
| dumpcap.rst | 8: network traffic dump tool. · 14: Without any options set, it will use DPDK to capture traffic
|