/f-stack/dpdk/doc/guides/prog_guide/

generic_receive_offload_lib.rst
    9: small packets into larger ones, GRO enables applications to process
    10: fewer large packets directly, thus reducing the number of packets to
    14: reassemble packets.
    21: example, TCP/IPv4 GRO processes TCP/IPv4 packets.
    24: table structure to reassemble packets. We assign input packets to the
    34: packets as well as VxLAN packets which contain an outer IPv4 header and an
    44: applications and supports to merge a large number of packets.
    71: types or TCP SYN packets are returned. Otherwise, the input packets are
    74: packets from the tables, when they want to get the GROed packets.
    178: Header fields deciding if packets are neighbors include:
    [all …]
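The excerpt above comes from the GRO programmer's guide: bursts of small TCP/IPv4 (and VxLAN) packets are merged into larger ones via reassembly tables. A minimal sketch of the lightweight, per-burst usage follows; it assumes the ``rte_gro_reassemble_burst()`` API from ``rte_gro.h``, and the flow/item limits are illustrative values, not recommendations.

    #include <rte_gro.h>
    #include <rte_mbuf.h>

    /* Lightweight-mode GRO: merge one burst in place, assuming TCP/IPv4 traffic. */
    static uint16_t
    gro_one_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_TCP_IPV4,  /* reassembly type to apply */
            .max_flow_num = 64,             /* flows tracked per burst (illustrative) */
            .max_item_per_flow = 32,        /* packets merged per flow (illustrative) */
        };

        /* Returns the (smaller) number of packets left after merging; packets
         * GRO cannot handle (e.g. TCP SYN) are returned unchanged. */
        return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
    }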
|
packet_distrib_lib.rst
    10: which is responsible for load balancing or distributing packets,
    34: #. As workers request packets, the distributor takes packets from the set of packets passed in and…
    41: This ensures that no two packets with the same tag are processed in parallel,
    42: and that all packets with the same tag are processed in input order.
    44: #. Once all input packets passed to the process API have either been distributed to workers
    58: It returns to the caller all packets which have finished processing by all worker cores.
    59: Within this set of returned packets, all packets sharing the same tag will be returned in their ori…
    62: If worker lcores buffer up packets internally for transmission in bulk afterwards,
    63: the packets sharing a tag will likely get out of order.
    66: who may then flush their buffered packets sooner and cause packets to get out of order.
    [all …]
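The excerpt describes the distributor pattern: packets sharing a tag are never processed in parallel and are returned in input order. Below is a hedged sketch of the distributor-core side only; ``dist`` is assumed to have been created elsewhere with ``rte_distributor_create()``, and the worker-side calls are omitted because their signatures differ between the legacy and burst APIs.

    #include <rte_distributor.h>
    #include <rte_mbuf.h>

    /* Distributor core: hand a received burst to the workers and collect
     * packets the workers have finished with. Ordering is preserved per
     * flow tag, as the guide excerpt above states. */
    static int
    distribute_burst(struct rte_distributor *dist,
                     struct rte_mbuf **rx_pkts, unsigned int nb_rx,
                     struct rte_mbuf **done_pkts, unsigned int max_done)
    {
        /* Packets sharing a tag go to one worker at a time, in input order. */
        rte_distributor_process(dist, rx_pkts, nb_rx);

        /* Retrieve packets that every worker has finished processing. */
        return rte_distributor_returned_pkts(dist, done_pkts, max_done);
    }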
|
generic_segmentation_offload_lib.rst
    12: process a smaller number of large packets (e.g. MTU size of 64KB), instead of
    13: processing higher numbers of small packets (e.g. MTU size of 1500B), thus
    18: packets within the guest, and improves the data-to-overhead ratio of both the
    25: packets in software. Note however, that GSO is implemented as a standalone
    28: GSO library to segment packets, they also must call ``rte_pktmbuf_free()``
    35: #. The GSO library doesn't check if input packets have correct checksums.
    38: packets (that task is left to the application).
    42: #. The egress interface's driver must support multi-segment packets.
    64: ``rte_gso_segment()`` function to segment packets.
    162: VxLAN packets GSO supports segmentation of suitably large VxLAN packets,
    [all …]
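The excerpt is from the GSO programmer's guide, which segments oversized packets in software via ``rte_gso_segment()``. The sketch below shows only the segmentation call; the ``ctx`` setup (mempools, ``gso_size``, ``gso_types``) is assumed to exist, and freeing of the original mbuf is deliberately left to the caller because ownership rules vary between DPDK versions.

    #include <rte_gso.h>
    #include <rte_mbuf.h>

    /* Segment one oversized packet in software. 'ctx' is assumed to be an
     * rte_gso_ctx already populated with direct/indirect mempools, the
     * requested gso_size and the enabled gso_types. */
    static int
    gso_one_packet(struct rte_mbuf *pkt, const struct rte_gso_ctx *ctx,
                   struct rte_mbuf **segs_out, uint16_t max_segs)
    {
        /* On success the resulting segments are written to segs_out and their
         * count is returned; a negative value indicates failure. Freeing the
         * original mbuf is the application's responsibility, as the guide
         * excerpt notes (exact ownership depends on the DPDK version). */
        return rte_gso_segment(pkt, ctx, segs_out, max_segs);
    }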
|
event_ethernet_rx_adapter.rst
    9: device port for receiving events that reference packets instead of polling Rx
    11: be supported in hardware or require a software thread to receive packets from
    71: parameter. Event information for packets from this Rx queue is encoded in the
    153: packets. Certain queues may have low packet rates and it would be more
    154: efficient to enable the Rx queue interrupt and read packets after receiving
    166: invokes the ``rte_eth_rx_burst()`` to receive packets on the queue and
    167: converts the received packets to events in the same manner as packets
    181: dequeuing packets from the ethernet device. The application may want to
    183: enqueue packets to the event device. The application may also use some other
    184: criteria to decide which packets should enter the event device even when
    [all …]
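The excerpt covers the event ethdev Rx adapter, which turns received packets into events either in hardware or via a service thread calling ``rte_eth_rx_burst()``. A hedged setup sketch follows, assuming ``rte_event_eth_rx_adapter_create()`` and ``rte_event_eth_rx_adapter_queue_add()``; the identifiers and the event queue configuration are illustrative.

    #include <rte_eventdev.h>
    #include <rte_event_eth_rx_adapter.h>

    /* Connect every Rx queue of one ethdev port to an event device. */
    static int
    setup_rx_adapter(uint8_t adapter_id, uint8_t evdev_id, uint16_t eth_port,
                     struct rte_event_port_conf *port_conf)
    {
        struct rte_event_eth_rx_adapter_queue_conf qconf = {
            .ev.queue_id = 0,                       /* target event queue */
            .ev.sched_type = RTE_SCHED_TYPE_ATOMIC, /* illustrative choice */
        };
        int ret;

        ret = rte_event_eth_rx_adapter_create(adapter_id, evdev_id, port_conf);
        if (ret < 0)
            return ret;

        /* An rx_queue_id of -1 adds all Rx queues of the port;
         * a specific id would add just that one queue. */
        return rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port, -1, &qconf);
    }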
|
ipsec_lib.rst
    20: inbound and outbound IPsec packets.
    33: introduces an asynchronous API for IPsec packets destined to be processed by
    48: For packets destined for inline processing no extra overhead
    64: * for inbound packets:
    75: * for outbound packets:
    97: * for inbound packets:
    107: * for outbound packets:
    122: * for inbound packets:
    127: * for outbound packets:
    139: * for inbound packets:
    [all …]
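The excerpt refers to the librte_ipsec processing model for inbound and outbound packets. A minimal sketch of the final processing step follows, assuming an initialised ``rte_ipsec_session`` and the ``rte_ipsec_pkt_process()`` call; crypto preparation and completion handling are omitted.

    #include <rte_ipsec.h>
    #include <rte_mbuf.h>

    /* Final IPsec processing for a burst whose crypto stage (if any) has
     * already completed; 'ss' is an initialised inbound or outbound session. */
    static uint16_t
    ipsec_finish_burst(const struct rte_ipsec_session *ss,
                       struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        /* Returns how many packets were handled successfully; packets that
         * failed are left after the first nb_ok entries of the array. */
        uint16_t nb_ok = rte_ipsec_pkt_process(ss, pkts, nb_pkts);

        return nb_ok;
    }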
|
packet_framework.rst
    78: …| 3 | IP reassembly | Input packets are either IP fragments or complete IP datagrams. Output pa…
    83: …| | | packets. Output packets are non-jumbo packets. …
    497: two packets (that just completed stage 0) are now executing stage 1 and two packets (next two packe…
    499: The pipeline iterations continue until all packets from the burst of input packets execute the last…
    521: before same packets 0 and 1 are used (i.e. before stage 1 is executed on packets 0 and 1),
    522: different packets are used: packets 2 and 3 (executing stage 1), packets 4 and 5 (executing stage 2…
    627: …| 0 | Prefetch packet meta-data | Select next two packets from the burst of input packets. …
    698: If there are less than 7 packets in the burst of input packets,
    701: …of the bucket search algorithm has been executed for all the packets in the burst of input packets,
    907: …| 0 | Prefetch packet meta-data | #. Select next two packets from the burst of input packets. …
    [all …]
|
/f-stack/dpdk/drivers/net/nfb/

nfb_tx.h
    135: struct ndp_packet packets[nb_pkts];  (in nfb_eth_ndp_tx(), local)
    143: packets[i].data_length = bufs[i]->pkt_len;  (in nfb_eth_ndp_tx())
    144: packets[i].header_length = 0;  (in nfb_eth_ndp_tx())
    147: num_tx = ndp_tx_burst_get(ndp->queue, packets, nb_pkts);  (in nfb_eth_ndp_tx())
    164: rte_memcpy(packets[i].data,  (in nfb_eth_ndp_tx())
    171: dst = packets[i].data;  (in nfb_eth_ndp_tx())
|
nfb_rx.h
    154: struct ndp_packet packets[nb_pkts];  (in nfb_eth_ndp_rx(), local)
    170: num_rx = ndp_rx_burst_get(ndp->queue, packets, nb_pkts);  (in nfb_eth_ndp_rx())
    189: packet_size = packets[i].data_length;  (in nfb_eth_ndp_rx())
    194: packets[i].data, packet_size);  (in nfb_eth_ndp_rx())
    208: (packets[i].header + 4)));  (in nfb_eth_ndp_rx())
    213: (packets[i].header + 8)));  (in nfb_eth_ndp_rx())
|
/f-stack/dpdk/doc/guides/nics/

kni.rst
    12: Sending packets to any DPDK controlled interface or sending to the
    21: application, and DPDK application may forward packets to a physical NIC
    41: When testpmd forwarding starts, any packets sent to ``kni0`` interface
    130: RX packets 0 bytes 0 (0.0 B)
    132: TX packets 0 bytes 0 (0.0 B)
    137: RX packets 0 bytes 0 (0.0 B)
    139: TX packets 0 bytes 0 (0.0 B)
    158: RX-packets: 35637905 RX-dropped: 0 RX-total: 35637905
    159: TX-packets: 35637947 TX-dropped: 0 TX-total: 35637947
    163: RX-packets: 35637915 RX-dropped: 0 RX-total: 35637915
    [all …]
|
pcap_ring.rst
    80: The driver captures only the incoming packets on that interface.
    107: - Use the RX PCAP file to infinitely receive packets
    119: - Drop all packets on transmit
    127: - Receive no packets on Rx
    138: Read packets from one pcap file and write them to another:
    146: Read packets from a network interface and write them to a pcap file:
    154: Read packets from a pcap file and write them to a network interface:
    162: Forward packets through two network interfaces:
    256: RX-packets: 462384736 RX-dropped: 0 RX-total: 462384736
    257: TX-packets: 462384768 TX-dropped: 0 TX-total: 462384768
    [all …]
|
null.rst
    7: NULL PMD is a simple virtual driver mainly for testing. It always returns success for all packets f…
    9: On Rx it returns requested number of empty packets (all zero). On Tx it just frees all sent packets.
    42: Makes PMD more like ``/dev/null``. On Rx no packets received, on Tx all packets are freed.
|
netvsc.rst
    19: * It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
    20: when transmitting packets. The packet size supported is from 64 to 65536.
    22: * The PMD supports multicast packets and promiscuous mode subject to restrictions on the host.
    39: and send and receive packets using the VF path.
    131: multiple small packets into one request. If tx_copybreak is 0 then
    133: set larger than the MTU, then all packets smaller than the chunk size
    134: of the VMBus send buffer will be copied; larger packets always have to
    139: mbuf for receiving packets. The default value is 0. (netvsc doesn't use
    142: receiving packets, thus avoid copying memory. Use of external buffers
|
/f-stack/freebsd/netgraph/

ng_source.c
    393: uint64_t packets;  (in ng_source_rcvmsg(), local)
    400: packets = *(uint64_t *)msg->data;  (in ng_source_rcvmsg())
    696: sc->packets = packets;  (in ng_source_start())
    730: int packets;  (in ng_source_intr(), local)
    744: packets = sc->snd_queue.ifq_len;  (in ng_source_intr())
    757: if (packets > maxpkt)  (in ng_source_intr())
    758: packets = maxpkt;  (in ng_source_intr())
    761: ng_source_send(sc, packets, NULL);  (in ng_source_intr())
    762: if (sc->packets == 0)  (in ng_source_intr())
    784: tosend = sc->packets;  (in ng_source_send())
    [all …]
|
/f-stack/dpdk/doc/guides/howto/

rte_flow.rst
    44: /* setting the eth to pass all packets */
    48: /* set the vlan to pass all packets */
    74: [waiting for packets]
    93: [waiting for packets]
    140: /* setting the eth to pass all packets */
    144: /* set the vlan to pass all packets */
    172: [waiting for packets]
    194: [waiting for packets]
    243: /* set the vlan to pas all packets */
    266: [waiting for packets]
    [all …]
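The excerpt lines come from the rte_flow how-to, where empty eth and vlan pattern items are used to pass all packets. A condensed, hedged reconstruction of such a rule follows; the queue index and attribute values are illustrative, not the guide's exact code.

    #include <rte_flow.h>

    /* Match every packet (empty eth/vlan specs) and steer it to one Rx queue. */
    static struct rte_flow *
    create_catch_all_flow(uint16_t port_id, uint16_t rx_queue,
                          struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            /* eth item with no spec/mask: passes all packets */
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            /* vlan item with no spec/mask: passes all packets */
            { .type = RTE_FLOW_ITEM_TYPE_VLAN },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = rx_queue };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }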
|
/f-stack/dpdk/drivers/net/octeontx2/

otx2_rx.c
    55: uint16_t packets = 0, nb_pkts;  (in nix_recv_pkts(), local)
    62: while (packets < nb_pkts) {  (in nix_recv_pkts())
    74: rx_pkts[packets++] = mbuf;  (in nix_recv_pkts())
    141: while (packets < pkts) {  (in nix_recv_pkts_vector())
    145: pkts_left += (pkts - packets);  (in nix_recv_pkts_vector())
    283: vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);  (in nix_recv_pkts_vector())
    300: packets += NIX_DESCS_PER_LOOP;  (in nix_recv_pkts_vector())
    304: rxq->available -= packets;  (in nix_recv_pkts_vector())
    308: otx2_write64((rxq->wdata | packets), rxq->cq_door);  (in nix_recv_pkts_vector())
    311: packets += nix_recv_pkts(rx_queue, &rx_pkts[packets],  (in nix_recv_pkts_vector())
    [all …]
|
/f-stack/dpdk/doc/guides/tools/

pdump.rst
    60: …* Multiple instances of ``--pdump`` can be passed to capture packets on different port and queue c…
    67: Port id of the eth device on which packets should be captured.
    70: PCI address (or) name of the eth device on which packets should be captured.
    74: * As of now the ``dpdk-pdump`` tool cannot capture the packets of virtual devices
    80: Queue id of the eth device on which packets should be captured. The user can pass a queue value of …
    91: * To receive ingress packets only, ``rx-dev`` should be passed.
    93: * To receive egress packets only, ``tx-dev`` should be passed.
    95: * To receive ingress and egress packets separately ``rx-dev`` and ``tx-dev``
    98: * To receive ingress and egress packets together, ``rx-dev`` and ``tx-dev``
    102: … This value is used internally for ring creation. The ring will be used to enqueue the packets from
|
/f-stack/dpdk/doc/guides/eventdevs/

opdl.rst
    10: All packets follow the same path through the device. The order in which\
    11: packets follow is determined by the order in which queues are set up.\
    12: Events are left on the ring until they are transmitted. As a result packets\
    55: asynchronous handling of packets in the middle of a pipeline. Ordered
    56: queues in the middle of a pipeline cannot delete packets.
    62: As stated the order in which packets travel through queues is static in
    66: P3 then packets must be
    92: - The order in which packets moved between queues is static and fixed \
    98: - All packets follow the same path through device queues.
|
/f-stack/tools/libxo/tests/core/saved/

test_02.T.out
    21: 1010 packets here/there/everywhere
    22: 1010 packets here/there/everywhere
    35: V1/V2 packets: 10
|
/f-stack/dpdk/doc/guides/sample_app_ug/

skeleton.rst
    211: printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n",
    217: * Receive packets on a port and forward them on the paired
    222: /* Get burst of RX packets, from first port of pair. */
    230: /* Send burst of TX packets, to second port of pair. */
    234: /* Free any unsent packets. */
    252: /* Get burst of RX packets, from first port of pair. */
    260: /* Send burst of TX packets, to second port of pair. */
    264: /* Free any unsent packets. */
    285: The ``rte_eth_tx_burst()`` function frees the memory buffers of packets that
    286: are transmitted. If packets fail to transmit, ``(nb_tx < nb_rx)``, then they
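The excerpt's comments outline the basic forwarding loop of the skeleton sample: receive a burst, transmit it on the paired port, and free whatever did not go out. A short sketch of one such iteration follows; port and queue numbers are illustrative, and this is not the sample's exact code.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* One iteration of the basic forwarding loop for a port pair. */
    static void
    forward_once(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        /* Get burst of RX packets from the first port of the pair. */
        uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            return;

        /* Send burst of TX packets to the second port of the pair. */
        uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

        /* Free any unsent packets; transmitted ones are freed by the PMD. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }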
|
rxtx_callbacks.rst
    9: packets. The application performs a simple latency check, using callbacks, to
    10: determine the time packets spend within the application.
    13: packets to add a timestamp. A separate callback is applied to all packets
    143: all packets received:
    169: packets prior to transmission:
    197: The ``calc_latency()`` function accumulates the total number of packets and
    198: the total number of cycles used. Once more than 100 million packets have been
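The excerpt describes the RX/TX callbacks sample, which stamps packets on receive and measures latency before transmit. Below is a hedged sketch of the receive-side registration only; where the sample stores a per-mbuf timestamp, this version keeps a single TSC value in the user argument to stay independent of the mbuf layout.

    #include <rte_ethdev.h>
    #include <rte_cycles.h>
    #include <rte_mbuf.h>

    /* Rx callback in the spirit of the sample: record when a burst arrived. */
    static uint16_t
    note_rx_time(uint16_t port, uint16_t queue,
                 struct rte_mbuf *pkts[], uint16_t nb_pkts,
                 uint16_t max_pkts, void *user_param)
    {
        uint64_t *last_rx_tsc = user_param;

        (void)port; (void)queue; (void)pkts; (void)max_pkts;
        if (nb_pkts > 0)
            *last_rx_tsc = rte_rdtsc();
        return nb_pkts; /* a callback returns the number of packets kept */
    }

    static uint64_t last_rx_tsc;

    /* Registration, typically done once after the queues are configured. */
    static void
    install_rx_callback(uint16_t port_id)
    {
        rte_eth_add_rx_callback(port_id, 0 /* queue */, note_rx_time,
                                &last_rx_tsc);
    }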
|
ioat.rst
    14: packets copies.
    23: copy with copy done using a DMA device for different sizes of packets.
    25: received/send packets and packets dropped or failed to copy.
    64: * --[no-]mac-updating: Whether MAC address of packets should be changed
    421: /* Free any not enqueued packets. */
    432: The packets are received in burst mode using ``rte_eth_rx_burst()``
    433: function. When using hardware copy mode the packets are enqueued in
    435: ``rte_ioat_enqueue_copy()``. When all received packets are in the
    483: /* Free any not enqueued packets. */
    502: /* Transmit packets from IOAT rawdev/rte_ring for one port. */
    [all …]
|
dist_app.rst
    15: The distributor application performs the distribution of packets that are received
    72: The receive thread receives the packets using ``rte_eth_rx_burst()`` and will
    73: enqueue them to an rte_ring. The distributor thread will dequeue the packets
    83: worker threads do simple packet processing by requesting packets from
    86: and then finally returning the packets back to the distributor thread.
    89: ``rte_distributor_returned_pkts()`` to get the processed packets, and will enqueue
    91: output port. The transmit thread will dequeue the packets from the ring and
    132: statistics include the number of packets enqueued and dequeued at each stage
    134: packets of each burst size (1-8) were sent to each worker thread.
|
/f-stack/dpdk/doc/guides/platform/

octeontx2.rst
    96: Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back to NIX-TX.
    117: SDP interface receives input packets from remote host from NIX-RX and sends packets
    337: Received packets: 0
    338: Octets of received packets: 0
    339: Received PAUSE packets: 0
    340: Received PAUSE and control packets: 0
    341: Filtered DMAC0 (NIX-bound) packets: 0
    345: Error packets: 0
    346: Filtered DMAC1 (NCSI-bound) packets: 0
    348: NCSI-bound packets dropped: 0
    [all …]
|
/f-stack/freebsd/net/altq/

altq.h
    69: u_int64_t packets;  (struct member)
    74: do { (cntr)->packets++; (cntr)->bytes += len; } while (/*CONSTCOND*/ 0)
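The excerpt shows the packet/byte counter used by ALTQ. The snippet below restates the counter structure and macro for context and adds a trivial usage example; the ``pktcntr``/``PKTCNTR_ADD`` names are assumed from the surrounding header, since only fragments appear in the match above.

    #include <sys/types.h>

    /* Counter type and macro as in altq.h (names assumed from the header). */
    struct pktcntr {
        u_int64_t packets;
        u_int64_t bytes;
    };
    #define PKTCNTR_ADD(cntr, len) \
        do { (cntr)->packets++; (cntr)->bytes += (len); } while (0)

    /* Typical use: bump a per-class counter once per forwarded packet. */
    static void
    account_packet(struct pktcntr *cntr, unsigned int len)
    {
        PKTCNTR_ADD(cntr, len);
    }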
|
/f-stack/dpdk/doc/guides/faq/

faq.rst
    82: …empts to aggregate the cost of processing each packet individually by processing packets in bursts.
    84: This allows the application to request 32 packets at a time from the PMD.
    85: …n then immediately attempts to transmit all the packets that were received, in this case, all 32 p…
    86: The packets are not transmitted until the tail pointer is updated on the corresponding TX queue of …
    88: can be spread across 32 packets, effectively hiding the relatively slow MMIO cost of writing to the…
    90: … because the first packet that was received must also wait for the other 31 packets to be received.
    91: …e transmitted until the other 31 packets have also been processed because the NIC will not know to…
    92: which is not done until all 32 packets have been processed for transmission.
    94: …even under heavy system load, the application developer should avoid processing packets in bunches.
    160: When trying to send packets from an application to itself, meaning smac==dmac, using Intel(R) 82599…
    [all …]
|