Searched refs:TX (Results 1 – 25 of 111) sorted by relevance
13: pipeline TX period 10 offset_port_id 0
14: pipeline TX port in bsz 32 swq TXQ0
15: pipeline TX port out bsz 32 link LINK txq 0
16: pipeline TX table match stub
17: pipeline TX port in 0 table 0
18: pipeline TX table 0 rule add match default action fwd port 0
21: thread 1 pipeline TX enable

4: OCTEON TX Board Support Package
7: This doc has information about steps to setup OCTEON TX platform
9: **Cavium OCTEON TX** SoC family.
34: OCTEON TX compatible board:
36: 1. **OCTEON TX Linux kernel PF driver for Network acceleration HW blocks**
67: Setup Platform Using OCTEON TX SDK
70: The OCTEON TX platform drivers can be compiled either natively on
71: **OCTEON TX** :sup:`®` board or cross-compiled on an x86 based platform.
74: OCTEON TX SDK 6.2.0 patch 3. In this, the PF drivers for all hardware
92: Once the target is ready for native compilation, the OCTEON TX platform
[all …]

151: pipeline TX table match stub
152: pipeline TX port in 0 table 0
155: thread 2 pipeline TX enable
178: TX queue: 0
179: TX desc=512 - TX free threshold=32
181: TX offloads=0x0 - TX RS bit threshold=32
188: TX queue: 0
189: TX desc=0 - TX free threshold=0
191: TX offloads=0x0 - TX RS bit threshold=0
356: pipeline TX port in 0 table 0
[all …]

132: TX packets 0 bytes 0 (0.0 B)
133: TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
139: TX packets 0 bytes 0 (0.0 B)
140: TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
159: TX-packets: 35637947 TX-dropped: 0 TX-total: 35637947
164: TX-packets: 35637937 TX-dropped: 0 TX-total: 35637937
169: TX-packets: 71275884 TX-dropped: 0 TX-total: 71275884

4: OCTEON TX Poll Mode driver
7: The OCTEON TX ETHDEV PMD (**librte_net_octeontx**) provides poll mode ethdev
8: driver support for the inbuilt network device found in the **Cavium OCTEON TX**
17: Features of the OCTEON TX Ethdev PMD are:
28: - Multiple queues for TX
32: Supported OCTEON TX SoCs
43: - Scattered and gather for TX and RX
111: The OCTEON TX ethdev PMD is exposed as a vdev device which consists of a set
145: The OCTEON TX SoC family NIC has inbuilt HW assisted external mempool manager.
148: recycling on OCTEON TX SoC platform.
[all …]

4: OCTEON TX EP Poll Mode driver
7: The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode
9: and **Cavium OCTEON TX** families of adapters in SR-IOV context.

23: appropriate FTAG is inserted for every frame on TX side.
34: There is no change to the PMD API. The RX/TX handlers are the only two entries for
35: vPMD packet I/O. They are transparently registered at runtime RX/TX execution
39: packet transfers. The following sections explain RX and TX constraints in the
99: TX Constraint
102: Features not Supported by TX Vector PMD
105: TX vPMD only works when offloads is set to 0
107: This means that it does not support any TX offload.

123: PMD: Available DMA queues RX: 8 TX: 8
137: TX queues=2 - TX desc=512 - TX free threshold=0
138: TX threshold registers: pthresh=0 hthresh=0 wthresh=0
139: TX RS bit threshold=0 - TXQ flags=0x0

4: Cavium OCTEON TX Crypto Poll Mode Driver
7: The OCTEON TX crypto poll mode driver provides support for offloading
9: **OCTEON TX** :sup:`®` family of processors (CN8XXX). The OCTEON TX crypto
66: The OCTEON TX crypto poll mode driver can be compiled either natively on
67: **OCTEON TX** :sup:`®` board or cross-compiled on an x86 based platform.
74: OCTEON TX crypto PF driver needs microcode to be available at `/lib/firmware/` directory.
90: by using dpdk-devbind.py script. The OCTEON TX crypto PF device need to be
113: OCTEON TX crypto PMD.
123: The symmetric crypto operations on OCTEON TX crypto PMD may be verified by running the test
131: The asymmetric crypto operations on OCTEON TX crypto PMD may be verified by running the test

4: OCTEON TX ZIP Compression Poll Mode Driver
7: The OCTEON TX ZIP PMD (**librte_compress_octeontx**) provides poll mode
9: **Cavium OCTEON TX** SoC family.
17: OCTEON TX ZIP PMD has support for:
37: Supported OCTEON TX SoCs
45: OCTEON TX SDK includes kernel image which provides OCTEON TX ZIP PF
54: For more information on building and booting linux kernel on OCTEON TX
62: The OCTEON TX zip is exposed as pci device which consists of a set of

4: OCTEON TX FPAVF Mempool Driver
7: The OCTEON TX FPAVF PMD (**librte_mempool_octeontx**) is a mempool
8: driver for offload mempool device found in **Cavium OCTEON TX** SoC
17: Features of the OCTEON TX FPAVF PMD are:
23: Supported OCTEON TX SoCs
55: The OCTEON TX fpavf mempool initialization similar to other mempool

39: * ``-t1``: core mask 0x1 for TX
122: worker 12 thread done. RX=4966581 TX=4966581
123: worker 13 thread done. RX=4963329 TX=4963329
124: worker 14 thread done. RX=4953614 TX=4953614
125: worker 0 thread done. RX=0 TX=0
126: worker 11 thread done. RX=4970549 TX=4970549
127: worker 10 thread done. RX=4986391 TX=4986391
128: worker 9 thread done. RX=4970528 TX=4970528
129: worker 15 thread done. RX=4974087 TX=4974087
130: worker 8 thread done. RX=4979908 TX=4979908
[all …]

27: If a separate TX core is used, these are sent to the TX ring.
28: Otherwise, they are sent directly to the TX port.
29: The TX thread, if present, reads from the TX ring and write the packets to the TX port.
64: * --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
66: having 4 or 5 items (if TX core defined or not).
84: * C = Size (in number of buffer descriptors) of each of the NIC TX rings written
96: * D = Worker lcore write burst size to the NIC TX (the default value is 64)
108: * --tth "A, B, C": TX queue threshold parameters
110: * A = TX prefetch threshold (the default value is 36)
112: * B = TX host threshold (the default value is 0)
[all …]

4: RX/TX Callbacks Sample Application
7: The RX/TX Callbacks sample application is a packet forwarding application that
53: The sections below explain the additional RX/TX callback code.
81: The RX and TX callbacks are added to the ports/queues as function pointers:
85: :start-after: RX and TX callbacks are added to the ports. 8<
86: :end-before: >8 End of RX and TX callbacks.
115: The ``calc_latency()`` callback is added to the TX port and is applied to all
120: :start-after: Callback is added to the TX port. 8<

95: The next step is to configure the RX and TX queues.
97: The number of TX queues depends on the number of available lcores.
102: :start-after: Configure RX and TX queues. 8<
103: :end-before: >8 End of configure RX and TX queues.
184: TX Queue Initialization
188: For every port, a single TX queue is initialized.
193: :end-before: >8 End of init one TX queue.
242: to send all the received packets on the same TX port using
255: :end-before: >8 End of Enqueuing packets for TX.
263: :start-after: Draining TX queue in its main loop. 8<
[all …]

27: * The source MAC address is replaced by the TX port MAC address
170: The next step is to configure the RX and TX queues.
177: :start-after: Configure the RX and TX queues. 8<
178: :end-before: >8 End of configuring the RX and TX queues.
215: TX Queue Initialization
222: :start-after: Init one TX queue on each port. 8<
223: :end-before: >8 End of init one TX queue on each port.
329: to send all the received packets on the same TX port,
342: :end-before: >8 End of Enqueuing packets for TX.
351: :start-after: Draining TX queue of each port. 8<
[all …]

204: The next step is to configure the RX and TX queues.
206: The number of TX queues depends on the number of available lcores.
245: TX Queue Initialization
248: Each lcore should be able to transmit on any port. For every port, a single TX queue is initialized.
252: :start-after: Init one TX queue on each port. 8<
253: :end-before: >8 End of init one TX queue on each port.
294: to send all the received packets on the same TX port,
306: :start-after: Enqueue packets for TX and prepare them to be sent. 8<
307: :end-before: >8 End of Enqueuing packets for TX.
315: :start-after: Drains TX queue in its main loop. 8<
[all …]

194: one RX queue (only one lcore is able to poll a given port). The number of TX
200: :start-after: Configure RX and TX queue. 8<
201: :end-before: >8 End of configuration RX and TX queue.
234: TX Queue Initialization
237: Each lcore should be able to transmit on any port. For every port, a single TX
242: :start-after: Init one TX queue on each port. 8<
243: :end-before: >8 End of init one TX queue on each port.
337: :start-after: Gets service ID for RX/TX adapters. 8<
338: :end-before: >8 End of get service ID for RX/TX adapters.
397: :start-after: Draining TX queue in main loop. 8<
[all …]

73: * :doc:`RX/TX callbacks Application<rxtx_callbacks>`: The RX/TX
77: (packet arrival) and TX (packet transmission) by adding callbacks to the RX
78: and TX packet processing functions.

4: OCTEON TX SSOVF Eventdev Driver
7: The OCTEON TX SSOVF PMD (**librte_event_octeontx**) provides poll mode
8: eventdev driver support for the inbuilt event device found in the **Cavium OCTEON TX**
17: Features of the OCTEON TX SSOVF PMD are:
35: Supported OCTEON TX SoCs
48: The OCTEON TX eventdev is exposed as a vdev device which consists of a set
110: Max number of events in OCTEON TX Eventdev (SSO) are only limited by DRAM size

95: Enable NUMA-aware allocation of RX/TX rings and of RX memory buffers
109: Where flag is 1 for RX, 2 for TX, and 3 for RX and TX.
283: Set the number of TX queues per port to N, where 1 <= N <= 65535.
288: Set the number of descriptors in the TX rings to N, where N > 0.
295: number of TX queues and to the number of RX queues. then the first
297: binded to the second TX hairpin and so on. The index of the first
299: The index of the first TX hairpin queue is the number of TX queues
353: Set the host threshold register of TX rings to N, where N >= 0.
392: Set TX segment sizes or total packet length. Valid for ``tx-only``
464: Set the hexadecimal bitmask of TX queue offloads.
[all …]

73: * Added defaults for i210 RX/TX PBSIZE
101: * Added TX Scheduling related AQ commands
108: * **Added i40e vector RX/TX.**
126: * **Added fm10k vector RX/TX.**
141: * **Improved enic TX packet rate.**
170: * Simple TX
204: * Added check on RX queues and TX queues of each link
270: TX idle period for entering K1 should be 128 ns.
271: Minimum TX idle period in K1 should be 256 ns.
412: * **mlx4: Fixed TX loss after initialization.**
[all …]

11: ; | | +------------->| (CORE A) | | TX |
22: ; | |-------|-|-----+ | | +---|------->| (CORE B) | | TX |
33: ; +----------+ | +-|-------|-----------|-|----->| (CORE C) | | TX |
44: ; +-----------------------|------->| (CORE D) | | TX |

110: RX-TX port and associated cores :numref:`dtg_rx_tx_drop`.
116: RX-TX drops
129: #. At TX
131: * If the TX rate is falling behind the application fill rate, identify if
132: there are enough descriptors with ``rte_eth_dev_info_get`` for TX.
138: * If oerrors are getting incremented, TX packet validations are failing.
207: * CPU thread responsible for TX is not able to keep up with the burst of
379: Traffic Manager on TX interface :numref:`dtg_qos_tx`.
385: Traffic Manager just before TX.
411: Capture points of Traffic at RX-TX.
[all …]

39: enum enetc_bdr_type {TX, RX}; (enumerator)
182: #define enetc_txbdr_rd(hw, n, off) enetc_bdr_rd(hw, TX, n, off)
185: enetc_bdr_wr(hw, TX, n, off, val)