
Searched refs:queues (Results 1 – 25 of 221) sorted by relevance


/dpdk/doc/guides/eventdevs/
opdl.rst
11 packets follow is determined by the order in which queues are set up.
27 * Load balanced (for Atomic, Ordered, Parallel queues)
28 * Single Link (for single-link queues)
56 queues in the middle of a pipeline cannot delete packets.
62 As stated, the order in which packets travel through queues is static in
63 nature. They go through the queues in the order the queues are set up at
65 sets up 3 queues, Q0, Q1, Q2 and has 3 associated ports P0, P1, P2 and
86 due to the static nature of the underlying queues. It is because of this
92 - The order in which packets move between queues is static and fixed
98 - All packets follow the same path through device queues.
[all …]
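The opdl entries above hinge on one property: with opdl, queue creation order defines the pipeline. A minimal sketch under that assumption (eventdev 0 already configured via rte_event_dev_configure(); error handling abbreviated; the three-queue pipeline mirrors the Q0/Q1/Q2 example in the guide):

```c
#include <rte_eventdev.h>

static int
setup_opdl_pipeline(uint8_t dev_id)
{
	struct rte_event_queue_conf qconf = {
		.schedule_type = RTE_SCHED_TYPE_ORDERED,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	uint8_t q;

	/* With opdl, creation order *is* pipeline order: Q0 -> Q1 -> Q2. */
	for (q = 0; q < 3; q++)
		if (rte_event_queue_setup(dev_id, q, &qconf) < 0)
			return -1;
	return 0;
}
```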
dlb2.rst
27 supports atomic, ordered, and parallel scheduling of events from queues to ports.
41 directed queues, ports, credits, and other hardware resources. Some
73 queues with 512 reorder entries, and so on down to 32 queues with 32 entries.
84 load-balanced queues can use the full 16-bit flow ID range.
96 queues, and max_single_link_event_port_queue_pairs reports the number of
97 available directed ports and queues.
106 directed ports and queues come in pairs.
150 load-balanced queues, and directed credits are used for directed queues.
284 of its ports or queues are not, the PMD will apply their previous
294 before its ports or queues can be.
[all …]
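The dlb2 entries describe fixed pools of load-balanced and directed (single-link) resources. A hedged sketch of how an application can query those limits through the public eventdev API before carving up queues (the printf labels are illustrative):

```c
#include <stdio.h>
#include <rte_eventdev.h>

static void
print_queue_limits(uint8_t dev_id)
{
	struct rte_event_dev_info info;

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return;
	/* load-balanced queue limit vs. directed (single-link) pairs */
	printf("max load-balanced queues: %u\n", info.max_event_queues);
	printf("directed port-queue pairs: %u\n",
	       info.max_single_link_event_port_queue_pairs);
}
```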
/dpdk/lib/bbdev/
rte_bbdev.h
405 struct rte_bbdev_queue_data *queues; /**< Queue structures */ member
479 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_enqueue_enc_ops()
509 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_enqueue_dec_ops()
539 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_enqueue_ldpc_enc_ops()
569 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_enqueue_ldpc_dec_ops()
601 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_dequeue_enc_ops()
633 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_dequeue_dec_ops()
664 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_dequeue_ldpc_enc_ops()
694 struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id]; in rte_bbdev_dequeue_ldpc_dec_ops()
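These inline helpers all resolve a queue_id to its rte_bbdev_queue_data before calling into the driver. A sketch of the fast path they serve, assuming the device and queue are already configured and ops[] was filled via rte_bbdev_enc_op_alloc_bulk():

```c
#include <rte_bbdev.h>

static void
bbdev_enc_burst(uint16_t dev_id, uint16_t queue_id,
		struct rte_bbdev_enc_op **ops, uint16_t n)
{
	uint16_t enq = 0, deq = 0;

	/* both calls return how many ops the queue actually accepted/produced */
	while (enq < n)
		enq += rte_bbdev_enqueue_enc_ops(dev_id, queue_id,
						 &ops[enq], n - enq);
	while (deq < n)
		deq += rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
						 &ops[deq], n - deq);
}
```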
rte_bbdev.c
341 if (dev->data->queues != NULL) { in rte_bbdev_setup_queues()
362 rte_free(dev->data->queues); in rte_bbdev_setup_queues()
369 if (dev->data->queues == NULL) { in rte_bbdev_setup_queues()
395 rte_free(dev->data->queues); in rte_bbdev_setup_queues()
396 dev->data->queues = NULL; in rte_bbdev_setup_queues()
576 dev->data->queues[i].started = true; in rte_bbdev_start()
631 rte_free(dev->data->queues); in rte_bbdev_close()
642 dev->data->queues = NULL; in rte_bbdev_close()
659 if (dev->data->queues[queue_id].started) { in rte_bbdev_queue_start()
716 &dev->data->queues[q_id].queue_stats; in get_stats_from_queues()
[all …]
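rte_bbdev.c allocates and frees the queues array around these snippets. A sketch of the corresponding public-API lifecycle, with a NULL queue conf requesting driver defaults (nb_queues and socket_id are caller choices):

```c
#include <rte_bbdev.h>

static int
bbdev_bring_up(uint16_t dev_id, uint16_t nb_queues, int socket_id)
{
	uint16_t q;

	/* allocates dev->data->queues, as seen in the snippets above */
	if (rte_bbdev_setup_queues(dev_id, nb_queues, socket_id) < 0)
		return -1;
	for (q = 0; q < nb_queues; q++)
		if (rte_bbdev_queue_configure(dev_id, q, NULL) < 0)
			return -1;
	/* marks every queue started (queues[i].started = true) */
	return rte_bbdev_start(dev_id);
}
```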
/dpdk/doc/guides/sample_app_ug/
vmdq_dcb_forwarding.rst
8 The application performs L2 forwarding using VMDQ and DCB to divide the incoming traffic into queues.
17 The VMDQ and DCB filters work on MAC and VLAN traffic to divide the traffic into input queues on th…
20 Then, DCB places each packet into one of the queues within that group, based upon the VLAN user priorit…
23 With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each threa…
24 multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and …
27 …Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 8 queues. While the
56 Since VMD queues are being used for VMM, this application works correctly
105 and dividing up the possible user priority values equally among the individual queues
109 With Intel® X710/XL710 NICs, if the number of TCs is 4 and the number of queues per pool is 8,
110 then the user priority fields are allocated two per TC, and each TC has two queues mapped to it; then
[all …]
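For reference, an illustrative rte_eth_conf (not the sample app's exact code) showing how VMDq pools and DCB traffic classes combine: the VLAN ID selects the pool, the user priority selects the queue within it. Pool count, VLAN IDs, and the TC map below are arbitrary examples:

```c
#include <rte_ethdev.h>

static const struct rte_eth_conf vmdq_dcb_port_conf = {
	.rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB },
	.rx_adv_conf.vmdq_dcb_conf = {
		.nb_queue_pools = RTE_ETH_16_POOLS, /* e.g. 16 pools x 8 queues */
		.enable_default_pool = 0,
		.nb_pool_maps = 2,
		.pool_map = {
			/* VLAN ID -> pool bitmask: picks the queue group */
			{ .vlan_id = 100, .pools = 1ULL << 0 },
			{ .vlan_id = 101, .pools = 1ULL << 1 },
		},
		/* user priorities 0..7 spread over the traffic classes,
		 * which select the queue within the chosen pool */
		.dcb_tc = { 0, 1, 2, 3, 4, 5, 6, 7 },
	},
};
```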
link_status_intr.rst
48 * -q NQ: The number of queues (=ports) per lcore (default is 1)
95 The next step is to configure the RX and TX queues.
97 The number of TX queues depends on the number of available lcores.
102 :start-after: Configure RX and TX queues. 8<
103 :end-before: >8 End of configure RX and TX queues.
155 which specifies the number of queues per lcore.
172 :end-before: >8 End of list of queues to be polled.
182 :end-before: >8 End of list of queues to be polled.
196 The global configuration for TX queues is stored in a static structure:
217 :start-after: Read packet from RX queues. 8<
[all …]
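The application's core mechanism is the link-status interrupt rather than queue polling. A sketch of registering such a callback, assuming the port was configured with .intr_conf.lsc = 1 (the callback name is illustrative):

```c
#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);
	if (type == RTE_ETH_EVENT_INTR_LSC &&
	    rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("Port %u link %s\n", port_id,
		       link.link_status ? "up" : "down");
	return 0;
}

/* registration, done once at init time:
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *				 lsc_event_cb, NULL);
 */
```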
vmdq_forwarding.rst
8 The application performs L2 forwarding using VMDq to divide the incoming traffic into queues.
17 …it the incoming packets up into different "pools" - each with its own set of RX queues - based upon
21 With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each threa…
22 multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and …
24 As supplied, the sample application configures the VMDq feature to have 32 pools with 4 queues each.
25 …0 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 2 queues.
26 … or XL710 Ethernet Controller NICs support many configurations of VMDq pools of 4 or 8 queues each.
27 And the number of queues for each VMDq pool can be changed by setting RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
89 For the VLAN IDs, each one can be allocated to possibly multiple pools of queues.
l2_forward_real_virtual.rst
109 * q NQ: The number of queues (=ports) per lcore (default is 1)
122 To run the application in a Linux environment with 4 lcores, 4 ports, 8 RX queues
204 The next step is to configure the RX and TX queues.
206 The number of TX queues depends on the number of available lcores.
211 :start-after: Configure the number of queues for a port.
212 :end-before: >8 End of configuration of the number of queues for a port.
221 which specifies the number of queues per lcore.
237 :start-after: List of queues to be polled for a given lcore. 8<
238 :end-before: >8 End of list of queues to be polled for a given lcore.
266 :start-after: Read packet from RX queues. 8<
[all …]
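A sketch of the queue-configuration step these excerpts document, using the standard ethdev calls with one RX and one TX queue per port; the descriptor counts are typical defaults and the mempool is assumed to exist:

```c
#include <rte_ethdev.h>

static int
l2fwd_setup_port_queues(uint16_t port_id, struct rte_mempool *pool)
{
	const uint16_t nb_rxd = 1024, nb_txd = 1024;
	struct rte_eth_conf conf = {0};
	int ret;

	/* 1 RX queue and 1 TX queue per port, as in the default l2fwd setup */
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
				     rte_eth_dev_socket_id(port_id), NULL, pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;
	return rte_eth_dev_start(port_id);
}
```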
/dpdk/drivers/raw/cnxk_bphy/
cnxk_bphy.c
47 unsigned int i, queues, descs; in bphy_rawdev_selftest() local
52 queues = rte_rawdev_queue_count(dev_id); in bphy_rawdev_selftest()
53 if (queues == 0) in bphy_rawdev_selftest()
55 if (queues != BPHY_QUEUE_CNT) in bphy_rawdev_selftest()
173 struct bphy_irq_queue *qp = &bphy_dev->queues[0]; in cnxk_bphy_irq_enqueue_bufs()
181 if (queue >= RTE_DIM(bphy_dev->queues)) in cnxk_bphy_irq_enqueue_bufs()
255 if (queue >= RTE_DIM(bphy_dev->queues)) in cnxk_bphy_irq_dequeue_bufs()
261 qp = &bphy_dev->queues[queue]; in cnxk_bphy_irq_dequeue_bufs()
277 return RTE_DIM(bphy_dev->queues); in cnxk_bphy_irq_queue_count()
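The selftest above boils down to validating the reported queue count. A generic sketch of that pattern for any rawdev (the expected count is driver-specific, e.g. BPHY_QUEUE_CNT here):

```c
#include <rte_rawdev.h>

static int
check_rawdev_queues(uint16_t dev_id, unsigned int expected)
{
	unsigned int queues = rte_rawdev_queue_count(dev_id);

	/* mirror the two selftest checks: no queues, or wrong count */
	if (queues == 0 || queues != expected)
		return -1;
	return 0;
}
```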
cnxk_bphy_cgx_test.c
40 unsigned int queues, i; in cnxk_bphy_cgx_dev_selftest() local
43 queues = rte_rawdev_queue_count(dev_id); in cnxk_bphy_cgx_dev_selftest()
44 if (queues == 0) in cnxk_bphy_cgx_dev_selftest()
51 for (i = 0; i < queues; i++) { in cnxk_bphy_cgx_dev_selftest()
cnxk_bphy_cgx.c
23 struct cnxk_bphy_cgx_queue queues[MAX_LMACS_PER_CGX]; member
58 struct cnxk_bphy_cgx_queue *qp = &cgx->queues[queue]; in cnxk_bphy_cgx_process_buf()
189 qp = &cgx->queues[queue]; in cnxk_bphy_cgx_dequeue_bufs()
222 for (i = 0; i < RTE_DIM(cgx->queues); i++) { in cnxk_bphy_cgx_init_queues()
226 cgx->queues[cgx->num_queues++].lmac = i; in cnxk_bphy_cgx_init_queues()
236 rte_free(cgx->queues[i].rsp); in cnxk_bphy_cgx_fini_queues()
/dpdk/app/test/
test_eventdev.c
990 uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; in test_eventdev_link() local
1003 queues[i] = i; in test_eventdev_link()
1018 uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; in test_eventdev_unlink() local
1030 queues[i] = i; in test_eventdev_unlink()
1046 uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; in test_eventdev_link_get() local
1060 queues[i] = i; in test_eventdev_link_get()
1071 queues[i] = i; in test_eventdev_link_get()
1086 queues[0] = 0; in test_eventdev_link_get()
1103 queues[i] = i; in test_eventdev_link_get()
1112 queues[i] = i; in test_eventdev_link_get()
[all …]
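The tests above all build a queues[] array of consecutive IDs and link it to a port. A sketch of that pattern; passing NULL priorities requests normal priority for every link:

```c
#include <rte_eventdev.h>

static int
link_all_queues(uint8_t dev_id, uint8_t port_id, uint8_t nb_queues)
{
	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
	uint8_t i;

	for (i = 0; i < nb_queues; i++)
		queues[i] = i;
	/* NULL priorities = RTE_EVENT_DEV_PRIORITY_NORMAL for every link;
	 * the return value is the number of links actually established. */
	return rte_event_port_link(dev_id, port_id, queues, NULL, nb_queues);
}
```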
/dpdk/drivers/event/dsw/
dsw_evdev.c
101 struct dsw_queue *queue = &dsw->queues[queue_id]; in dsw_queue_setup()
170 const uint8_t queues[], uint16_t num, bool link) in dsw_port_link_unlink() argument
178 uint8_t qid = queues[i]; in dsw_port_link_unlink()
179 struct dsw_queue *q = &dsw->queues[qid]; in dsw_port_link_unlink()
194 dsw_port_link(struct rte_eventdev *dev, void *port, const uint8_t queues[], in dsw_port_link() argument
197 return dsw_port_link_unlink(dev, port, queues, num, true); in dsw_port_link()
201 dsw_port_unlink(struct rte_eventdev *dev, void *port, uint8_t queues[], in dsw_port_unlink() argument
204 return dsw_port_link_unlink(dev, port, queues, num, false); in dsw_port_unlink()
255 struct dsw_queue *queue = &dsw->queues[queue_id]; in initial_flow_to_port_assignment()
262 dsw->queues[queue_id].flow_to_port_map[flow_hash] = in initial_flow_to_port_assignment()
/dpdk/doc/guides/testpmd_app_ug/
run_app.rst
273 Set the number of RX queues per port to N, where 1 <= N <= 65535.
283 Set the number of TX queues per port to N, where 1 <= N <= 65535.
295 number of TX queues and to the number of RX queues. Then the first
378 feature is engaged. Affects only the queues configured
384 feature is engaged. Affects only the queues configured
401 Create queues in shared Rx queue mode if the device supports it.
429 configuration of rx and tx queues before device is started
608 For example, if testpmd is configured to have 4 Tx and Rx queues,
609 queues 0 and 1 will be used by the primary process and
610 queues 2 and 3 will be used by the secondary process.
[all …]
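What --rxq/--txq ultimately control is the pair of queue counts handed to rte_eth_dev_configure(). A sketch of that translation, bounds-checked against the device's advertised limits:

```c
#include <rte_ethdev.h>

static int
configure_port_queue_counts(uint16_t port_id, uint16_t rxq, uint16_t txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = {0};

	if (rte_eth_dev_info_get(port_id, &info) < 0)
		return -1;
	/* reject counts beyond what the device supports */
	if (rxq > info.max_rx_queues || txq > info.max_tx_queues)
		return -1;
	return rte_eth_dev_configure(port_id, rxq, txq, &conf);
}
```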
/dpdk/drivers/net/mlx5/
mlx5_rxq.c
2180 if (mlx5_is_external_rxq(dev, queues[i])) in mlx5_rxqs_deref()
2211 if (mlx5_rxq_ref(dev, queues[i]) == NULL) in mlx5_rxqs_ref()
2217 mlx5_rxqs_deref(dev, queues, i); in mlx5_rxqs_ref()
2399 (!memcmp(ind_tbl->queues, queues, in mlx5_ind_table_obj_match_queues()
2426 (memcmp(ind_tbl->queues, queues, in mlx5_ind_table_obj_get()
2585 memcpy(ind_tbl->queues, queues, queues_n * sizeof(*queues)); in mlx5_ind_table_obj_new()
2671 mlx5_rxqs_deref(dev, queues, queues_n); in mlx5_ind_table_obj_modify()
2679 ind_tbl->queues = queues; in mlx5_ind_table_obj_modify()
2744 mlx5_rxq_release(dev, ind_tbl->queues[i]); in mlx5_ind_table_obj_detach()
2813 queues, queues_n)) { in mlx5_hrxq_modify()
[all …]
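mlx5 maintains these indirection tables internally; the portable equivalent for steering RSS traffic onto a queue set is the ethdev RETA API. A hedged sketch that spreads the table round-robin over the first nb_queues queues (assumes reta_size <= 512 and nb_queues >= 1):

```c
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta[512 / RTE_ETH_RETA_GROUP_SIZE];
	struct rte_eth_dev_info info;
	uint16_t i;

	if (rte_eth_dev_info_get(port_id, &info) < 0 || info.reta_size > 512)
		return -1;
	memset(reta, 0, sizeof(reta));
	for (i = 0; i < info.reta_size; i++) {
		/* mark the entry valid, then map it to a queue round-robin */
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE]
			.reta[i % RTE_ETH_RETA_GROUP_SIZE] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
}
```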
/dpdk/doc/guides/nics/
vhost.rst
20 * It supports multiple queues.
37 #. ``queues``:
39 It is used to specify the number of queues the virtio-net device has.
93 ./dpdk-testpmd -l 0-3 -n 4 --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
104 -netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
dpaa.rst
48 - The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
146 - Multiple queues for TX and RX
208 This defines the number of Rx queues configured for an application, per
209 port. Hardware distributes packets across this many queues on Rx
211 In case the application is configured to use fewer queues than
217 These queues use one private HW portal per configured queue, so they are
218 limited in the system. The first configured ethdev queues will be
219 automatically assigned from these high-performance PUSH queues. Any queue
220 configuration beyond that will be standard Rx queues. The application can
224 Currently these queues are not used for the LS1023/LS1043 platform by default.
[all …]
hns3.rst
16 - Multiple queues for TX and RX
61 Maximum number of queues reserved for the PF.
174 flows and route them to specific queues.
201 and configure queues.
203 Configure queues as queue 0, 1, 2, 3.
208 queues 0 1 2 3 end / end
215 actions rss types ipv4-tcp l3-src-only end queues end / end
222 actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end
229 queues end func simple_xor / end
271 When the number of port queues corresponds to the number of CPU cores, the
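The testpmd flow commands quoted above correspond to rte_flow rules with an RSS action listing explicit queues. A hedged C sketch of the ipv4-tcp variant over queues 0-3 (values illustrative):

```c
#include <rte_common.h>
#include <rte_flow.h>

static struct rte_flow *
create_rss_rule(uint16_t port_id, struct rte_flow_error *err)
{
	static const uint16_t rss_queues[] = { 0, 1, 2, 3 };
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_rss rss = {
		/* hash ipv4-tcp flows over the listed queues */
		.types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
		.queue_num = RTE_DIM(rss_queues),
		.queue = rss_queues,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```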
/dpdk/drivers/raw/skeleton/
skeleton_rawdev.c
145 skeldev->queues[i].depth = SKELETON_QUEUE_DEF_DEPTH; in reset_queues()
146 skeldev->queues[i].state = SKELETON_QUEUE_DETACH; in reset_queues()
240 skelq = &skeldev->queues[queue_id]; in skeleton_rawdev_queue_def_conf()
275 q = &skeldev->queues[queue_id]; in skeleton_rawdev_queue_setup()
303 skeldev->queues[queue_id].state = SKELETON_QUEUE_DETACH; in skeleton_rawdev_queue_release()
304 skeldev->queues[queue_id].depth = SKELETON_QUEUE_DEF_DEPTH; in skeleton_rawdev_queue_release()
614 skeldev->queues[i].state = SKELETON_QUEUE_DETACH; in skeleton_rawdev_create()
615 skeldev->queues[i].depth = SKELETON_QUEUE_DEF_DEPTH; in skeleton_rawdev_create()
/dpdk/drivers/net/iavf/
iavf_ethdev.c
2053 queues++; in iavf_parse_queue_proto_xtr()
2055 if (*queues != '[') { in iavf_parse_queue_proto_xtr()
2065 queues++; in iavf_parse_queue_proto_xtr()
2068 queues++; in iavf_parse_queue_proto_xtr()
2069 if (*queues == '\0') in iavf_parse_queue_proto_xtr()
2076 queues += strcspn(queues, ")"); in iavf_parse_queue_proto_xtr()
2082 queues += strcspn(queues, ":"); in iavf_parse_queue_proto_xtr()
2086 queues++; in iavf_parse_queue_proto_xtr()
2105 queues += idx; in iavf_parse_queue_proto_xtr()
2107 while (isblank(*queues) || *queues == ',' || *queues == ']') in iavf_parse_queue_proto_xtr()
[all …]
/dpdk/drivers/raw/ioat/
dpdk_idxd_cfg.py
66 def configure_dsa(dsa_id, queues, prefix): argument
75 nb_queues = min(queues, max_queues)
76 if queues > nb_queues:
/dpdk/drivers/dma/idxd/
dpdk_idxd_cfg.py
66 def configure_dsa(dsa_id, queues, prefix): argument
75 nb_queues = min(queues, max_queues)
76 if queues > nb_queues:
/dpdk/drivers/event/opdl/
opdl_evdev.c
90 const uint8_t queues[], in opdl_port_link() argument
103 queues[0], in opdl_port_link()
125 queues[0]); in opdl_port_link()
136 queues[0]); in opdl_port_link()
141 p->external_qid = queues[0]; in opdl_port_link()
149 uint8_t queues[], in opdl_port_unlink() argument
154 RTE_SET_USED(queues); in opdl_port_unlink()
161 queues[0], in opdl_port_unlink()
/dpdk/doc/guides/dmadevs/
idxd.rst
51 and the work-queues, which are used by applications to assign work to the device,
59 To assign work queues to groups for passing descriptors to the engines, a similar accel-config comma…
60 However, the work queues also need to be configured depending on the use case.
63 * mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneous…
84 Every Intel\ |reg| DSA instance supports multiple queues and each should be similarly configured.
85 As a further example, the following set of commands will configure and enable 4 queues on instance …
94 # configure 4 queues, putting each in a different group, so each
105 # enable device and queues
129 among the queues.
143 use a subset of configured queues.
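Once the work queues are configured and enabled with accel-config as described, DPDK drives them through the generic dmadev API. A minimal mem-to-mem copy sketch (device probing and IOVA preparation elided; nb_desc is an arbitrary choice):

```c
#include <stdbool.h>
#include <rte_dmadev.h>

static int
dsa_copy_once(int16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	const struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
	const struct rte_dma_vchan_conf vconf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024,	/* arbitrary ring depth */
	};
	uint16_t last_idx;
	bool error = false;

	if (rte_dma_configure(dev_id, &dev_conf) < 0 ||
	    rte_dma_vchan_setup(dev_id, 0, &vconf) < 0 ||
	    rte_dma_start(dev_id) < 0)
		return -1;
	/* enqueue one copy and submit it to the work queue immediately */
	if (rte_dma_copy(dev_id, 0, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;
	while (rte_dma_completed(dev_id, 0, 1, &last_idx, &error) == 0)
		;	/* spin until the single descriptor completes */
	return error ? -1 : 0;
}
```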
/dpdk/lib/eventdev/
rte_eventdev.c
930 if (queues == NULL) { in rte_event_port_link()
934 queues = queues_list; in rte_event_port_link()
946 if (queues[i] >= dev->data->nb_queues) { in rte_event_port_link()
952 queues, priorities, nb_links); in rte_event_port_link()
994 if (queues == NULL) { in rte_event_port_unlink()
1003 queues = all_queues; in rte_event_port_unlink()
1006 if (links_map[queues[j]] == in rte_event_port_unlink()
1014 if (queues[i] >= dev->data->nb_queues) { in rte_event_port_unlink()
1020 queues, nb_unlinks); in rte_event_port_unlink()
1056 uint8_t queues[], uint8_t priorities[]) in rte_event_port_links_get() argument
[all …]
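The NULL checks above implement a convenience convention: a NULL queues argument means "all queues". A sketch of using it to unlink everything and then confirm via links_get:

```c
#include <rte_eventdev.h>

static int
unlink_and_report(uint8_t dev_id, uint8_t port_id)
{
	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
	uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
	int n;

	/* NULL + 0 unlinks every queue currently linked to this port */
	n = rte_event_port_unlink(dev_id, port_id, NULL, 0);
	if (n < 0)
		return n;
	/* after unlinking, this should report zero remaining links */
	return rte_event_port_links_get(dev_id, port_id, queues, priorities);
}
```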
