..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2020 Intel Corporation.

Driver for the Intel® Dynamic Load Balancer (DLB)
=================================================

The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer.

Prerequisites
-------------

Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up
the basic DPDK environment.

Configuration
-------------

The DLB PF PMD is a user-space PMD that uses VFIO to gain direct
device access. To use this operation mode, the PCIe PF device must be bound
to a DPDK-compatible VFIO driver, such as vfio-pci.
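
For example, assuming the device shows up at PCI address ``ea:00.0`` (the
address will differ from system to system), it can be bound with the standard
DPDK binding tool:

    .. code-block:: console

       modprobe vfio-pci
       usertools/dpdk-devbind.py --bind=vfio-pci ea:00.0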

Eventdev API Notes
------------------

The DLB provides the functions of a DPDK event device; specifically, it
supports atomic, ordered, and parallel scheduling of events from queues to
ports. However, the DLB hardware is not a perfect match to the eventdev API.
Some DLB features are abstracted by the PMD (e.g. directed ports), some are
only accessible as vdev command-line parameters, and certain eventdev features
are not supported (e.g. the event flow ID is not maintained during scheduling).

In general, the dlb PMD is designed for ease of use and does not require a
detailed understanding of the hardware, but these details are important when
writing high-performance code. This section describes the places where the
eventdev API and the DLB misalign.

Scheduling Domain Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 32 scheduling domains in the DLB.
When one is configured, it allocates load-balanced and
directed queues, ports, credits, and other hardware resources. Some
resource allocations are user-controlled -- the number of queues, for example
-- and others, like credit pools (one directed and one load-balanced pool per
scheduling domain), are not.

The DLB is a closed system eventdev, and as such the ``nb_events_limit`` device
setup argument and the per-port ``new_event_threshold`` argument apply as
defined in the eventdev header file. The limit is applied to all enqueues,
regardless of whether the enqueue will consume a directed or a load-balanced
credit.
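
As a sketch, both values are supplied through the standard eventdev
configuration structures; the device ID and the sizes below are illustrative
only, not DLB requirements:

    .. code-block:: c

       #include <rte_eventdev.h>

       struct rte_event_dev_config dev_conf = {0};
       struct rte_event_port_conf port_conf = {0};

       /* Cap the total number of in-flight events in the scheduling domain. */
       dev_conf.nb_events_limit = 4096;
       /* ... fill in the remaining rte_event_dev_config fields ... */
       rte_event_dev_configure(dev_id, &dev_conf);

       /* Per-port back-pressure point for RTE_EVENT_OP_NEW enqueues. */
       port_conf.new_event_threshold = 2048;
       port_conf.enqueue_depth = 32;
       port_conf.dequeue_depth = 32;
       rte_event_port_setup(dev_id, 0, &port_conf);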

Reconfiguration
~~~~~~~~~~~~~~~

The Eventdev API allows one to reconfigure a device, its ports, and its queues
by first stopping the device, calling the configuration function(s), then
restarting the device. The DLB does not support configuring an individual queue
or port without first reconfiguring the entire device, however, so there are
certain reconfiguration sequences that are valid in the eventdev API but not
supported by the PMD.

Specifically, the PMD supports the following configuration sequence:

1. Configure and start the device
2. Stop the device
3. (Optional) Reconfigure the device
4. (Optional) If step 3 is run:

   a. Setup queue(s). The reconfigured queue(s) lose their previous port links.
   b. Setup port(s). The reconfigured port(s) lose their previous queue links.

5. (Optional, only if steps 4a and 4b are run) Link port(s) to queue(s)
6. Restart the device. If the device is reconfigured in step 3 but one or more
   of its ports or queues are not, the PMD will apply their previous
   configuration (including port->queue links) at this time.

The PMD does not support the following configuration sequences:

1. Configure and start the device
2. Stop the device
3. Setup queue or setup port
4. Start the device

This sequence is not supported because the event device must be reconfigured
before its ports or queues can be.
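
A minimal sketch of the supported sequence, with error handling omitted and
``dev_id``, ``qid`` (a ``uint8_t``), ``pid``, and the configuration structures
as placeholders:

    .. code-block:: c

       rte_event_dev_stop(dev_id);

       /* Step 3: reconfigure the whole device first... */
       rte_event_dev_configure(dev_id, &dev_conf);

       /* Step 4: ...then set up any queues and ports that need new settings... */
       rte_event_queue_setup(dev_id, qid, &queue_conf);
       rte_event_port_setup(dev_id, pid, &port_conf);

       /* Step 5: ...re-establish the links the reconfigured objects lost... */
       rte_event_port_link(dev_id, pid, &qid, NULL, 1);

       /* Step 6: ...and restart. Untouched ports/queues keep their previous
        * configuration, which the PMD re-applies at this point.
        */
       rte_event_dev_start(dev_id);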

Load-Balanced Queues
~~~~~~~~~~~~~~~~~~~~

A load-balanced queue can support atomic and ordered scheduling, or atomic and
unordered scheduling, but not all three (atomic, ordered, and unordered) at
once. A queue's scheduling types are controlled by the event queue
configuration.

If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
``nb_atomic_order_sequences`` field determines the supported scheduling types.
With a non-zero ``nb_atomic_order_sequences``, the queue is configured for
atomic and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL``
scheduling is supported by scheduling those events as ordered events. Note that
when such an event is dequeued, its sched_type will be
``RTE_SCHED_TYPE_ORDERED``. If instead ``nb_atomic_order_sequences`` is zero,
the queue is configured for atomic and unordered scheduling, and
``RTE_SCHED_TYPE_ORDERED`` is unsupported.

If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, the ``schedule_type``
field dictates the queue's scheduling type.
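
For example, a queue meant to accept atomic, ordered, and (scheduled as
ordered) parallel events could be set up as follows; the counts are
illustrative, not required values:

    .. code-block:: c

       struct rte_event_queue_conf queue_conf = {
               .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES,
               .nb_atomic_order_sequences = 1024, /* non-zero: atomic + ordered */
               .nb_atomic_flows = 1024,           /* ignored by the DLB PMD */
               .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
       };

       rte_event_queue_setup(dev_id, queue_id, &queue_conf);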

The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
queue's reorder buffer size. The DLB has 4 groups of ordered queues, and each
group can be configured to contain 1 queue with 1024 reorder entries, 2 queues
with 512 reorder entries, and so on, down to 32 queues with 32 entries each.

When a load-balanced queue is created, the PMD will configure a new sequence
number group on-demand if num_sequence_numbers does not match a pre-existing
group with available reorder buffer entries. If all sequence number groups are
in use, no new group will be created and queue configuration will fail. (Note
that when the PMD is used with a virtual DLB device, it cannot change the
sequence number configuration.)

The queue's ``nb_atomic_flows`` parameter is ignored by the DLB PMD, because
the DLB does not limit the number of flows a queue can track. In the DLB, all
load-balanced queues can use the full 16-bit flow ID range.

Load-balanced and Directed Ports
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DLB ports come in two flavors: load-balanced and directed. The eventdev API
does not have the same concept, but it has a similar one: ports and queues that
are singly-linked (i.e. linked to a single queue or port, respectively).

The ``rte_event_dev_info_get()`` function reports the number of available
event ports and queues (among other things). For the DLB PMD, max_event_ports
and max_event_queues report the number of available load-balanced ports and
queues, and max_single_link_event_port_queue_pairs reports the number of
available directed ports and queues.
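
For instance, an application can query these counts as follows (``dev_id`` is
a placeholder):

    .. code-block:: c

       struct rte_event_dev_info info;

       rte_event_dev_info_get(dev_id, &info);

       /* For the DLB PMD, these report load-balanced and directed resources. */
       printf("load-balanced ports: %d, load-balanced queues: %d, "
              "directed port/queue pairs: %d\n",
              info.max_event_ports, info.max_event_queues,
              info.max_single_link_event_port_queue_pairs);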

When a scheduling domain is created in ``rte_event_dev_configure()``, the user
specifies ``nb_event_ports`` and ``nb_single_link_event_port_queues``, which
control the total number of ports (load-balanced and directed) and the number
of directed ports. Hence, the number of requested load-balanced ports is
``nb_event_ports - nb_single_link_event_port_queues``. The ``nb_event_queues``
field specifies the total number of queues (load-balanced and directed). The
number of directed queues comes from ``nb_single_link_event_port_queues``,
since directed ports and queues come in pairs.

When a port is set up, the ``RTE_EVENT_PORT_CFG_SINGLE_LINK`` flag determines
whether it should be configured as a directed (the flag is set) or a
load-balanced (the flag is unset) port. Similarly, the
``RTE_EVENT_QUEUE_CFG_SINGLE_LINK`` queue configuration flag controls
whether it is a directed or load-balanced queue.
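
As an illustration (the IDs and depths are arbitrary), a directed port/queue
pair could be created with the corresponding single-link flags and then linked
together:

    .. code-block:: c

       uint8_t dir_port_id = 1, dir_queue_id = 1; /* example IDs */

       struct rte_event_port_conf port_conf = {
               .new_event_threshold = 1024,
               .enqueue_depth = 8,
               .dequeue_depth = 8,
               .event_port_cfg = RTE_EVENT_PORT_CFG_SINGLE_LINK,
       };
       struct rte_event_queue_conf queue_conf = {
               .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
               .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
       };

       rte_event_port_setup(dev_id, dir_port_id, &port_conf);
       rte_event_queue_setup(dev_id, dir_queue_id, &queue_conf);
       rte_event_port_link(dev_id, dir_port_id, &dir_queue_id, NULL, 1);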

Load-balanced ports can only be linked to load-balanced queues, and directed
ports can only be linked to directed queues. Furthermore, directed ports can
only be linked to a single directed queue (and vice versa), and that link
cannot change after the eventdev is started.

The eventdev API does not have a directed scheduling type. To support directed
traffic, the dlb PMD detects when an event is being sent to a directed queue
and overrides its scheduling type. Note that the originally selected scheduling
type (atomic, ordered, or parallel) is not preserved, and an event's sched_type
will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued from a directed
port.

Flow ID
~~~~~~~

The flow ID field is not preserved in the event when it is scheduled in the
DLB, because the DLB hardware control word format does not have sufficient
space to preserve every event field. As a result, the flow ID specified with
the enqueued event will not be in the dequeued event. If this field is
required, the application should pass it through an out-of-band path (for
example in the mbuf's udata64 field, if the event points to an mbuf) or
reconstruct the flow ID after receiving the event.

Also, the DLB hardware control word supports a 16-bit flow ID. Since struct
rte_event's flow_id field is 20 bits, the DLB PMD drops the most significant
four bits from the event's flow ID.
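
One possible out-of-band scheme, sketched here with an mbuf dynamic field (the
field name is invented for the example; any application-specific mechanism
works equally well):

    .. code-block:: c

       #include <rte_mbuf_dyn.h>

       static const struct rte_mbuf_dynfield flow_id_dynfield_desc = {
               .name = "example_dlb_flow_id",
               .size = sizeof(uint32_t),
               .align = __alignof__(uint32_t),
       };
       static int flow_id_offset;

       /* Once, at initialization time. */
       flow_id_offset = rte_mbuf_dynfield_register(&flow_id_dynfield_desc);

       /* Before enqueue: stash the flow ID alongside the mbuf... */
       *RTE_MBUF_DYNFIELD(ev.mbuf, flow_id_offset, uint32_t *) = ev.flow_id;

       /* ...and after dequeue: recover it, since ev.flow_id is not preserved. */
       uint32_t flow_id = *RTE_MBUF_DYNFIELD(ev.mbuf, flow_id_offset, uint32_t *);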

Hardware Credits
~~~~~~~~~~~~~~~~

DLB uses a hardware credit scheme to prevent software from overflowing hardware
event storage, with each unit of storage represented by a credit. A port spends
a credit to enqueue an event, and hardware refills the ports with credits as the
events are scheduled to ports. Refills come from credit pools, and each port is
a member of a load-balanced credit pool and a directed credit pool. The
load-balanced credits are used to enqueue to load-balanced queues, and directed
credits are used for directed queues.

A DLB eventdev contains one load-balanced and one directed credit pool. These
pools' sizes are controlled by the nb_events_limit field in struct
rte_event_dev_config. The load-balanced pool is sized to contain
nb_events_limit credits, and the directed pool is sized to contain
nb_events_limit/4 credits. The directed pool size can be overridden with the
num_dir_credits vdev argument, like so:

    .. code-block:: console

       --vdev=dlb1_event,num_dir_credits=<value>

This can be used if the default allocation is too low or too high for the
application's specific needs. The PMD also supports a vdev arg that limits the
max_num_events reported by rte_event_dev_info_get():

    .. code-block:: console

       --vdev=dlb1_event,max_num_events=<value>

By default, max_num_events is reported as the total available load-balanced
credits. If multiple DLB-based applications are being used, it may be desirable
to control how many load-balanced credits each application uses, particularly
when application(s) are written to configure nb_events_limit equal to the
reported max_num_events.

Each port is a member of both credit pools. A port's credit allocation is
defined by its low watermark, high watermark, and refill quanta. These three
parameters are calculated by the dlb PMD like so:

- The load-balanced high watermark is set to the port's enqueue_depth.
  The directed high watermark is set to the minimum of the enqueue_depth and
  the directed pool size divided by the total number of ports.
- The refill quanta is set to half the high watermark.
- The low watermark is set to the minimum of 16 and the refill quanta.

When the eventdev is started, each port is pre-allocated a high watermark's
worth of credits. For example, if an eventdev contains four ports with enqueue
depths of 32 and a load-balanced credit pool size of 4096, each port will start
with 32 load-balanced credits, and there will be 3968 credits available to
replenish the ports. Thus, a single port is not capable of enqueueing up to the
nb_events_limit (without any events being dequeued), since the other ports are
retaining their initial credit allocation; in short, all ports must enqueue in
order to reach the limit.

If a port attempts to enqueue and has no credits available, the enqueue
operation will fail and the application must retry the enqueue. Credits are
replenished asynchronously by the DLB hardware.

Software Credits
~~~~~~~~~~~~~~~~

The DLB is a "closed system" event dev, and the DLB PMD layers a software
credit scheme on top of the hardware credit scheme in order to comply with
the per-port backpressure described in the eventdev API.

The DLB's hardware scheme is local to a queue/pipeline stage: a port spends a
credit when it enqueues to a queue, and credits are later replenished after the
events are dequeued and released.

In the software credit scheme, a credit is consumed when a new (.op =
RTE_EVENT_OP_NEW) event is injected into the system, and the credit is
replenished when the event is released from the system (either explicitly with
RTE_EVENT_OP_RELEASE or implicitly in dequeue_burst()).

In this model, an event is "in the system" from its first enqueue into eventdev
until it is last dequeued. If the event goes through multiple event queues, it
is still considered "in the system" while a worker thread is processing it.
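
In code terms, only the initial injection is marked ``RTE_EVENT_OP_NEW`` and
consumes a software credit; a worker that passes the same event to its next
queue marks it ``RTE_EVENT_OP_FORWARD`` and does not. A rough sketch, with
``ev`` and the port and queue IDs as placeholders:

    .. code-block:: c

       /* Injection point (e.g. an RX core): consumes one software credit. */
       ev.op = RTE_EVENT_OP_NEW;
       ev.queue_id = first_stage_queue;
       rte_event_enqueue_burst(dev_id, rx_port_id, &ev, 1);

       /* Worker: the event is still "in the system", no new credit is used. */
       rte_event_dequeue_burst(dev_id, worker_port_id, &ev, 1, 0);
       ev.op = RTE_EVENT_OP_FORWARD;
       ev.queue_id = next_stage_queue;
       rte_event_enqueue_burst(dev_id, worker_port_id, &ev, 1);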

A port will fail to enqueue if the number of events in the system exceeds its
``new_event_threshold`` (specified at port setup time). A port will also fail
to enqueue if it lacks enough hardware credits to enqueue; load-balanced
credits are used to enqueue to a load-balanced queue, and directed credits are
used to enqueue to a directed queue.

The out-of-credit situations are typically transient, and an eventdev
application using the DLB ought to retry its enqueues if they fail.
If an enqueue fails, the DLB PMD sets rte_errno as follows:

- -ENOSPC: Credit exhaustion (either hardware or software)
- -EINVAL: Invalid argument, such as port ID, queue ID, or sched_type.

Depending on the pipeline the application has constructed, it's possible to
enter a credit deadlock scenario wherein the worker thread lacks the credit
to enqueue an event, and it must dequeue an event before it can recover the
credit. If the worker thread retries its enqueue indefinitely, it will not
make forward progress. Such a deadlock is possible if the application has event
"loops", in which an event is dequeued from queue A and later enqueued back to
queue A.

Due to this, workers should stop retrying after a time, release the events they
are attempting to enqueue, and dequeue more events. It is important that the
worker release the events rather than simply set them aside to retry the
enqueue later, because the port has a limited history list size (by default,
twice the port's dequeue_depth).
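
A rough sketch of that policy follows; ``events``/``nb`` are the pending burst,
and the retry bound is an application choice, not a PMD parameter:

    .. code-block:: c

       uint16_t sent = 0;
       int retries = 0;

       while (sent < nb && retries++ < MAX_ENQ_RETRIES)
               sent += rte_event_enqueue_burst(dev_id, port_id,
                                               &events[sent], nb - sent);

       if (sent < nb) {
               /* Give up: turn the remaining events into releases so their
                * credits and history-list entries are returned, then go back
                * to dequeueing.
                */
               for (uint16_t i = sent; i < nb; i++)
                       events[i].op = RTE_EVENT_OP_RELEASE;
               rte_event_enqueue_burst(dev_id, port_id, &events[sent],
                                       nb - sent);
       }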

Priority
~~~~~~~~

The DLB supports event priority and per-port queue service priority, as
described in the eventdev header file. The DLB does not support 'global' event
queue priority established at queue creation time.

DLB supports 8 event and queue service priority levels. For both priority
types, the PMD uses the upper three bits of the priority field to determine the
DLB priority, discarding the five least significant bits. These five least
significant event priority bits are not preserved when an event is enqueued.
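
For example, because only the upper three bits survive, the following two
event priorities (values chosen purely for illustration) map to the same DLB
priority level:

    .. code-block:: c

       ev_a.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;      /* 0  -> level 0 */
       ev_b.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST + 31; /* 31 -> level 0 */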

Atomic Inflights Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the last stage prior to scheduling an atomic event to a CQ, DLB holds the
inflight event in a temporary buffer that is divided among load-balanced
queues. If a queue's atomic buffer storage fills up, this can result in
head-of-line blocking. For example:

- An LDB queue is allocated N atomic buffer entries.
- All N entries are filled with events from flow X, which is pinned to CQ 0.

Until CQ 0 releases 1+ events, no other atomic flows for that LDB queue can be
scheduled. The likelihood of this case depends on the eventdev configuration,
traffic behavior, event processing latency, potential for a worker to be
interrupted or otherwise delayed, etc.

By default, the PMD allocates 16 buffer entries for each load-balanced queue,
which provides an even division across all 128 queues but potentially wastes
buffer space (e.g. if not all queues are used, or aren't used for atomic
scheduling).

The PMD provides a dev arg to override the default per-queue allocation. To
increase a vdev's per-queue atomic-inflight allocation to (for example) 64:

    .. code-block:: console

       --vdev=dlb1_event,atm_inflights=64

Deferred Scheduling
~~~~~~~~~~~~~~~~~~~

The DLB PMD's default behavior for managing a CQ is to "pop" the CQ once per
dequeued event before returning from rte_event_dequeue_burst(). This frees the
corresponding entries in the CQ, which enables the DLB to schedule more events
to it.

To support applications seeking finer-grained scheduling control -- for example
deferring scheduling to get the best possible priority scheduling and
load-balancing -- the PMD supports a deferred scheduling mode. In this mode,
the CQ entry is not popped until the *subsequent* rte_event_dequeue_burst()
call. This mode only applies to load-balanced event ports with dequeue depth of
1.

To enable deferred scheduling, use the defer_sched vdev argument like so:

    .. code-block:: console

       --vdev=dlb1_event,defer_sched=on