Lines Matching refs:threads

18 different threads, and makes it possible to assign individual threads to
24 #. When there are multiple EAL threads per physical core.
25 #. When there are multiple lightweight threads per EAL thread.
33 threads the performance thread sample introduces the possibility to run the
34 application threads as lightweight threads (L-threads) within one or
35 more EAL threads.
76 NIC RX ports and queues handled by the RX lcores and threads. The parameters
79 * ``--tx (lcore,thread)[,(lcore,thread)]``: the list of TX threads identifying
136 Running with L-threads
140 in ``--rx/--tx`` are used to affinitize threads to the selected scheduler.
148 The following places RX L-threads on lcore 0 and TX L-threads on lcores 1 and 2
156 Running with EAL threads
160 off and EAL threads are used for all processing. EAL threads are enumerated in
161 the same way as L-threads, but the ``--lcores`` EAL parameter is used to
162 affinitize threads to the selected cpu-set (scheduler). Thus it is possible to
173 To affinitize two or more EAL threads to one cpu-set, the EAL ``--lcores``
176 The following places RX EAL threads on lcore 0 and TX EAL threads on lcore 1
188 For selected scenarios, the command line configuration of the application for L-threads
189 and the corresponding EAL-thread command line can be realized as follows:
204 b) Start all threads on one core (N:1).
206 Start 4 L-threads on lcore 0::
212 Start 4 EAL threads on cpu-set 0::
219 c) Start threads on different cores (N:M).
221 Start 2 L-threads for RX on lcore 0, and 2 L-threads for TX on lcore 1::
227 Start 2 EAL threads for RX on cpu-set 0, and 2 EAL threads for TX on
246 Mode of operation with EAL threads
250 into two different threads, and the RX and TX threads are
251 interconnected via software rings. With respect to these rings the RX threads
252 are producers and the TX threads are consumers.
254 On initialization the TX and RX threads are started according to the command
257 The RX threads poll the network interface queues and post received packets to a
260 The TX threads poll software rings, perform the L3 forwarding hash/LPM match,
268 The diagram below illustrates a case with two RX threads and three TX threads.
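
As a rough illustration of the producer/consumer relationship described in the lines above, a hedged sketch of an RX EAL thread and a TX EAL thread follows. This is not the sample's actual code; the ring, port, queue and burst values are illustrative only::

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST 32

    /* RX EAL thread: producer on the software ring */
    static void rx_loop(struct rte_ring *ring)
    {
        struct rte_mbuf *pkts[BURST];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(0, 0, pkts, BURST);
            unsigned int nb_q = rte_ring_sp_enqueue_burst(ring,
                    (void **)pkts, nb_rx, NULL);
            while (nb_q < nb_rx)              /* drop what did not fit */
                rte_pktmbuf_free(pkts[nb_q++]);
        }
    }

    /* TX EAL thread: consumer on the software ring */
    static void tx_loop(struct rte_ring *ring)
    {
        struct rte_mbuf *pkts[BURST];

        for (;;) {
            unsigned int nb_d = rte_ring_sc_dequeue_burst(ring,
                    (void **)pkts, BURST, NULL);
            /* the L3 forwarding hash/LPM lookup would happen here */
            uint16_t nb_tx = rte_eth_tx_burst(0, 0, pkts, (uint16_t)nb_d);
            while (nb_tx < nb_d)              /* drop unsent packets */
                rte_pktmbuf_free(pkts[nb_tx++]);
        }
    }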
275 Mode of operation with L-threads
279 functionality into different threads, and the pairs of RX and TX threads are
284 The L-thread started on the main EAL thread then spawns other L-threads on
287 The RX threads poll the network interface queues and post received packets
299 The worker threads poll the software rings, perform L3 route lookup and
307 This design means that L-threads that have no work can yield the CPU to other
308 L-threads and avoid having to constantly poll the software rings.
310 The diagram below illustrates a case with two RX threads and three TX functions
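
As a hedged sketch of that idle behaviour (illustrative names only, not the sample's code), a worker L-thread can yield to its scheduler whenever its ring is empty instead of spinning::

    #include <rte_mbuf.h>
    #include <rte_ring.h>
    #include "lthread_api.h"   /* from the sample's common directory */

    /* Worker L-thread: consume from its ring, yield when there is no work */
    static void worker_lthread(void *arg)
    {
        struct rte_ring *ring = arg;
        struct rte_mbuf *pkts[32];

        for (;;) {
            unsigned int n = rte_ring_sc_dequeue_burst(ring,
                    (void **)pkts, 32, NULL);
            if (n == 0) {
                lthread_yield();   /* let other L-threads on this scheduler run */
                continue;
            }
            /* L3 route lookup and transmit would happen here */
        }
    }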
327 When enabled, statistics are gathered by having the application threads set and
346 functions to run as cooperative threads within a single EAL thread.
351 performance and porting considerations when using L-threads.
356 Comparison between L-threads and POSIX pthreads
360 way in which threads are scheduled. The simplest way to think about this is to
361 consider the case of a processor with a single CPU. To run multiple threads
362 on a single CPU, the scheduler must frequently switch between the threads,
376 create and synchronize threads. Scheduling policy is determined by the host OS,
378 thread should be run next, threads may suspend themselves or make other threads
381 is ready to run. To complicate matters further threads may be assigned
385 L-thread scheduler performs the same multiplexing function for L-threads
390 not at all involved in the scheduling of L-threads.
393 L-threads are scheduled cooperatively. L-threads cannot preempt each
396 L-threads must possess frequent rescheduling points, meaning that they must
398 intervals, in order to allow other L-threads an opportunity to proceed.
400 In both models switching between threads requires that the current CPU
418 The scheduling policy for L-threads is fixed: there is no prioritization of
419 L-threads, all L-threads are equal, and scheduling is based on a FIFO
424 pointers to threads that are ready to run. The L-thread scheduler is a simple
434 and forth between the L-threads and the scheduler loop.
441 With L-threads the progress of any particular thread is determined by the
442 frequency of rescheduling opportunities in the other L-threads. This means that
443 an errant L-thread monopolizing the CPU might cause scheduling of other threads
445 rescheduling to ensure progress of other threads, if managed sensibly, is not
453 With pthreads preemption means that threads that share data must observe
456 The fact that L-threads cannot preempt each other means that in many cases
472 pthread model where threads can be affinitized to run on any CPU. With isolated
476 The L-thread subsystem makes it possible for L-threads to migrate between
478 that threads that share data end up running on different CPUs then this will
482 threads running on different cores; however, to protect other kinds of shared
497 As with applications written for pthreads an application written for L-threads
511 Constraints and performance implications when using L-threads
523 features available to pthreads are implemented for L-threads.
526 that can be associated with threads, mutexes, and condition variables.
529 cannot be varied, and L-threads cannot be prioritized. There are no variable
530 attributes associated with any L-thread objects. L-threads, mutexes and
651 Neither lthread signal nor broadcast may be called concurrently by L-threads
652 running on different schedulers, although multiple L-threads running in the
653 same scheduler may freely perform signal or broadcast operations. L-threads
675 An L-thread can only detach itself, and cannot detach other L-threads.
683 (for waiters) and do not use an associated mutex. Multiple L-threads (including
684 L-threads running on other schedulers) can safely wait on an L-thread condition
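
A hedged sketch of that waiting pattern follows; it is illustrative only, and the exact prototypes should be checked against ``lthread_api.h`` in the sample's common directory. Note that, unlike ``pthread_cond_wait()``, no mutex is passed to the wait call::

    #include "lthread_api.h"

    struct lthread_cond *data_ready;    /* created earlier with lthread_cond_init() */

    static void waiter(void *arg)
    {
        (void)arg;
        /* no associated mutex is taken before waiting */
        lthread_cond_wait(data_ready, 0);
        /* ... consume whatever the signalling L-thread produced ... */
    }

    static void producer(void *arg)
    {
        (void)arg;
        /* ... produce data ... */
        lthread_cond_signal(data_ready);    /* wake one waiting L-thread */
    }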
691 Recursive locking is not supported with L-threads; attempts to take a lock
714 threads appearing in the peer ready queue can jump any backlog in the local
732 L-threads.
734 In a synthetic test with many threads sleeping and resuming, the measured
757 ``RTE_PER_LCORE`` macros being ported to L-threads might need some slight
785 made from caches on the NUMA node on which the thread's creator is running.
817 ``rte_malloc()``. For creation of objects such as L-threads, which trigger the
833 backlog. The fewer threads that are waiting in the ready queue, the faster
836 In a naive L-thread application with N L-threads simply looping and yielding,
837 this backlog will always be equal to the number of L-threads, thus the cost of
840 This side effect can be mitigated by arranging for threads to be suspended and
865 no intention that L-threads running on different schedulers will migrate between
866 schedulers or synchronize with L-threads running on other schedulers, then
870 If there will be interaction between L-threads running on different schedulers,
871 then it is important that the starting of schedulers on different EAL threads
876 ``lthread_num_schedulers_set(n)``, where ``n`` is the number of EAL threads
879 started before beginning to schedule L-threads.
885 The function ``lthread_run()`` will not return until all threads running on
891 longer any running L-threads, neither function forces any running L-thread to
893 built into the application to ensure that L-threads complete in a timely
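
A hedged sketch of this start-up sequence might look as follows; the function names ``sched_start`` and ``launch_schedulers`` are illustrative, not the sample's code, and some L-thread must eventually call ``lthread_scheduler_shutdown_all()`` so that every ``lthread_run()`` can return::

    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include "lthread_api.h"

    static void initial_lthread(void *arg);   /* spawns the application L-threads */

    /* Runs once in every EAL thread: seed one L-thread, then run the scheduler */
    static int sched_start(void *arg)
    {
        struct lthread *lt;

        /* -1: create the L-thread on this lcore's scheduler */
        lthread_create(&lt, -1, initial_lthread, arg);
        lthread_run();          /* returns only after the scheduler is shut down */
        return 0;
    }

    static void launch_schedulers(void)
    {
        /* must equal the number of EAL threads that will call lthread_run() */
        lthread_num_schedulers_set((int)rte_lcore_count());

        rte_eal_mp_remote_launch(sched_start, NULL, CALL_MAIN);
        rte_eal_mp_wait_lcore();
    }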
903 Porting legacy code to run on L-threads
907 L-threads if the considerations about differences in scheduling policy, and
923 here, it should be feasible to run the application with L-threads. If the
936 threads. Any kind of blocking system call, for example file or socket IO, is a
949 to handling threads wishing to make blocking calls, and then back again when
961 If the application design ensures that the contending L-threads will always
980 When the contending L-threads are running on the same scheduler then an
984 If the application design ensures that contending L-threads will always run
991 ``lthread_yield()`` inside the loop (n.b. if the contending L-threads might
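
A hedged sketch of that pattern, assuming an ``rte_spinlock_t`` shared by L-threads on the same scheduler (illustrative only)::

    #include <rte_spinlock.h>
    #include "lthread_api.h"

    static rte_spinlock_t shared_lock = RTE_SPINLOCK_INITIALIZER;

    static void locked_section(void)
    {
        /* a plain spin would never let the lock holder run on this scheduler */
        while (rte_spinlock_trylock(&shared_lock) == 0)
            lthread_yield();            /* let the holder progress and unlock */

        /* ... critical section ... */

        rte_spinlock_unlock(&shared_lock);
    }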
1022 Some applications have threads with loops that contain no inherent
1073 L-threads, before investing effort in porting to the native L-thread APIs.
1085 order to create the EAL threads in which the L-thread schedulers will run.
1129 least some minimal adjustment and recompilation to run on L-threads so
1164 When debugging you must take account of the fact that the L-threads are run in
1202 main EAL thread after all worker threads have stopped and returned to the C