/dpdk/examples/vm_power_manager/

oob_monitor_x86.c
    33  apply_policy(int core)    in apply_policy() argument
    55  core);                    in apply_policy()
    64  core);                    in apply_policy()
   159  core);                    in add_core_to_monitor()
   171  core);                    in add_core_to_monitor()
   180  core);                    in add_core_to_monitor()
   193  core);                    in add_core_to_monitor()
   225  core);                    in remove_core_from_monitor()
   234  core);                    in remove_core_from_monitor()
   243  core);                    in remove_core_from_monitor()
   [all …]

oob_monitor_nop.c
    14  apply_policy(__rte_unused int core)              in apply_policy() argument
    20  add_core_to_monitor(__rte_unused int core)       in add_core_to_monitor() argument
    26  remove_core_from_monitor(__rte_unused int core)  in remove_core_from_monitor() argument

oob_monitor.h
    49  int add_core_to_monitor(int core);
    61  int remove_core_from_monitor(int core);
|
/dpdk/examples/service_cores/

main.c
   128  uint32_t core = i + core_off;                 in apply_profile() local
   129  ret = rte_service_lcore_add(core);            in apply_profile()
   131  printf("core %d added ret %d\n", core, ret);  in apply_profile()
   133  ret = rte_service_lcore_start(core);          in apply_profile()
   135  printf("core %d start ret %d\n", core, ret);  in apply_profile()
   138  if (rte_service_map_lcore_set(s, core,        in apply_profile()
   140  printf("failed to map lcore %d\n", core);     in apply_profile()
   145  uint32_t core = i + core_off;                 in apply_profile() local
   147  ret = rte_service_map_lcore_set(s, core, 0);  in apply_profile()
   153  ret = rte_service_lcore_stop(core);           in apply_profile()
   [all …]
|
/dpdk/doc/guides/sample_app_ug/

packet_ordering.rst
    15  * RX core (main core) receives traffic from the NIC ports and feeds Worker
    18  * Worker (worker core) basically do some light work on the packet.
    22  * TX Core (worker core) receives traffic from Worker cores through software queues,
    50  The first CPU core in the core mask is the main core and would be assigned to
    51  RX core, the last to TX core and the rest to Worker cores.

test_pipeline.rst
    15  * Core A ("RX core") receives traffic from the NIC ports and feeds core B with traffic through SW…
    17  * Core B ("Pipeline core") implements a single-table DPDK pipeline
    19  Core B receives traffic from core A through software queues,
    21  are hit by the input packets and feeds it to core C through another set of software queues.
    23  * Core C ("TX core") receives traffic from core B through software queues and sends it to the NIC…
    51  The first CPU core in the core mask is assigned for core A, the second for core B and the third for…
   104  …| | At run time, core A is creating the f…
   106  … | | core B to use for table …
   127  …| | At run time, core A is creating the f…
   128  … | key and storing it into the packet meta data for core |
   [all …]

service_cores.rst
    32  pass a service core-mask as an EAL argument at startup time.
    44  service core counts and mappings at runtime.
    66  This section demonstrates how to add a service core. The ``rte_service.h``
    71  These are the functions to start a service core, and have it run a service:
    82  To remove a service core, the steps are similar to adding but in reverse order.
    83  Note that it is not allowed to remove a service core if the service is running,
    84  and the service-core is the only core running that service (see documentation
    92  is to abstract away hardware differences: the service core can CPU cycles to
|
eventdev_pipeline.rst
    38  * ``-r1``: core mask 0x1 for RX
    39  * ``-t1``: core mask 0x1 for TX
    40  * ``-e4``: core mask 0x4 for the software scheduler
    41  * ``-w FF00``: core mask for worker cores, 8 cores from 8th to 16th
    54  (e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
    60  pipeline, please check core masks (use -h for details on setting core masks):

vm_power_management.rst
    80  the physical core mapping
   321  Enable query of physical core information from a VM:
   389  Get the current frequency for the specified core:
   775  to adjust the frequency of a core,
   867  between branch hits and misses on a core
   869  a core is doing.
   894  The core to which to apply a power command.
   898  A valid core ID for the VM or host OS.
   914  - SCALE_UP: Scale up the frequency of this core.
   915  - SCALE_DOWN: Scale down the frequency of this core.
   [all …]

keep_alive.rst
    10  of this failure. Its purpose is to ensure the failure of the core
    24  pings within the specified time interval. When a core is Dead, a
    25  callback function is invoked to restart the packet processing core;
    27  higher level fault management entity of the core failure in order to
    31  or agent to supervise the Keep Alive Monitor Agent Core DPDK core is required
   128  The rte_keepalive_mark_alive function simply sets the core state to alive.
|
/dpdk/app/test/

test_stack_perf.c
    37  unsigned int core[2];                                      in get_two_hyperthreads() local
    44  core[0] = rte_lcore_to_cpu_id(id[0]);                      in get_two_hyperthreads()
    45  core[1] = rte_lcore_to_cpu_id(id[1]);                      in get_two_hyperthreads()
    48  if ((core[0] == core[1]) && (socket[0] == socket[1])) {    in get_two_hyperthreads()
    63  unsigned int core[2];                                      in get_two_cores() local
    70  core[0] = rte_lcore_to_cpu_id(id[0]);                      in get_two_cores()
    71  core[1] = rte_lcore_to_cpu_id(id[1]);                      in get_two_cores()
    74  if ((core[0] != core[1]) && (socket[0] == socket[1])) {    in get_two_cores()
|
/dpdk/doc/guides/prog_guide/

service_cores.rst
     8  performing work on DPDK lcores. Service core support is built into the EAL, and
    14  running services). The power of the service core concept is that the mapping
    23  For detailed information about the service core API, please refer to the docs.
    38  Each registered service can be individually mapped to a service core, or set of
    39  service cores. Enabling a service on a particular core means that the lcore in
    40  question will run the service. Disabling that core on the service stops the
    44  service core, and map N workloads to M number of service cores. Each service
    45  lcore loops over the services that are enabled for that core, and invokes the
    51  The service core library is capable of collecting runtime statistics like number
|
timer_lib.rst
    14  * Timers can be loaded from one core and executed on another. It has to be specified in the call …
    16  …pends on the call frequency to rte_timer_manage() that checks timer expiration for the local core).
    30  with all pending timers for a core being maintained in order of timer expiry in a skiplist data str…
    34  This means that adding and removing entries from the timer list for a core can be done in log(n) ti…
    43  * CONFIG: owned by a core, must not be modified by another core, maybe in a list or not, dependin…
    45  * PENDING: owned by a core, present in a list
    47  * RUNNING: owned by a core, must not be modified by another core, present in a list
    57  the expiry time of the first list entry is maintained within the per-core timer list structure itse…
    63  …age() returns without taking a lock in the case where the timer list for the calling core is empty.
|
power_man.rst
    65  Per-core Turbo Boost
    68  Individual cores can be allowed to enter a Turbo Boost state on a per-core
    71  core.
    78  So even though an application may request a scale down, the core frequency will
   119  a core is hugely important for the following reasons:
   139  The less the number of empty polls, means current core is busy with processing
   141  indicates the current core not doing any real work therefore, we can lower the
   144  In the current implementation, each core has 1 empty-poll counter which assume
   146  support multiple queues per core.
   219  scale the core frequency up/down depending on traffic volume.
   [all …]
|
ring_lib.rst
   244  In the figure, the operation succeeded on core 1, and step one restarted on core 2.
   257  The CAS operation is retried on core 2 with success.
   259  The core 1 updates one element of the ring(obj4), and the core 2 updates another one (obj5).
   272  Each core now wants to update ring->prod_tail.
   273  A core can only update it if ring->prod_tail is equal to the prod_head local variable.
   274  This is only true on core 1. The operation is finished on core 1.
   287  Once ring->prod_tail is updated by core 1, core 2 is allowed to update it too.
   288  The operation is also finished on core 2.
   370  per core) this is usually the most suitable and fastest synchronization mode.
   483  /* Pkt I/O core polls packets from the NIC */
|
mempool_lib.rst
    12  It provides some other optional services such as a per-core object cache and
    80  the memory pool allocator can maintain a per-core cache and do bulk requests to the memory pool's r…
    82  In this way, each core has full access to its own cache (with locks) of free objects and
    83  only when the cache fills does the core need to shuffle some of the free objects back to the pools …
    86  While this may mean a number of buffers may sit idle on some core's cache,
    87  the speed at which a core can access its own cache for a specific memory pool without locks provide…
    89  The cache is composed of a small, per-core table of pointers and its length (used as a stack).
|
/dpdk/.ci/

linux-build.sh
    43  sudo sysctl -w kernel.core_pattern=/tmp/dpdk-core.%e.%p
    47  ls /tmp/dpdk-core.*.* 2>/dev/null || return 0
    48  for core in /tmp/dpdk-core.*.*; do
    49  binary=$(sudo readelf -n $core |grep $(pwd)/build/ 2>/dev/null |head -n1)
    51  sudo gdb $binary -c $core \
|
/dpdk/usertools/

cpu_layout.py
    18  core = int(fd.read())    variable
    23  if core not in cores:
    24  cores.append(core)
    27  key = (socket, core)
|
/dpdk/doc/guides/cryptodevs/

scheduler.rst
    50  to be allocated (by default, socket_id will be the socket where the core
   107  the throughput gap between the physical core and the existing cryptodevs
   151  *Initialization mode parameter*: **multi-core**
   153  Multi-core mode, which distributes the workload with several (up to eight)
   157  For pure small packet size (64 bytes) traffic however the multi-core mode is not
   158  an optimal solution, as it doesn't give significant per-core performance improvement.
   165  The multi-core mode uses one extra parameter:
   169  These cores should be present in EAL core list parameter and
   174  --vdev "crypto_scheduler,worker=aesni_mb_1,worker=aesni_mb_2,mode=multi-core,corelist=23;24" ...
|
/dpdk/doc/guides/linux_gsg/

eal_args.include.rst
     7  * ``-c <core mask>``
    11  * ``-l <core list>``
    16  where ``c1``, ``c2``, etc are core indexes between 0 and 128.
    18  * ``--lcores <core map>``
    33  At a given instance only one core option ``--lcores``, ``-l`` or ``-c`` can
    36  * ``--main-lcore <core ID>``
    40  * ``-s <service core mask>``

build_sample_apps.rst
    43  An hexadecimal bit mask of the cores to run on. Note that core numbering can
    45  a set of core numbers instead of a bitmap core mask.
   130  Each bit of the mask corresponds to the equivalent logical core number as reported by Linux. The pr…
   131  Since these logical core numbers, and their mapping to specific cores on specific NUMA sockets, can…
   132  it is recommended that the core layout for each platform be considered when choosing the coremask/c…
   141  A more graphical view of the logical core layout
   154  …The logical core layout can change between different board layouts and should be checked before se…
|
/dpdk/doc/guides/eventdevs/

sw.rst
     8  wide range of the eventdev features. The eventdev relies on a CPU core to
     9  perform event scheduling. This PMD can use the service core library to run the
    11  cores to multiplex other work on the same core if required.
    55  schedule in a single schedule call performed by the service core. Note that
   139  The software eventdev is a centralized scheduler, requiring a service core to
   152  This allows a core to wait for an event to arrive, or until ``timeout`` number
|
/dpdk/doc/guides/windows_gsg/

run_apps.rst
    85  hello from core 1
    86  hello from core 3
    87  hello from core 0
    88  hello from core 2
|
/dpdk/doc/guides/freebsd_gsg/

install_from_ports.rst
    75  official DPDK package from https://core.dpdk.org/download/ and install manually using
   110  hello from core 1
   111  hello from core 2
   112  hello from core 3
   113  hello from core 0
|
/dpdk/doc/guides/tools/

testregex.rst
    16  By default the test supports one QP per core, however a higher number of cores
    19  (per core) - the enqueue/dequeue RegEx operations are interleaved as follows::
    31  The test outputs the following data per QP and core:
|