
Searched refs:memory (Results 1 – 25 of 196) sorted by relevance


/dpdk/lib/eal/common/
eal_common_dynmem.c:230 uint64_t memory[RTE_MAX_NUMA_NODES]; in eal_dynmem_hugepage_init() local
401 total_size = internal_conf->memory; in eal_dynmem_calc_num_pages_per_socket()
414 memory[socket] = default_size; in eal_dynmem_calc_num_pages_per_socket()
430 memory[socket] += default_size; in eal_dynmem_calc_num_pages_per_socket()
437 total_size = internal_conf->memory; in eal_dynmem_calc_num_pages_per_socket()
450 memory[socket] = total_size; in eal_dynmem_calc_num_pages_per_socket()
469 memory[socket] -= cur_mem; in eal_dynmem_calc_num_pages_per_socket()
475 if (memory[socket] == 0) in eal_dynmem_calc_num_pages_per_socket()
497 if (remaining_mem < memory[socket]) { in eal_dynmem_calc_num_pages_per_socket()
500 memory[socket] -= cur_mem; in eal_dynmem_calc_num_pages_per_socket()
[all …]
/dpdk/doc/guides/linux_gsg/
build_sample_apps.rst:48 Number of memory channels per processor socket.
60 this memory will also be pinned (i.e. not released back to the system until
65 legacy memory mode.
77 Number of memory ranks.
104 Run DPDK in legacy memory mode (disable memory reserve/unreserve at runtime,
105 but provide more IOVA-contiguous memory).
108 Store memory segments in fewer files (dynamic memory mode only - does not
109 affect legacy memory mode).
114 (assuming the platform has four memory channels per processor socket,
164 …ion itself can also fail if the user requests less memory than the reserved amount of hugepage-mem…
[all …]
linux_eal_parameters.rst:61 Use legacy DPDK memory allocation mode.
63 * ``--socket-mem <amounts of memory per socket>``
65 Preallocate specified amounts of memory per socket. The parameter is a
70 This will allocate 1 gigabyte of memory on socket 0, and 2048 megabytes of
71 memory on socket 1.
73 * ``--socket-limit <amounts of memory per socket>``
75 Place a per-socket upper limit on memory use (non-legacy memory mode only).
91 to ensure the kernel clears the memory and prevents any data leaks.
100 This makes restart faster by saving time to clear memory at initialization,
eal_args.include.rst:91 Attempt to use a different starting address for all memory maps of the
100 Set the number of memory channels to use.
104 Set the number of memory ranks (auto-detected by default).
108 Amount of memory to preallocate at startup.
110 * ``--in-memory``
112 Do not create any shared data structures and run entirely in memory. Implies
128 Use anonymous memory instead of hugepages (implies no secondary process
168 Specify maximum size of allocated memory for trace output for each thread.
nic_perf_intel_platform.rst:15 Ensure that each memory channel has at least one memory DIMM inserted, and that the memory size for…
18 You can check the memory configuration using ``dmidecode`` as follows::
20 dmidecode -t memory | grep Locator
42 You can also use ``dmidecode`` to determine the memory frequency::
44 dmidecode -t memory | grep Speed
64 This aligns with the previous output which showed that each channel has one memory bar.
/dpdk/doc/guides/prog_guide/
env_abstraction_layer.rst:90 The EAL provides an API to reserve named memory zones in this contiguous memory.
91 The physical address of the reserved memory for that memory zone is also returned to the user by th…
120 specified. This way, memory allocator will ensure that, whatever memory mode is
168 memory at startup, sort all memory into large IOVA-contiguous chunks, and will
205 much memory a DPDK application can have. DPDK memory is stored in segment lists,
218 memory type can address
227 memory! All DPDK processes preallocate virtual memory at startup. Hugepages
304 DPDK memory manager can provide file descriptors for memory segments,
342 allocated memory in DPDK. In this way, support for externally allocated memory
350 allocated memory for DMA is also performed on any memory segment that is added
[all …]
gpudev.rst:12 it is possible to allocate a chunk of GPU memory and use it
14 on the GPU memory, enabling any network interface card
16 to directly transmit and receive packets using GPU memory.
36 - Allocate and free memory on the device.
41 using CPU memory visible from the GPU.
63 returning the pointer to that memory.
65 GPU memory allocated outside of the gpudev library
73 CPU memory registered outside of the gpudev library
90 into the GPU memory.
109 Considering an application with a GPU memory mempool
[all …]
writing_efficient_code.rst:63 On a NUMA system, it is preferable to access local memory since remote memory access is slower.
73 Modern memory controllers have several memory channels that can load or store data in parallel.
74 Depending on the memory controller and its configuration,
81 Locking memory pages
129 When PCI devices write to system memory through DMA,
192 the C11 memory model and provide finer memory order control.
204 incremented atomically but do not need any particular memory ordering.
205 So, RELAXED memory ordering is sufficient.
210 Some use cases allow for memory reordering in one way while requiring memory
220 store operation can use RELEASE memory order.
[all …]
mempool_lib.rst:9 A memory pool is an allocator of a fixed-sized object.
46 The command line must always have the number of memory channels specified for the processor.
53 .. figure:: img/memory-management.*
65 .. figure:: img/memory-management2.*
77 In terms of CPU usage, the cost of multiple cores accessing a memory pool's ring of free buffers ma…
79 To avoid having too many access requests to the memory pool's ring,
80 the memory pool allocator can maintain a per-core cache and do bulk requests to the memory pool's r…
81 via the cache with many fewer locks on the actual memory pool structure.
112 This allows external memory subsystems, such as external hardware memory
113 management systems and software based memory allocators, to be used with DPDK.
[all …]
multi_proc_support.rst:16 each with different permissions on the hugepage memory used by the applications.
21 * secondary processes, which cannot initialize shared memory,
22 but can attach to pre-initialized shared memory and create objects in it.
26 after a primary process has already configured the hugepage shared memory for them.
55 the DPDK records to memory-mapped files the details of the memory configuration it is using - hugep…
58 in the secondary process so that all memory zones are shared between processes and all pointers to …
68 same switch specified. Otherwise, memory corruption may occur.
125 will attempt to preallocate all memory it can get to, and memory use must be
142 as the primary process whose shared memory they are connecting to.
277 memory must be freed by the requestor after request completes!
[all …]
rcu_lib.rst:13 In the following sections, the term "memory" refers to memory allocated
15 memory, for example an index of a free element array.
19 an element from a data structure, the writers cannot return the memory
21 referencing that element/memory anymore. Hence, it is required to
25 the data structure but does not return the associated memory to the
29 #. Free (Reclaim): in this step, the writer returns the memory to the
34 memory by making use of thread Quiescent State (QS).
71 Since memory is not freed immediately, there might be a need for
78 identifying the end of grace period and subsequent freeing of memory,
83 Polling introduces memory accesses and wastes CPU cycles. The memory
[all …]
vhost_lib.rst:12 * Access the guest memory:
69 not pre-fault the guest shared memory, otherwise migration would fail.
204 Frees the memory and vhost-user message handlers created in
339 Guest memory requirement
344 For non-async data path, guest memory pre-allocation is not a
345 must. This can help save memory. If users really want the guest memory
348 side which will force memory to be allocated when mmap at vhost side;
352 lib when mapping the guest memory; and also we need to lock the memory to
358 a QEMU version without shared memory mapping.
428 Vhost asynchronous data path leverages DMA devices to offload memory
[all …]
asan.rst:9 is a widely-used debugging tool to detect memory access errors.
35 ASan is aware of DPDK memory allocations, thanks to added instrumentation, and
52 … heap-buffer-overflow error if ASan is enabled, because it allocates 9 bytes of memory but accesses the ten…
64 - Some of the features of ASan (for example, 'Display memory application location, currently
81 The above code will result in a use-after-free error if ASan is enabled, because it allocates 9 bytes of memory
/dpdk/doc/guides/gpus/features/
cuda.ini:8 Share CPU memory with device = Y
9 Allocate device memory = Y
10 Free memory = Y
11 CPU map device memory = Y
12 CPU unmap device memory = Y
default.ini:11 Share CPU memory with device =
12 Allocate device memory =
13 Free memory =
14 CPU map device memory =
15 CPU unmap device memory =
/dpdk/doc/guides/gpus/
cuda.rst:32 CPU map GPU memory
77 CPU map GPU memory
135 GPU memory management
147 - Allocate memory on the GPU.
148 - Register CPU memory to make it visible from GPU.
160 using GPU memory instead of additional memory copies through the CPU system memory.
193 and is enhanced with GPU memory managed through gpudev library
200 - Allocate memory on GPU device using gpudev library.
201 - Use that memory to create an external GPU memory mempool.
202 - Receive packets directly in GPU memory.
[all …]
/dpdk/doc/guides/faq/
faq.rst:4 What does "EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory" mean?
23 These are then placed in memory segments to get contiguous memory.
34 To request memory to be reserved on a specific socket, please use the --socket-mem command-line par…
41 If your system has a lot (>1 GB size) of hugepage memory, not all of it will be allocated.
105 Without NUMA enabled, memory is allocated from both sockets, since memory is interleaved.
106 Therefore, each 64B chunk is interleaved across both memory domains.
109 If you allocated 256B, you would get memory that looks like this:
119 …descriptor rings are allocated from both memory domains, thus incurring QPI bandwidth accessing th…
154 The second role of IOMMU is to allow protection from unwanted memory access by an unsafe device tha…
174 …ata between DPDK processes and regular userspace processes via some shared memory or IPC mechanism?
[all …]
/dpdk/lib/table/
rte_table_hash_cuckoo.c:43 uint8_t memory[0] __rte_cache_aligned; member
174 existing_entry = &t->memory[pos * t->entry_size]; in rte_table_hash_cuckoo_entry_add()
189 new_entry = &t->memory[pos * t->entry_size]; in rte_table_hash_cuckoo_entry_add()
216 uint8_t *entry_ptr = &t->memory[pos * t->entry_size]; in rte_table_hash_cuckoo_entry_delete()
221 memset(&t->memory[pos * t->entry_size], 0, t->entry_size); in rte_table_hash_cuckoo_entry_delete()
263 entries[i] = &t->memory[positions[i] in rte_table_hash_cuckoo_lookup()
282 entries[i] = &t->memory[pos in rte_table_hash_cuckoo_lookup()
rte_table_acl.c:48 uint8_t memory[0] __rte_cache_aligned; member
107 acl->action_table = &acl->memory[0]; in rte_table_acl_create()
109 (struct rte_acl_rule **) &acl->memory[action_table_size]; in rte_table_acl_create()
111 &acl->memory[action_table_size + acl_rule_list_size]; in rte_table_acl_create()
286 *entry_ptr = &acl->memory[i * acl->entry_size]; in rte_table_acl_entry_add()
324 *entry_ptr = &acl->memory[free_pos * acl->entry_size]; in rte_table_acl_entry_add()
406 memcpy(entry, &acl->memory[pos * acl->entry_size], in rte_table_acl_entry_delete()
518 entries_ptr[i] = &acl->memory[j * acl->entry_size]; in rte_table_acl_entry_add_bulk()
586 entries_ptr[i] = &acl->memory[rule_pos[i] * acl->entry_size]; in rte_table_acl_entry_add_bulk()
706 memcpy(entries[i], &acl->memory[rule_pos[i] * acl->entry_size], in rte_table_acl_entry_delete_bulk()
[all …]
rte_swx_table_em.c:376 uint8_t *memory; in __table_create() local
423 memory = env_malloc(total_size, RTE_CACHE_LINE_SIZE, numa_node); in __table_create()
424 CHECK(memory, ENOMEM); in __table_create()
425 memset(memory, 0, total_size); in __table_create()
428 t = (struct table *)memory; in __table_create()
439 t->key_mask = &memory[key_mask_offset]; in __table_create()
440 t->buckets = (struct bucket_extension *)&memory[bucket_offset]; in __table_create()
442 t->keys = &memory[key_offset]; in __table_create()
443 t->key_stack = (uint32_t *)&memory[key_stack_offset]; in __table_create()
444 t->bkt_ext_stack = (uint32_t *)&memory[bkt_ext_stack_offset]; in __table_create()
[all …]
/dpdk/doc/guides/platform/
mlx5.rst:28 this driver only handles virtual memory addresses.
31 ensure that DPDK applications cannot access random physical memory
32 (or memory that does not belong to the current process).
285 while 1 is a regular memory mapping.
287 With regular memory mapping, the register is flushed to HW
425 For DMA memory pinning.
580 entire memory is freed.
599 A non-zero value enables the PMD memory management allocating memory
600 from system by default, without explicit rte memory flag.
626 cached memory, the PMD will not perform the extra write memory barrier after
[all …]
/dpdk/doc/guides/sample_app_ug/
multi_process.rst:34 two DPDK processes can work together using queues and memory pools to share information.
48 meaning they have control over the hugepage shared memory regions.
111 …of this example application is based on using two queues and a single memory pool in shared memory.
113 since the secondary process cannot create objects in memory as it cannot reserve memory zones,
124 Once the rings and memory pools are all available in both the primary and secondary processes,
127 and frees the buffer space used by the messages back to the memory pool.
194 once the hugepage shared memory and the network ports are initialized,
210 The structures for the initialized network ports are stored in shared memory and
221 Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:
289 …t the server process stores its port configuration data in a memory zone in hugepage shared memory.
[all …]
/dpdk/doc/guides/nics/
memif.rst:8 Shared memory packet interface (memif) PMD allows for DPDK and any other client
9 using memif (DPDK, VPP, libmemif) to communicate using shared memory. Memif is
17 existing socket. It is also a producer of shared memory file and initializes
18 the shared memory. Each interface can be connected to one peer interface
94 Shared memory
97 **Shared memory format**
99 Client is producer and server is consumer. Memory regions, are mapped shared memory files,
102 regions. For no-zero-copy, rings and buffers are stored inside single memory
156 Index of memory region, the buffer is located in.
164 Data start offset from memory region address. *.regions[desc->region].addr + desc->offset*
[all …]
/dpdk/doc/guides/testpmd_app_ug/
run_app.rst:95 Enable NUMA-aware allocation of RX/TX rings and of RX memory buffers
113 Set the socket from which all memory is allocated in NUMA mode,
173 Set Flow Director allocated memory size, where N is 64K, 128K or 256K.
323 Set the cache of mbuf memory pools to N, where 0 <= N <= 512.
388 sequentially from these extra memory pools.
489 Enable locking all memory.
493 Disable locking all memory.
499 * native: create and populate mempool using native DPDK memory
500 * anon: create mempool using native DPDK memory, but populate using
501 anonymous memory
[all …]
/dpdk/doc/guides/windows_gsg/
run_apps.rst:7 Grant *Lock pages in memory* Privilege
18 2. Open *Local Policies / User Rights Assignment / Lock pages in memory.*
27 .. _Large-Page Support: https://docs.microsoft.com/en-us/windows/win32/memory/large-page-support
49 It is mandatory for allocating physically-contiguous memory which is required
