| 64fcadea | 03-May-2022 |
Thomas Monjalon <[email protected]> |
avoid AltiVec keyword vector
The AltiVec header file defines "vector", except in C++ builds. The keyword "vector" can easily conflict. As a rule, it is better to use the alternative keyword "__vector", so that we will be able to #undef vector after including the AltiVec header.
Later it may become possible to #undef vector in rte_altivec.h, at the cost of a compatibility breakage.
Signed-off-by: Thomas Monjalon <[email protected]> Reviewed-by: David Christensen <[email protected]>
|
| 36edf3cc | 20-Jul-2021 |
Rakesh Kudurumalla <[email protected]> |
test: avoid hang if queues are full and Tx fails
Current pmd_perf_autotest() in continuous mode tries to enqueue MAX_TRAFFIC_BURST completely before starting the test. Some drivers cannot accept the complete MAX_TRAFFIC_BURST even though the rx+tx descriptor count can fit it. This patch changes the behaviour to stop enqueuing after a few retries.
Fixes: 002ade70e933 ("app/test: measure cycles per packet in Rx/Tx") Cc: [email protected]
Signed-off-by: Rakesh Kudurumalla <[email protected]>
|
| 0354e8e8 | 29-Apr-2022 |
Elena Agostini <[email protected]> |
gpu/cuda: unmap GPU memory while freeing
Enable the GPU_REGISTERED flag in the gpu/cuda driver's memory list. If a CPU-mapped GPU memory address is freed before being unmapped, the CUDA driver now unmaps it before freeing the memory.
Signed-off-by: Elena Agostini <[email protected]>
|
| 2f51bc9c | 20-May-2022 |
David Marchand <[email protected]> |
eal/freebsd: fix use of newer cpuset macros
FreeBSD has updated its CPU macros to align more with the definitions used on Linux[1]. Unfortunately, while this improves compatibility in the future, it means we need to support both the legacy and the newer definitions. Use a meson check to determine which set of macros is used.
[1] https://cgit.freebsd.org/src/commit/?id=e2650af157bc
Bugzilla ID: 1014 Fixes: c3568ea37670 ("eal: restrict control threads to startup CPU affinity") Fixes: b6be16acfeb1 ("eal: fix control thread affinity with --lcores") Cc: [email protected]
Signed-off-by: David Marchand <[email protected]> Signed-off-by: Bruce Richardson <[email protected]> Tested-by: Daxue Gao <[email protected]>
|
| 981a0257 | 11-May-2022 |
Stanislaw Kardach <[email protected]> |
test/ring: remove excessive inlining
Forcing inlining in test_ring_enqueue and test_ring_dequeue can cause the compiled code to grow extensively when compiled with no optimization (-O0 or -Og), which is the default in meson's debug configuration. This can collide with compiler bugs and cause issues during linking of unit tests, where api_type and esize are non-const variables, causing an inlining cascade. This is not the case in perf tests, as esize and api_type are const values there.
One such case was discovered when porting DPDK to RISC-V. GCC 11.2 (still unfixed in 12.1) generates a short relative jump instruction (J <offset>) for goto and for loops. When the loop body grows extensively in the ring test, the target offset goes beyond the supported offset of +/- 1MB from PC. This is an obvious bug in GCC, as RISC-V has a two-instruction construct to jump to any absolute address (AUIPC+JALR).
However there is no reason to force inlining as the test code works perfectly fine without it.
GCC has a bug report for a similar case (with conditionals): https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93062
Fixes: a9fe152363e2 ("test/ring: add custom element size functional tests")
Signed-off-by: Stanislaw Kardach <[email protected]> Acked-by: Bruce Richardson <[email protected]> Reviewed-by: Honnappa Nagarahalli <[email protected]> Acked-by: Konstantin Ananyev <[email protected]>
|
| a137eb2b | 11-May-2022 |
Stanislaw Kardach <[email protected]> |
examples/l3fwd: fix scalar LPM
The lpm_process_event_pkt() function can process a packet using either an architecture-specific path (defined for X86/SSE, ARM/Neon and PPC64/Altivec) or a scalar one. The choice, however, is made using an #ifdef pre-processor macro, so the scalar version was apparently not widely exercised/compiled. Due to some copy/paste errors, the scalar logic in lpm_process_event_pkt() retained a "continue" statement where it should call rfc1812_process() and return the port/BAD_PORT.
Fixes: 99fc91d18082 ("examples/l3fwd: add event lpm main loop")
Signed-off-by: Stanislaw Kardach <[email protected]> Reviewed-by: David Marchand <[email protected]>
|
| 26d734b5 | 14-Apr-2022 |
David Marchand <[email protected]> |
devargs: fix leak on hotplug failure
Caught by ASan: if a secondary process tried to attach a device with an incorrect driver name, the devargs were leaked.
Fixes: 64051bb1f144 ("devargs: unify scratch buffer storage") Cc: [email protected]
Signed-off-by: David Marchand <[email protected]>
|
| 00901e4d | 25-Feb-2022 |
Luc Pelletier <[email protected]> |
eal/x86: fix unaligned access for small memcpy
Calls to rte_memcpy for 1 < n < 16 could result in unaligned loads/stores and strict-aliasing violations, which are undefined behaviour according to the C standard.
The code was changed to use a packed structure that allows aliasing (using the __may_alias__ attribute) to perform the load/store operations. This results in code that has the same performance as the original code and that is also C standards-compliant.
Fixes: af75078fece3 ("first public release") Cc: [email protected]
Signed-off-by: Luc Pelletier <[email protected]> Acked-by: Konstantin Ananyev <[email protected]> Tested-by: Konstantin Ananyev <[email protected]>
|
| 04e53de9 | 12-May-2022 |
Tyler Retzlaff <[email protected]> |
test/threads: add unit test
Establish a unit test for the thread API, with initial unit tests for rte_thread_{get,set}_affinity_by_id().
Signed-off-by: Narcisa Vasile <[email protected]> Signed-off-by: Tyler Retzlaff <[email protected]>
|
| b70a9b78 | 12-May-2022 |
Tyler Retzlaff <[email protected]> |
eal: get/set thread affinity per thread identifier
Implement functions for getting/setting thread affinity. Threads can be pinned to specific cores by setting their affinity attribute.
Windows error codes are translated to errno-style error codes. The possible return values are chosen so that we have as much semantic compatibility between platforms as possible.
Note: convert_cpuset_to_affinity has the limitation that all CPUs of the set must belong to the same processor group.
Signed-off-by: Narcisa Vasile <[email protected]> Signed-off-by: Tyler Retzlaff <[email protected]> Acked-by: Dmitry Kozlyuk <[email protected]>
|
| 56539289 | 12-May-2022 |
Tyler Retzlaff <[email protected]> |
eal: provide current thread identifier
Provide a portable, type-safe thread identifier, and rte_thread_self() for obtaining the current thread identifier.
Signed-off-by: Narcisa Vasile <[email protected]> Signed-off-by: Tyler Retzlaff <[email protected]> Acked-by: Dmitry Kozlyuk <[email protected]> Acked-by: Konstantin Ananyev <[email protected]>
|
| f80ae1aa | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
hash: unify CRC32 selection for x86 and Arm
Merge the CRC32 hash calculation public API implementations for x86 and Arm. Select the best available CRC32 algorithm when an algorithm unsupported on the given CPU architecture is requested by an application.
Previously, if an application directly included `rte_crc_arm64.h` without including `rte_hash_crc.h`, it would fail to compile.
Signed-off-by: Pavan Nikhilesh <[email protected]> Reviewed-by: Ruifeng Wang <[email protected]>
|
| 3011c5a4 | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
hash: split x86 and SW hash CRC intrinsics
Split x86 and SW hash crc intrinsics into separate files.
Signed-off-by: Pavan Nikhilesh <[email protected]> Reviewed-by: Ruifeng Wang <[email protected]> Acked-by: Yipeng Wang <[email protected]>
|
| 68c05095 | 16-May-2022 |
Shijith Thotton <[email protected]> |
event/cnxk: flush event queues over multiple pass
If an event queue flush does not complete after a fixed number of tries, remaining queues are flushed before retrying the one with incomplete flush.
Signed-off-by: Shijith Thotton <[email protected]>
|
| 7da7925f | 16-May-2022 |
Shijith Thotton <[email protected]> |
event/cnxk: support setting queue attributes at runtime
Added API to set queue attributes at runtime and API to get weight and affinity.
Signed-off-by: Shijith Thotton <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| be541d37 | 16-May-2022 |
Pavan Nikhilesh <[email protected]> |
common/cnxk: lock when accessing mbox of SSO
Since the mailbox is now accessed from multiple threads, use a lock to synchronize access.
Signed-off-by: Pavan Nikhilesh <[email protected]> Signed-off-by: Shijith Thotton <[email protected]>
|
| deb450c4 | 16-May-2022 |
Shijith Thotton <[email protected]> |
test/event: set queue attributes at runtime
Added test cases for changing the queue QoS attributes priority, weight and affinity at runtime.
Signed-off-by: Shijith Thotton <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| 44516e6b | 16-May-2022 |
Shijith Thotton <[email protected]> |
eventdev: add weight and affinity to queue attributes
Extended the eventdev queue QoS attributes to support weight and affinity. If queues are of the same priority, events from the queue with the highest weight will be scheduled first. Affinity indicates the number of times subsequent schedule calls from an event port will use the same event queue. A schedule call selects another queue if the current queue goes empty or the schedule count reaches the affinity count.
To avoid an ABI break, the weight and affinity attributes are not yet added to the queue config structure; PMDs manage them instead. The new eventdev op queue_attr_get can be used to get them from the PMD.
Signed-off-by: Shijith Thotton <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| 97b914f4 | 16-May-2022 |
Shijith Thotton <[email protected]> |
eventdev: support setting queue attributes at runtime
Added a new eventdev API, rte_event_queue_attr_set(), to change event queue attributes at runtime from the values set during initialization using rte_event_queue_setup(). PMDs supporting this feature should expose the capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
Signed-off-by: Shijith Thotton <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| e8594de2 | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
event/cnxk: implement event port quiesce function
Implement event port quiesce function to clean up any lcore resources used.
Signed-off-by: Pavan Nikhilesh <[email protected]> |
| aae4f5e0 | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
examples: use event port quiescing
Quiesce event ports used by the worker cores on exit to free up any outstanding resources.
Signed-off-by: Pavan Nikhilesh <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| 7da008df | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
app/eventdev: use port quiescing
Quiesce event ports used by the worker cores on exit to free up any outstanding resources.
Signed-off-by: Pavan Nikhilesh <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| 1ff23ce6 | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
eventdev: quiesce an event port
Add a function to quiesce any core-specific resources consumed by the event port.
When the application decides to migrate the event port to another lcore, or to tear down the current lcore, it may call `rte_event_port_quiesce` to make sure that all the data associated with the event port is released from the lcore; this might also include any prefetched events.
While releasing the event port from the lcore, this function calls the user-provided flush callback once per event.
Signed-off-by: Pavan Nikhilesh <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| 22bfcba4 | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
examples/ipsec-secgw: cleanup worker state before exit
Event ports are configured to implicitly release the scheduler contexts currently held in the next call to rte_event_dequeue_burst(). A worker core might still hold a scheduling context during exit, as the next call to rte_event_dequeue_burst() is never made. This might lead to a deadlock, depending on the worker exit timing, especially when there are very few flows.
Add a cleanup function to release any scheduling contexts held by the worker by using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|
| 622ebb6b | 13-May-2022 |
Pavan Nikhilesh <[email protected]> |
examples/l2fwd-event: clean up worker state before exit
Event ports are configured to implicitly release the scheduler contexts currently held in the next call to rte_event_dequeue_burst(). A worker core might still hold a scheduling context during exit, as the next call to rte_event_dequeue_burst() is never made. This might lead to a deadlock, depending on the worker exit timing, especially when there are very few flows.
Add a cleanup function to release any scheduling contexts held by the worker by using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <[email protected]> Acked-by: Jerin Jacob <[email protected]>
|