Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5 |
# 445a25f6 | 19-Apr-2024 | Rahul Rameshbabu <[email protected]>
net/mlx5e: Support updating coalescing configuration without resetting channels
When the CQE mode or DIM state is changed, gracefully reconfigure the channels to handle the new configuration. Previously, the driver would create new channels that reflected the changes rather than update the original channels.
Co-developed-by: Nabil S. Alramli <[email protected]> Signed-off-by: Nabil S. Alramli <[email protected]> Co-developed-by: Joe Damato <[email protected]> Signed-off-by: Joe Damato <[email protected]> Signed-off-by: Rahul Rameshbabu <[email protected]> Signed-off-by: Tariq Toukan <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
Revision tags: v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1 |
# 31803e59 | 31-Mar-2020 | Saeed Mahameed <[email protected]>
net/mlx5: Use mlx5_cmd_do() in core create_{cq,dct}
mlx5_core_create_{cq/dct} are non-trivial mlx5 command functions. They check the command execution status themselves and hide valuable FW failure information.
For an mlx5_core/eth kernel user this is what we actually want, but for a devx/rdma user the hidden information is essential and should be propagated up to the caller. These commands are therefore converted to use mlx5_cmd_do to return the FW/driver and command outbox status as is, and to let the caller decide what to do with it.
Kernel callers of mlx5_core_create_{cq/dct}, and those who only care about the binary status (FAIL/SUCCESS), must check the status themselves via mlx5_cmd_check() to restore the current behavior:
    err = mlx5_create_cq(in, out)
    err = mlx5_cmd_check(err, in, out)
    if (err)
        // handle err
DEVX users and those who care about full visibility will just propagate the error to user space, and the app can check whether err == -EREMOTEIO, in which case outbox.{status,syndrome} are valid.
API Note: mlx5_cmd_check() must be used by kernel users, since it allows the driver to intercept the command execution status and return a driver-simulated status in case of driver-induced error handling or reset/recovery flows.
Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
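A minimal sketch of the two calling styles described above (illustrative only; function names here are hypothetical wrappers and the mlx5 signatures are simplified, not verbatim driver code):

    /* Kernel-style caller: only the binary result matters, so the raw
     * mlx5_cmd_do() status is funneled through mlx5_cmd_check() to get
     * the legacy -errno behavior back. */
    static int create_cq_kernel_style(struct mlx5_core_dev *dev,
                                      void *in, int inlen, void *out, int outlen)
    {
            int err;

            err = mlx5_cmd_do(dev, in, inlen, out, outlen);
            return mlx5_cmd_check(dev, err, in, out);
    }

    /* DEVX-style caller: propagate the raw status; when err == -EREMOTEIO
     * the command outbox status/syndrome are valid and can be inspected. */
    static int create_cq_devx_style(struct mlx5_core_dev *dev,
                                    void *in, int inlen, void *out, int outlen)
    {
            int err = mlx5_cmd_do(dev, in, inlen, out, outlen);

            if (err == -EREMOTEIO)
                    mlx5_core_dbg(dev, "FW status 0x%x syndrome 0x%x\n",
                                  MLX5_GET(mbox_out, out, status),
                                  MLX5_GET(mbox_out, out, syndrome));
            return err;
    }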
# 9205d7b1 | 26-Jun-2020 | Parav Pandit <[email protected]>
net/mlx5: Avoid RDMA file inclusion in core driver
mlx5 cq.h does not depend on RDMA verbs. Remove RDMA verbs file inclusion.
Signed-off-by: Parav Pandit <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
# d1f62050 | 09-Apr-2020 | Leon Romanovsky <[email protected]>
net/mlx5: Update cq.c to new cmd interface
Do mass update of cq.c to reuse newly introduced mlx5_cmd_exec_in*() interfaces.
Reviewed-by: Moshe Shemesh <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]>
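As an illustration of the converted style, destroying a CQ with the new interface looks roughly like this (a sketch assuming the mlx5_ifc destroy_cq layout, with dev and cq taken from the surrounding function; the real cq.c code may differ in detail):

    u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {};
    int err;

    MLX5_SET(destroy_cq_in, in, opcode, MLX5_CMD_OP_DESTROY_CQ);
    MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
    /* mlx5_cmd_exec_in() supplies a stack output buffer internally, so
     * callers that don't need the outbox only pass the inbox. */
    err = mlx5_cmd_exec_in(dev, destroy_cq, in);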
Revision tags: v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2 |
# 4e0e2ea1 | 30-Jun-2019 | Yishai Hadas <[email protected]>
net/mlx5: Report EQE data upon CQ completion
Report EQE data upon CQ completion to let upper layers use this data.
Signed-off-by: Yishai Hadas <[email protected]> Acked-by: Saeed Mahameed <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]>
# 38164b77 | 30-Jun-2019 | Yishai Hadas <[email protected]>
net/mlx5: mlx5_core_create_cq() enhancements
Enhance mlx5_core_create_cq() to get the command out buffer from the callers to let them use the output.
Signed-off-by: Yishai Hadas <[email protected]> Acked-by: Saeed Mahameed <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]>
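A hedged sketch of a caller using the extended interface (buffer sizes and field names follow mlx5_ifc; dev, cq, in and inlen are assumed from the surrounding context):

    u32 out[MLX5_ST_SZ_DW(create_cq_out)] = {};
    u32 cqn;
    int err;

    /* The caller now owns the command outbox and can read fields back
     * from it, e.g. the CQ number assigned by firmware. */
    err = mlx5_core_create_cq(dev, cq, in, inlen, out, sizeof(out));
    if (!err)
            cqn = MLX5_GET(create_cq_out, out, cqn);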
Revision tags: v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3 |
# bbf29f61 | 29-Mar-2019 | Maxim Mikityanskiy <[email protected]>
net/mlx5: Remove spinlock support from mlx5_write64
As there is no user of mlx5_write64 that passes a spinlock to mlx5_write64, remove this functionality and simplify the function.
Signed-off-by: Maxim Mikityanskiy <[email protected]> Reviewed-by: Eran Ben Elisha <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
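Roughly, the simplified helper ends up looking like this (a sketch of the intent; the exact guards and casts in the doorbell header may differ):

    static inline void mlx5_write64(__be32 val[2], void __iomem *dest)
    {
    #if BITS_PER_LONG == 64
            /* single 64-bit doorbell write, no lock needed */
            __raw_writeq(*(u64 *)val, dest);
    #else
            /* two 32-bit writes on 32-bit architectures */
            __raw_writel((__force u32)val[0], dest);
            __raw_writel((__force u32)val[1], dest + 4);
    #endif
    }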
Revision tags: v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2 |
# 939de57d | 05-Nov-2018 | Daniel Jurgens <[email protected]>
net/mlx5e: Use CQE padding for Ethernet CQs
Writing 64B CQEs to 128B cache lines results in an RMW operation. Padding the CQEs to 128B, if possible, improves performance on 128B cache line systems like PPC.
Testing on PPC showed up to a 24% improvement in small packet throughput vs the default behavior, depending on the workload and system topology.
Signed-off-by: Daniel Jurgens <[email protected]> Reviewed-by: Tariq Toukan <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
# 16d76083 | 19-Nov-2018 | Saeed Mahameed <[email protected]>
net/mlx5: EQ, Different EQ types
In mlx5 we have three types of usages for EQs:
1. Asynchronous EQs, used internally by mlx5 core for:
   a. FW command completions
   b. FW page requests
   c. one EQ for all other asynchronous events
2. Completion EQs, used for CQ completion (we create one per core)
3. *Special type of EQ (page fault) used for RDMA on demand paging (ODP).
*The 3rd type shouldn't be special, at least not in mlx5 core; it is yet another async events EQ with a specific use case. It will be removed in the next two patches, and its logic will move completely to mlx5_ib, as it is RDMA specific.
In this patch we move the use-case (EQ type) specific fields out of struct mlx5_eq into new EQ-type-specific structures:
struct mlx5_eq_async;
struct mlx5_eq_comp;
struct mlx5_eq_pagefault;
and separate their type-specific flows.
In the future we will allow users to create their own generic EQs; for now we will allow only one, for ODP, in the next patches.
We will introduce an event listener registration API for those who want to receive mlx5 async events. After that, mlx5 EQ handling will be clean of feature/user-specific handling.
Signed-off-by: Saeed Mahameed <[email protected]> Reviewed-by: Leon Romanovsky <[email protected]> Reviewed-by: Tariq Toukan <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]>
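A rough sketch of the split described above (field names are illustrative, not the exact driver definitions):

    /* Shared EQ machinery stays in struct mlx5_eq; type-specific state
     * moves to small wrapper structs around it. */
    struct mlx5_eq_comp {
            struct mlx5_eq          core;       /* common EQ state */
            struct mlx5_cq_table    cq_table;   /* CQs served by this EQ */
            struct list_head        list;       /* entry in the comp-EQ list */
    };

    struct mlx5_eq_async {
            struct mlx5_eq          core;       /* common EQ state */
            /* async-event dispatch state would live here */
    };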
Revision tags: v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5 |
# 9ba481e2 | 20-Sep-2018 | Yishai Hadas <[email protected]>
net/mlx5: Set uid as part of CQ commands
Set uid as part of CQ commands so that the firmware can manage the CQ object in a secured way.
The firmware should mark this CQ with the given uid so that it can be used later on only by objects with the same uid.
In DEVX flows that use this CQ (e.g. the create QP command), the pointed-to CQ must have the same uid as that of the command issuer.
When a command is issued with uid=0 it means that the issuer of the command is trusted (i.e. kernel), in that case any pointed object can be used regardless of its uid.
Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]>
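For illustration, stamping the uid on a create CQ command might look like this (macros and field names as in mlx5_ifc; context_uid is a hypothetical placeholder for the caller's uid):

    u32 in[MLX5_ST_SZ_DW(create_cq_in)] = {};

    MLX5_SET(create_cq_in, in, opcode, MLX5_CMD_OP_CREATE_CQ);
    /* uid == 0 means the issuer is trusted (kernel); otherwise FW will
     * only let objects with the same uid reference this CQ later. */
    MLX5_SET(create_cq_in, in, uid, context_uid);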
Revision tags: v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6 |
# 1acae6b0 | 31-Dec-2017 | Eran Ben Elisha <[email protected]>
mlx5: Move dump error CQE function out of mlx5_ib for code sharing
Move mlx5_ib dump error CQE implementation to mlx5 CQ header file in order to use it in a downstream patch from mlx5e.
In addition, use print_hex_dump instead of manual dumping of the buffer.
Signed-off-by: Eran Ben Elisha <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
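A sketch of the shared helper in the spirit described above (simplified; the real helper lives in the mlx5 CQ header):

    static inline void mlx5_dump_err_cqe(struct mlx5_core_dev *dev,
                                         struct mlx5_err_cqe *err_cqe)
    {
            /* hex-dump the whole error CQE instead of printing it field by field */
            print_hex_dump(KERN_WARNING, "", DUMP_PREFIX_OFFSET, 16, 1,
                           err_cqe, sizeof(*err_cqe), false);
    }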
# f105b45b | 01-Feb-2018 | Saeed Mahameed <[email protected]>
net/mlx5: CQ hold/put API
Now as the CQ table is per EQ, add an API to hold/put CQ to be used from eq.c in downstream patch.
Signed-off-by: Saeed Mahameed <[email protected]> Reviewed-by: Gal Pressman <[email protected]>
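The hold/put pair is essentially a thin wrapper around the CQ reference count, along these lines (a simplified sketch):

    static inline void mlx5_cq_hold(struct mlx5_core_cq *cq)
    {
            refcount_inc(&cq->refcount);
    }

    static inline void mlx5_cq_put(struct mlx5_core_cq *cq)
    {
            /* Last reference gone: wake up whoever is waiting in destroy. */
            if (refcount_dec_and_test(&cq->refcount))
                    complete(&cq->free);
    }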
# 02d92f79 | 20-Jan-2018 | Saeed Mahameed <[email protected]>
net/mlx5: CQ Database per EQ
Before this patch the driver had one CQ database protected by a single spinlock; this spinlock is meant to synchronize between CQ add/remove and CQ IRQ interrupt handling.
On a system with a large number of CPUs, and on a workload that generates lots of interrupts, this global spinlock becomes a very nasty hotspot and introduces contention between the active cores, which significantly hurts performance and becomes a bottleneck that prevents seamless CPU scaling.
To solve this we simply move the CQ database and its spinlock to be per EQ (IRQ), thus per core.
Tested with:
system: 2 sockets, 14 cores per socket, hyperthreading, 2x14x2=56 cores
netperf command: ./super_netperf 200 -P 0 -t TCP_RR -H <server> -l 30 -- -r 300,300 -o -s 1M,1M -S 1M,1M

WITHOUT THIS PATCH:
Average: CPU  %usr  %nice  %sys   %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
Average: all  4.32  0.00   36.15  0.09     0.00  34.02  0.00    0.00    0.00    25.41

Samples: 2M of event 'cycles:pp', Event count (approx.): 1554616897271
Overhead  Command    Shared Object     Symbol
+ 14.28%  swapper    [kernel.vmlinux]  [k] intel_idle
+ 12.25%  swapper    [kernel.vmlinux]  [k] queued_spin_lock_slowpath
+ 10.29%  netserver  [kernel.vmlinux]  [k] queued_spin_lock_slowpath
+  1.32%  netserver  [kernel.vmlinux]  [k] mlx5e_xmit

WITH THIS PATCH:
Average: CPU  %usr  %nice  %sys   %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
Average: all  4.27  0.00   34.31  0.01     0.00  18.71  0.00    0.00    0.00    42.69

Samples: 2M of event 'cycles:pp', Event count (approx.): 1498132937483
Overhead  Command    Shared Object     Symbol
+ 23.33%  swapper    [kernel.vmlinux]  [k] intel_idle
+  1.69%  netserver  [kernel.vmlinux]  [k] mlx5e_xmit
Tested-by: Song Liu <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]> Reviewed-by: Gal Pressman <[email protected]>
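The per-EQ database described above is essentially a small radix tree plus its own lock, something like this (a sketch, not the exact driver definition):

    struct mlx5_cq_table {
            spinlock_t              lock;   /* protects the radix tree */
            struct radix_tree_root  tree;   /* cqn -> struct mlx5_core_cq */
    };

    /* One table per EQ (i.e. per IRQ/core) instead of one global table,
     * so CQ add/remove and interrupt handling no longer contend globally. */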
Revision tags: v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1 |
# b0e9df6d | 13-Nov-2017 | Yonatan Cohen <[email protected]>
IB/mlx5: Exposing modify CQ callback to uverbs layer
Exposed mlx5_ib_modify_cq to be called from ib device verb list.
Signed-off-by: Yonatan Cohen <[email protected]> Reviewed-by: Majd Dibbiny <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
Revision tags: v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6 |
# 7a0c8f42 | 19-Oct-2017 | Guy Levi <[email protected]>
IB/mlx5: Support padded 128B CQE feature
In some benchmarks and on some CPU architectures, writing the CQE on a full cache line size improves performance by saving memory access operations (read-modify-write) relative to a partial cache line change. This patch lets the user configure the device to pad the CQE up to 128B in case its content is less than 128B. Currently the driver supports padding only for a CQE size of 128B.
Signed-off-by: Guy Levi <[email protected]> Reviewed-by: Mark Bloch <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
# a4b51a9f | 20-Oct-2017 | Elena Reshetova <[email protected]>
drivers, net, mlx5: convert mlx5_cq.refcount from atomic_t to refcount_t
atomic_t variables are currently used to implement reference counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further increments aren't allowed
- counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.)
Such atomic variables should be converted to a newly provided refcount_t type and API that prevents accidental counter overflows and underflows. This is important since overflows and underflows can lead to use-after-free situation and be exploitable.
The variable mlx5_cq.refcount is used as pure reference counter. Convert it to refcount_t and fix up the operations.
Suggested-by: Kees Cook <[email protected]> Reviewed-by: David Windsor <[email protected]> Reviewed-by: Hans Liljestrand <[email protected]> Signed-off-by: Elena Reshetova <[email protected]> Signed-off-by: David S. Miller <[email protected]>
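The conversion boils down to a mechanical mapping of the atomic calls, roughly (illustrative):

    /* before */
    atomic_set(&cq->refcount, 1);
    atomic_inc(&cq->refcount);
    if (atomic_dec_and_test(&cq->refcount))
            complete(&cq->free);

    /* after: refcount_t warns on overflow/underflow instead of silently wrapping */
    refcount_set(&cq->refcount, 1);
    refcount_inc(&cq->refcount);
    if (refcount_dec_and_test(&cq->refcount))
            complete(&cq->free);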
Revision tags: v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6, v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8, v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3 |
# 30aa60b3 | 03-Jan-2017 | Eli Cohen <[email protected]>
IB/mlx5: Support 4k UAR for libmlx5
Add fields to structs to convey to the kernel an indication of whether the library supports multiple UARs per page, and return to the library the size of a UAR based on the queried value.
Signed-off-by: Eli Cohen <[email protected]> Reviewed-by: Matan Barak <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
# 5fe9dec0 | 03-Jan-2017 | Eli Cohen <[email protected]>
IB/mlx5: Use blue flame register allocator in mlx5_ib
Make use of the blue flame registers allocator in mlx5_ib. Since blue flame was not really supported, we remove all the code that is related to blue flame and let all consumers use the same blue flame register. Once blue flame is supported we will add the code. As part of this patch we also move the definition of struct mlx5_bf to mlx5_ib.h, as it is only used by mlx5_ib.
Signed-off-by: Eli Cohen <[email protected]> Reviewed-by: Matan Barak <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]>
Revision tags: v4.10-rc2, v4.10-rc1, v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7, v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7 |
# 27827786 | 15-Jul-2016 | Saeed Mahameed <[email protected]>
{net,IB}/mlx5: CQ commands via mlx5 ifc
Remove old representation of manually created CQ commands layout, and use mlx5_ifc canonical structures and defines.
Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]>
Revision tags: v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4 |
# 89ea94a7 | 17-Jun-2016 | Maor Gottlieb <[email protected]>
IB/mlx5: Reset flow support for IB kernel ULPs
The driver exposes interfaces that directly relate to HW state. Upon fatal error, consumers of these interfaces (ULPs) that rely on completion of all their posted work-requests could hang, thereby introducing dependencies in shutdown order. To prevent this from happening, we manage the relevant resources (CQs, QPs) that are used by the device. Upon a fatal error, we now generate simulated completions for outstanding WQEs that were not completed at the time the HW was reset.
It includes invoking the completion event handler for all involved CQs so that the ULPs will poll those CQs. When polled we return simulated CQEs with IB_WC_WR_FLUSH_ERR return code enabling ULPs to clean up their resources and not wait forever for completions upon receiving remove_one.
The above change requires an extra check in the data path to make sure that when device is in error state, the simulated CQEs will be returned and no further WQEs will be posted.
Signed-off-by: Maor Gottlieb <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
Revision tags: v4.7-rc3, v4.7-rc2, v4.7-rc1, v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4 |
# 94c6825e | 17-Apr-2016 | Matan Barak <[email protected]>
net/mlx5_core: Use tasklet for user-space CQ completion events
Previously, we've fired all our completion callbacks straight from our ISR.
Some of those callbacks were lightweight (for example, mlx5 Ethernet napi callbacks), but some of them did more work (for example, the user-space RDMA stack uverbs' completion handler). Besides that, doing more than the minimal work in an ISR is generally considered wrong; it could even lead to a hard lockup of the system, since when a lot of completion events are generated by the hardware, the loop over those events could become so long that the system watchdog detects a hard lockup.
In order to avoid that, add a new way of invoking completion events callbacks. In the interrupt itself, we add the CQs which receive completion event to a per-EQ list and schedule a tasklet. In the tasklet context we loop over all the CQs in the list and invoke the user callback.
Signed-off-by: Matan Barak <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
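A simplified sketch of the deferral scheme described above (struct, function, and field names are illustrative, not the exact driver code):

    struct mlx5_eq_tasklet {
            struct list_head        list;   /* CQs with pending completion events */
            spinlock_t              lock;
            struct tasklet_struct   task;
    };

    /* ISR path: remember the CQ and defer the (possibly heavy) callback. */
    static void eq_add_cq_to_tasklet(struct mlx5_eq_tasklet *ctx,
                                     struct mlx5_core_cq *cq)
    {
            unsigned long flags;

            spin_lock_irqsave(&ctx->lock, flags);
            list_add_tail(&cq->tasklet_ctx.list, &ctx->list);
            spin_unlock_irqrestore(&ctx->lock, flags);
            tasklet_schedule(&ctx->task);
    }

    /* Tasklet context: safe to run the slower completion callbacks here. */
    static void cq_tasklet_cb(unsigned long data)
    {
            struct mlx5_eq_tasklet *ctx = (struct mlx5_eq_tasklet *)data;
            struct mlx5_core_cq *cq, *tmp;
            LIST_HEAD(process_list);
            unsigned long flags;

            /* Detach the pending CQs under the lock, then invoke callbacks. */
            spin_lock_irqsave(&ctx->lock, flags);
            list_splice_init(&ctx->list, &process_list);
            spin_unlock_irqrestore(&ctx->lock, flags);

            list_for_each_entry_safe(cq, tmp, &process_list, tasklet_ctx.list) {
                    list_del_init(&cq->tasklet_ctx.list);
                    cq->tasklet_ctx.comp(cq);
            }
    }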
Revision tags: v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5, v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1 |
# 0b6e26ce | 17-Jan-2016 | Doron Tsur <[email protected]>
net/mlx5_core: Fix trimming down IRQ number
With several ConnectX-4 cards installed on a server, one may receive irqn > 255 from the kernel API, which we mistakenly trim to 8bit.
This causes EQ creation failure with the following stack trace:
[<ffffffff812a11f4>] dump_stack+0x48/0x64
[<ffffffff810ace21>] __setup_irq+0x3a1/0x4f0
[<ffffffff810ad7e0>] request_threaded_irq+0x120/0x180
[<ffffffffa0923660>] ? mlx5_eq_int+0x450/0x450 [mlx5_core]
[<ffffffffa0922f64>] mlx5_create_map_eq+0x1e4/0x2b0 [mlx5_core]
[<ffffffffa091de01>] alloc_comp_eqs+0xb1/0x180 [mlx5_core]
[<ffffffffa091ea99>] mlx5_dev_init+0x5e9/0x6e0 [mlx5_core]
[<ffffffffa091ec29>] init_one+0x99/0x1c0 [mlx5_core]
[<ffffffff812e2afc>] local_pci_probe+0x4c/0xa0
Fix it by changing the irqn type from u8 to unsigned int to support values > 255.
Fixes: 61d0e73e0a5a ('net/mlx5_core: Use the the real irqn in eq->irqn') Reported-by: Jiri Pirko <[email protected]> Signed-off-by: Doron Tsur <[email protected]> Signed-off-by: Matan Barak <[email protected]> Signed-off-by: David S. Miller <[email protected]>
Revision tags: v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5, v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6 |
# 90b3e38d | 28-May-2015 | Rana Shahout <[email protected]>
net/mlx5_core: Modify CQ moderation parameters
Introduce mlx5_core_modify_cq_moderation() to be used by the netdev, to set hardware coalescing.
Signed-off-by: Rana Shahout <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: Amir Vadai <[email protected]> Signed-off-by: David S. Miller <[email protected]>
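A hedged usage sketch (parameter order assumed to be period in usecs then max frame count; mdev, cq and netdev are hypothetical context variables, and the values are arbitrary examples):

    /* Ask HW to fire a completion event after at most 8 usecs or
     * 64 completions on this CQ, whichever comes first. */
    err = mlx5_core_modify_cq_moderation(mdev, &cq->mcq, 8, 64);
    if (err)
            netdev_err(netdev, "failed to set CQ moderation\n");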
Revision tags: v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2, v4.1-rc1, v4.0, v4.0-rc7 |
# ce0f7509 | 02-Apr-2015 | Saeed Mahameed <[email protected]>
net/mlx5_core: Modify arm CQ in preparation for upcoming Ethernet driver
Pass consumer index as a parameter to arm CQ
Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: Eli Cohen <[email protected]> Signed-off-by: David S. Miller <[email protected]>
# 302bdf68 | 02-Apr-2015 | Saeed Mahameed <[email protected]>
net/mlx5_core: Fix Mellanox copyright note
Signed-off-by: Achiad Shochat <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: Eli Cohen <[email protected]> Signed-off-by: David S. Miller <[email protected]>