415 Commits

Author SHA1 Message Date
Bing Zhao
37cd4501e8 net/mlx5: support two ports hairpin mode
In order to support hairpin between two ports, the mlx5 PMD needs to
implement the corresponding functions and provide them as function
pointers.

The bind and unbind functions are executed per port pair. All the
hairpin queues between the two ports should have the same attributes
during queue setup. Different configurations among queue pairs of the
same ports are not supported. Two ports are allowed to have hairpin
in only one direction.

In order to set up the connection between two queues, the peer Rx
queue HW information must be fetched via the internal RTE API, and
this information is used to modify the SQ object. Then the RQ object
is modified with the Tx queue HW information. The reverse operation
is not supported right now.

When disconnecting a queue pair, the SQ and RQ objects should be reset
without any peer HW information. The unbinding operation tries to
disconnect all Tx queues of the port from the Rx queues of the peer
port.

The Tx explicit mode attribute will be saved and used when creating a
hairpin flow.
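
For reference, a minimal application-side sketch (not part of this patch;
port and queue numbers are illustrative assumptions) of how the two-port
hairpin is driven through the ethdev API:

    #include <rte_ethdev.h>

    /* Bind the Tx hairpin queue 1 of port 0 to the Rx hairpin queue 1 of
     * port 1 (and the reverse direction), then unbind port 0 -> port 1.
     * manual_bind and tx_explicit are the attributes mentioned above. */
    static int
    hairpin_two_ports_example(void)
    {
        struct rte_eth_hairpin_conf hconf = {
            .peer_count = 1,
            .manual_bind = 1,
            .tx_explicit = 1,
            .peers[0] = { .port = 1, .queue = 1 },
        };
        int ret;

        ret = rte_eth_tx_hairpin_queue_setup(0, 1, 0, &hconf);
        if (ret != 0)
            return ret;
        /* ... symmetric rte_eth_rx_hairpin_queue_setup() on port 1,
         * then rte_eth_dev_start() on both ports ... */

        ret = rte_eth_hairpin_bind(0, 1); /* Tx of port 0 -> Rx of port 1 */
        if (ret == 0)
            ret = rte_eth_hairpin_bind(1, 0);
        if (ret != 0)
            return ret;

        /* Unbind disconnects all Tx queues of port 0 from peer port 1. */
        return rte_eth_hairpin_unbind(0, 1);
    }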

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:03 +01:00
Viacheslav Ovsiienko
a0a45e8af7 net/mlx5: configure Rx queue for buffer split
The scatter-gather elements should be configured to support the
buffer split feature. The application provides the desired settings
for the segments at the beginning of the packets, and the PMD pads
the buffer chain (if needed) with the attributes of the last
specified segment to accommodate a packet of maximal length.

Some limitations are implied. The MPRQ feature should be disengaged
if split is requested, because MPRQ neither supports pushing data to
dedicated pools nor follows flexible buffer sizes. The vectorized
rx_burst routines do not support scattering (they are extremely
simplified and work over a single segment only) and cannot handle
buffer split either.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:35:02 +01:00
Viacheslav Ovsiienko
9f209b59c8 net/mlx5: support Rx buffer split description
A routine is added to provide Rx queue setup with an extended
receiving buffer description. It allows the application to specify
the desired segment lengths, data position offsets in the buffer,
and a dedicated memory pool for each segment.
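
For reference, a hedged application-side sketch of the ethdev buffer split
description this routine consumes (pool names, descriptor count and the
64-byte header length are illustrative assumptions):

    #include <rte_ethdev.h>

    /* Put the first 64 bytes of every packet into hdr_mp and the rest
     * into data_mp; the last segment with length 0 takes the pool's
     * default buffer size. */
    static int
    setup_split_rxq(uint16_t port, uint16_t queue, uint16_t nb_desc,
                    struct rte_mempool *hdr_mp, struct rte_mempool *data_mp)
    {
        union rte_eth_rxseg seg[2] = {
            { .split = { .mp = hdr_mp,  .length = 64, .offset = 0 } },
            { .split = { .mp = data_mp, .length = 0,  .offset = 0 } },
        };
        struct rte_eth_rxconf rxconf = {
            .offloads = DEV_RX_OFFLOAD_BUFFER_SPLIT,
            .rx_seg = seg,
            .rx_nseg = 2,
        };

        /* No single mempool argument: the per-segment pools are used. */
        return rte_eth_rx_queue_setup(port, queue, nb_desc,
                                      rte_eth_dev_socket_id(port),
                                      &rxconf, NULL);
    }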

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:35:02 +01:00
Gregory Etelson
4ec6360de3 net/mlx5: implement tunnel offload
The Tunnel Offload API provides a hardware-independent, unified model
to offload tunneled traffic. Key model elements are:
 - apply matches to both outer and inner packet headers
   during the entire offload procedure;
 - restore the outer header of a partially offloaded packet;
 - the model is implemented as a set of helper functions.

Implementation details:
* the tunnel_offload PMD parameter must be set to 1 to enable the feature.
* the application cannot use MARK and META flow actions with tunnels.
* the JUMP action is restricted to the tunnel steering rule only.
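
For reference, a hedged sketch of the application-side helper calls (the
port id and VXLAN tunnel type are illustrative assumptions; error handling
is trimmed):

    #include <rte_flow.h>

    /* Obtain the PMD-provided actions/items for a VXLAN tunnel, use them
     * in the steering (JUMP) and tunnel-match rules, then release them. */
    static int
    tunnel_offload_sketch(uint16_t port)
    {
        struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
        struct rte_flow_action *pmd_actions;
        struct rte_flow_item *pmd_items;
        uint32_t nb_actions, nb_items;
        struct rte_flow_error err;

        /* Actions to prepend to the tunnel steering rule (the one that
         * ends with the JUMP action). */
        if (rte_flow_tunnel_decap_set(port, &tunnel, &pmd_actions,
                                      &nb_actions, &err))
            return -1;
        /* Items to prepend to the rules matching inner headers in the
         * target group. */
        if (rte_flow_tunnel_match(port, &tunnel, &pmd_items,
                                  &nb_items, &err))
            return -1;

        /* ... create the steering and tunnel-match flow rules here ... */

        rte_flow_tunnel_action_decap_release(port, pmd_actions,
                                             nb_actions, &err);
        rte_flow_tunnel_item_release(port, pmd_items, nb_items, &err);
        return 0;
    }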

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:02 +01:00
Andrey Vesnovaty
d7cfcddded net/mlx5: translate shared action for RSS action
Handle shared actions on flow validation/creation/destruction.
The mlx5 PMD translates a shared action into a regular one before
handling flow validation/creation. The shared action translation is
applied to utilize the same execution path for both shared and
regular actions. The current implementation supports shared action
translation for the shared RSS action only.

The RSS action validation is split so that a shared RSS action is
validated on its creation, in addition to the action validation in
the flow validation/creation path.

Implement rte_flow shared action API for mlx5 PMD, mostly forwarding
calls to flow driver operations (see struct mlx5_flow_driver_ops).
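
For reference, a hedged sketch of how an application exercises this API
(queue list and RSS hash types are illustrative assumptions):

    #include <rte_common.h>
    #include <rte_flow.h>

    /* Create one shared RSS action; flows then reference it with an
     * action of type RTE_FLOW_ACTION_TYPE_SHARED whose conf points to
     * the returned handle. */
    static struct rte_flow_shared_action *
    create_shared_rss(uint16_t port)
    {
        static const uint16_t queues[] = { 0, 1, 2, 3 };
        const struct rte_flow_action_rss rss = {
            .types = ETH_RSS_IP,
            .queue_num = RTE_DIM(queues),
            .queue = queues,
        };
        const struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_RSS,
            .conf = &rss,
        };
        const struct rte_flow_shared_action_conf conf = { .ingress = 1 };
        struct rte_flow_error err;

        return rte_flow_shared_action_create(port, &conf, &action, &err);
    }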

Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:02 +01:00
Andrey Vesnovaty
b8cc58c140 net/mlx5: modify hash Rx queue objects
Implement modification of the hash Rx queue object (see
mlx5_hrxq_modify()). This implementation relies on the capability to
modify the TIR object via the DevX API, i.e. the current
implementation doesn't support Verbs HW object operations. The
functionality to modify the hash Rx queue object is a prerequisite to
implement rte_flow_shared_action_update() for the shared RSS action
in the mlx5 PMD.

Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:02 +01:00
Xueming Li
16dbba257c net/mlx5: fix port shared data reference count
When probing a representor, the tag cache hash table and the
modification cache hash table were allocated for each port,
overwriting the previously existing cache in the shared context data.

This patch moves the reference check of the shared data prior to the
hash table allocation to avoid this issue.

Fixes: 6801116688fe ("net/mlx5: fix multiple flow table hash list")
Fixes: 1ef4cdef2682 ("net/mlx5: fix flow tag hash list conversion")
Cc: stable@dpdk.org

Acked-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
2b5b1aeb39 net/mlx5: optimize counter extend memory
Counter extend memory was allocated for non-batch counters to save the
extra DevX object. Currently, for a non-batch counter which does not
support aging, the entry in the generic counter struct is used only
while the counter is free in the free list, and the bytes field in
the struct is used only while the counter is allocated and in use.

In this case, the DevX object can be saved in the generic counter
struct, in a union with the entry memory while the counter is
allocated and in a union with the bytes memory while the counter is
free. The pool type is also not needed, as non-fallback mode has only
generic and aging counters; a single bit indicating whether the pool
is aged is enough.

This eliminates the counter extend info struct and saves the memory.
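
Conceptually (the struct and field names below are illustrative
assumptions, not the actual PMD definitions), the layout becomes:

    /* Illustrative sketch only: the DevX object pointer reuses whichever
     * field is idle in the counter's current state, so no separate
     * extend struct is required. */
    struct counter_sketch {
        union {
            void *free_list_entry; /* linkage, used only while free */
            void *dcs_when_active; /* DevX object, while allocated */
        };
        union {
            uint64_t bytes;        /* byte count, used only while allocated */
            void *dcs_when_free;   /* DevX object, while in the free list */
        };
        uint64_t hits;
    };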

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
cfbdc3f938 net/mlx5: rename flow counter macro
Add the MLX5_ prefix to the defined counter macro names.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
e7138997e0 net/mlx5: make shared counters thread safe
The shared counters save the counter index to a three-level table. As
the three-level table now supports multi-thread operations, the
shared counters can take advantage of it to become thread safe.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
3aa279157f net/mlx5: synchronize flow counter pool creation
Currently, counter operations are not thread safe as the counter
pools' array resize is not protected.

This commit protects the container pools' array resize using a spinlock.
The original counter pool statistic memory allocation is moved to the
host thread in order to minimize the critical section, since the pool
statistic memory is required only at query time. The container pools'
array has to be resized by the user threads, because the new pool may
be used by other rte_flow APIs before the host thread finishes the
resize; if the pool were not saved to the pools' array, the specified
counter memory would not be found, as the pool would be missing from
the counter management pool array. The pool raw statistic memory will
be filled in the host thread.

The shared counters will be protected in another commit.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
994829e695 net/mlx5: remove single counter container
A flow counter allocated by a batch API couldn't be assigned
to a flow in the root table (group 0) with old rdma-core versions.
Hence, a root table flow counter required a PMD mechanism to manage
counters which were allocated singly.

Currently, batch counters are supported in the root table when using
a new rdma-core version that includes the
MLX5_FLOW_ACTION_COUNTER_OFFSET enum and a kernel driver that
includes the MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum.

When the PMD uses the rdma-core API to assign a batch counter to a
root table flow with an invalid counter offset, it gets an error only
if batch counter assignment for the root table is supported.
Performing this trial at initialization time helps to detect the
support.

Using the above trial, if the support is valid, remove the management
of the single counter container from the fast counter mechanism.
Otherwise, move the counter mechanism to fallback mode.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
df051a3e77 net/mlx5: optimize shared counter memory
Instead of using special memory to indicate a shared counter, this
patch uses the counter handler's reserved memory to indicate it. A
counter index with MLX5_CNT_SHARED_OFFSET means a shared counter.

This patch also prepares for a coming adjustment to use batch
counters as shared counters.
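
Conceptually (macro and helper names are illustrative assumptions, not
the PMD's definitions), the index encoding works like:

    #include <stdint.h>

    /* Illustrative sketch only: a high bit in the counter index marks a
     * shared counter, so no extra memory is needed to record sharing. */
    #define CNT_SHARED_OFFSET (1u << 31)

    static inline uint32_t
    cnt_index_mark_shared(uint32_t idx)
    {
        return idx | CNT_SHARED_OFFSET;
    }

    static inline int
    cnt_index_is_shared(uint32_t idx)
    {
        return (idx & CNT_SHARED_OFFSET) != 0;
    }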

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
6b7c717ed1 net/mlx5: locate aging pools in the general container
Commit [1] introduced a different container for the aging counter
pools. In order to save container memory, the aging counter pools
can be located in the general pool container.

This patch locates the aging counter pools in the general pool
container and removes the aging container management.

[1] commit fd143711a6ea ("net/mlx5: separate aging counter pool range")

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Dekel Peled
d5a7d04c79 net/mlx5: support query of age action
A recent patch [1] added to ethdev the API for query of the age
action. This patch implements the query of the age action in the
mlx5 PMD using this API.

[1] https://mails.dpdk.org/archives/dev/2020-October/184864.html
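
For reference, a hedged sketch of the query path from the application side
(the flow handle is assumed to have been created earlier with an AGE
action):

    #include <stdio.h>
    #include <rte_flow.h>

    static void
    query_age(uint16_t port, struct rte_flow *flow)
    {
        const struct rte_flow_action age = { .type = RTE_FLOW_ACTION_TYPE_AGE };
        struct rte_flow_query_age resp;
        struct rte_flow_error err;

        if (rte_flow_query(port, flow, &age, &resp, &err) == 0 &&
            resp.sec_since_last_hit_valid)
            printf("aged=%u idle=%us\n", (unsigned int)resp.aged,
                   (unsigned int)resp.sec_since_last_hit);
    }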

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 22:29:25 +01:00
Ivan Ilchenko
62024eb827 ethdev: change stop operation callback to return int
Change the eth_dev_stop_t return value from void to int.
Make eth_dev_stop_t implementations across all drivers return
negative errno values in case of error conditions.

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 22:26:41 +02:00
Jiawei Wang
00c10c2211 net/mlx5: update translate function for mirroring
Translate the attributes of the sample action, which include the
sample ratio and the sub-actions list.
The PMD checks the number of destination actions in the current flow;
if multiple destination actions are found, it creates a new
destination array RDMA action that groups the actions for each
destination. Currently only port or queue is supported as a
destination action, and only an encap action can be attached to a
port destination.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
b4c0ddbfcc net/mlx5: split sample flow into two sub-flows
A flow with a sample action will be split into two sub-flows:
the prefix sub-flow with all the actions preceding the sample
action and the sample action itself, and the suffix sub-flow with
the actions following the sample action.

The original items remain in the prefix sub-flow; an implicit tag
action with a unique id is added to set a metadata register, and the
suffix sub-flow uses the tag item to match that unique id.

The flow is split as below:

Original flow: items / actions pre / sample / actions sfx ->
    prefix sub flow -
            items / actions pre / set_tag action / sample
    suffix sub flow -
            tag_item / actions sfx

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
96b1f0273c net/mlx5: validate sample action
Add sample action validate function.

The Sample Flow is supported in the NIC-RX and FDB domains. For
NIC-RX, the Sample Flow action list must include the destination
queue action.

Only the NIC-RX domain supports the optional actions list. FDB
doesn't support any optional actions; the sampled packets are always
forwarded to the E-Switch manager port.
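
For reference, a hedged NIC-RX usage sketch (ratio and queue numbers are
illustrative assumptions): half of the matched packets are duplicated to
queue 3 while the original traffic goes to queue 0.

    #include <rte_flow.h>

    static const struct rte_flow_action_queue sample_dest = { .index = 3 };
    static const struct rte_flow_action sample_sub_actions[] = {
        /* On NIC-RX the sub-action list must contain a destination queue. */
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &sample_dest },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    static const struct rte_flow_action_sample sample_conf = {
        .ratio = 2, /* sample 1 of every 2 packets */
        .actions = sample_sub_actions,
    };
    static const struct rte_flow_action_queue main_dest = { .index = 0 };
    static const struct rte_flow_action flow_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample_conf },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &main_dest },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };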

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Michael Baum
e96242efa4 net/mlx5: remove Rx queue object type field
Once the separation between Verbs and DevX is done using function
pointers, the type field of the Rx queue object structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
4c6d80f1c5 net/mlx5: separate Rx queue state modification
Separate Rx state modification to the Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
354cc08a2d net/mlx5: remove Tx queue object type field
Once the separation between Verbs and DevX is done using function
pointers, the type field of the Tx queue object structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
5d9f3c3f48 net/mlx5: separate Tx queue object modification
Separate Tx object modification to the Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
f49f44839d net/mlx5: share Tx control code
Move similar Tx object resource allocations and debug logs from the
DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
86d259cec8 net/mlx5: separate Tx queue object creations
As a preparation for Windows OS support, the Verbs operations should
be separated into another file.
This way, the build can easily cut the unsupported Verbs APIs from
the compilation process.

Define operation structure and DevX module in addition to the existing
Linux Verbs module.
Separate Tx object creation into the Verbs/DevX modules and update the
operation structure according to the OS support and the user
configuration.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
e7055bbfbe net/mlx5: reposition event queue number field
The eqn field becomes a direct field of sh, since it is also
relevant for Tx and Rx.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Suanming Mou
3e8f3e51fd net/mlx5: fix meter table definitions
As the metering and metadata features were developed at the same
time, the metering and metadata tables were defined with conflicting
values.

This causes the meter suffix flow to jump to the same metadata table
and creates a flow dead loop.

Adjust the metering table definitions to fix that issue.

Fixes: 46a5e6bc6a85 ("net/mlx5: prepare meter flow tables")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-08 19:58:11 +02:00
Thomas Monjalon
b142387b07 ethdev: allow drivers to return error on close
The device operation .dev_close was returning void.
This driver interface is changed to return an int.

Note that the API rte_eth_dev_close() is still returning void,
although a deprecation notice is pending to change it as well.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Reviewed-by: Sachin Saxena <sachin.saxena@oss.nxp.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
2020-09-30 19:19:13 +02:00
Suanming Mou
bf615b077d net/mlx5: manage header reformat actions with hashed list
To manage encap/decap header format actions, the mlx5 PMD used a
singly linked list; lookup and insertion operations took too long
when there were millions of objects, and this impacted the flow
insertion/deletion rate.

In order to optimize the performance, a hashed list is engaged. The
list implementation is updated to support non-unique keys with few
collisions.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-09-30 19:19:09 +02:00
Xueming Li
c21e5facf7 net/mlx5: use bond index for netdev operations
In case of bonding, the device ifindex was detected as the PF
ifindex, so any operation using the ifindex was applied to the PF
instead of the bond device. These operations include MTU get/set,
link up/down, MAC address manipulation, etc.

This patch detects the bond interface ifindex and name for a PF that
joined a bond interface and uses them by default for netdev
operations.

Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-09-30 19:19:09 +02:00
Michael Baum
0c762e81da net/mlx5: share Rx queue drop action code
Move similar Rx queue drop action resource allocations from the Verbs
module to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
5eaf882e94 net/mlx5: separate Rx queue drop
Separate Rx queue drop creation into both Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
5a959cbfa6 net/mlx5: share Rx hash queue code
Move similar Rx hash queue object resource allocations from the DevX
and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
25ae7f1a5d net/mlx5: share Rx queue indirection table code
Move similar Rx indirection table object resource allocations from
the DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
66b96fa6a6 net/mlx5: remove indirection table type field
Once the separation between Verbs and DevX is done using function
pointers, the type field of the indirection table structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
85552726d3 net/mlx5: separate Rx hash queue creation
Separate Rx hash queue creation into both Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
87e2db37ef net/mlx5: separate Rx indirection table object creation
Separate Rx indirection table object creation into both Verbs and DevX
modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
c279f187ee net/mlx5: separate Rx queue object modification
Separate Rx object modification to the Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
1260a87b28 net/mlx5: share Rx control code
Move similar Rx object resource allocations and debug logs from the
DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
322870799e net/mlx5: separate Rx interrupt handling
Separate interrupt event handler into both Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Michael Baum
6deb19e1b2 net/mlx5: separate Rx queue object creations
As a preparation for Windows OS support, the Verbs operations should
be separated into another file.
This way, the build can easily cut the unsupported Verbs APIs from
the compilation process.

Define operation structure and DevX module in addition to the existing
Linux Verbs module.
Separate Rx object creation into the Verbs/DevX modules and update the
operation structure according to the OS support and the user
configuration.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-09-18 18:55:08 +02:00
Ophir Munk
7af10d29a4 net/mlx5/linux: refactor VLAN
File mlx5_vlan.c contains Netlink APIs (Linux dependent) as part of
the VM workaround implementation. Move this implementation to file
linux/mlx5_vlan_os.c. To remove the Netlink dependency in header
files, change the pointer of type 'struct mlx5_nl_vlan_vmwa_context *'
to 'void *'.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-09-18 18:55:06 +02:00
Ophir Munk
8bb2410ea3 net/mlx5: separate VLAN strip modification
When updating a queue VLAN stripping offload, either the WQ is
modified (Verbs) or the RQ is modified (DevX). Add a VLAN stripping
modify callback to 'struct mlx5_obj_ops' and assign it the specific
Verbs and DevX implementations: 'rxq_obj_modify_wq_vlan_strip' and
'rxq_obj_modify_rq_vlan_strip' respectively.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-09-18 18:55:06 +02:00
Ophir Munk
1f66ac5bbe net/mlx5: remove more Direct Verbs dependencies
Several DV-based structs of type 'struct mlx5dv_devx_XXX' are replaced
with 'void *' to enable compilation under non-Linux operating systems.
New getter functions were added to retrieve the specific fields that
were previously accessed directly.

Replaced structs:
'struct mlx5dv_pp *'
'struct mlx5dv_devx_event_channel *'
'struct mlx5dv_devx_umem *'
'struct mlx5dv_devx_uar *'

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-09-18 18:55:06 +02:00
Ophir Munk
f00f6562e1 net/mlx5: remove netlink dependency in shared code
This commit adds a Linux implementation of the routine
mlx5_os_mac_addr_flush as a wrapper to the Netlink API, to avoid
direct calls under non-Linux operating systems.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-09-18 18:55:06 +02:00
Ophir Munk
e9c0b96e35 net/mlx5: move Linux ifname function
The mlx5_get_ifname() prototype includes the 'IF_NAMESIZE' definition
from the Linux file net/if.h. Since this API is only used under
Linux, and to enable compilation under non-Linux OS, move this
prototype from the shared file mlx5.h to the file linux/mlx5_os.h.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-09-18 18:55:06 +02:00
Suanming Mou
3fe889617b net/mlx5: manage modify actions with hashed list
To manage header modify actions, the mlx5 PMD used a singly linked
list; lookup and insertion operations took too long when there were
millions of objects, and this impacted the flow insertion/deletion
rate.

In order to optimize the performance, a hashed list is engaged. The
list implementation is updated to support non-unique keys with few
collisions.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-09-18 18:55:06 +02:00
Suanming Mou
e1293b10de net/mlx5: fix counter query
Currently, the counter query requires the counter ID to be 4-aligned.
In non-batch mode, the counter pool might get a counter ID that is
not 4-aligned. In this case, the counter should be skipped, or the
query will fail.

Skip a counter whose ID is not 4-aligned as the first counter in the
non-batch counter pool to avoid an invalid counter query. Once a new
min_dcs ID lower than the skipped counters appears in the pool, the
skipped counters will be returned to the pool free list for use.

Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-30 00:41:23 +02:00
Parav Pandit
392bf9084d common/mlx5: register class drivers through common layer
Migrate the mlx5 net, vdpa and regex PMDs to start using the mlx5
common class driver.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-28 19:01:30 +02:00
Viacheslav Ovsiienko
161d103b23 net/mlx5: add queue start and stop
The mlx5 PMD did not support the queue_start and queue_stop eth_dev
API routines, so a queue could not be suspended and resumed during
device operation.

There is a use case where this feature is crucial for applications:

- there is a secondary process handling the queue
- the secondary process crashed/aborted
- some mbufs were allocated or used by the secondary application
- some mbufs were allocated by Rx queues to receive packets
- some mbufs were placed in the send queue
- the queue goes into an undefined state

In this case there was no reliable way for the restarted secondary
process to recover queue handling other than resetting the queue to
its initial state, freeing all involved resources, including the
buffers involved in queue operations, resetting the mbuf pools, and
then reinitializing the queue to a working state:

- reset the mbuf pool, allocate all mbufs to bring the pool into a
  safe state after the crash and allow safe mbuf free calls
- stop the queue, free all potentially involved mbufs
- reset the mbuf pool again
- start the queue, reallocate the mbufs needed

This patch introduces the queue start/stop feature with some
limitations:

- hairpin queues are not supported
- it is the application's responsibility to synchronize start/stop
  with the datapath routines; rx/tx_burst must be suspended during
  the queue_start/queue_stop calls
- it is the application's responsibility to track queue usage and
  provide coordinated queue_start/queue_stop calls from the
  secondary and primary processes
- Rx queues with a vectorized Rx routine and engaged CQE
  compression are not currently supported by this patch
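
For reference, a hedged sketch of the per-queue recovery step from the
restarted secondary process (port/queue ids are illustrative; the datapath
burst calls are assumed to be already suspended):

    #include <rte_ethdev.h>

    static int
    recover_rxq(uint16_t port, uint16_t queue)
    {
        int ret;

        ret = rte_eth_dev_rx_queue_stop(port, queue); /* free involved mbufs */
        if (ret != 0)
            return ret;
        /* ... reset and refill the mbuf pool here ... */
        return rte_eth_dev_rx_queue_start(port, queue); /* reallocate mbufs */
    }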

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-07-21 15:46:30 +02:00