Commit Graph

221 Commits

Author SHA1 Message Date
Jiawei Wang
e39226bde5 net/mlx5: control flow rules with identical pattern
In order to allow/disallow configuring rules with identical
patterns, the new device argument 'allow_duplicate_pattern'
is introduced.
If allowed, such rules are inserted successfully and only the
first rule takes effect.
If disallowed, the first rule is inserted and subsequent rules
are rejected.

The default is to allow.
Set it to 0 to disallow, for example:
	-a <PCI_BDF>,allow_duplicate_pattern=0

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:35 +02:00
Shun Hao
a3b7af90ba net/mlx5: validate meter action in policy
This adds validation when creating a policy with a meter action.

Currently, the meter action is only allowed for the green color in a
policy, and at most 8 meters are supported in one meter hierarchy.

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:35 +02:00
Shun Hao
48fbc1be82 net/mlx5: fix meter policy flow match item
Currently, when creating a meter policy, a src port_id match item is
always added in the switch domain. So if one meter is used by another
port, it will not work correctly.

This issue is solved as follows:
1. If the policy fate action is port_id, add the src port_id match
item; the meter cannot be shared by another port.
2. If the policy fate action isn't port_id, don't add the src port_id
match item; the meter can be shared by other ports.

This fix enables one meter to be shared by different ports. A user can
create a meter flow using a port_id match item to make this meter
shared by other ports.
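
A hypothetical testpmd example of such a meter flow with an explicit
port_id match item (the port and meter ids are arbitrary, and the
meter is assumed to have been created beforehand):

    flow create 0 ingress transfer pattern port_id id is 0 / eth / end
        actions meter mtr_id 1 / end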

Fixes: afb4aa4f12 ("net/mlx5: support meter policy operations")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:26 +02:00
Dmitry Kozlyuk
e60561cc95 doc: add limitation for ConnectX-4 with L2 in mlx5 guide
ConnectX-4 and ConnectX-4 Lx NICs require all L2 headers of transmitted
packets to be inlined. By default, only the first 18 bytes are inlined,
which is insufficient if additional encapsulation is used, like Q-in-Q.
Thus, the default settings caused such traffic to be dropped on Tx.
Document a recommendation to increase the inlined data size in such cases.
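
For example, using the txq_inline_min devarg (the value 22 covers an
Ethernet header plus two VLAN tags and is only illustrative):

    -a <PCI_BDF>,txq_inline_min=22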

Fixes: 505f1fe426 ("net/mlx5: add Tx devargs")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-08 22:09:23 +02:00
Tal Shnaiderman
a6a18d06f5 net/mlx5: add TCP and IPv6 to supported items for Windows
The WINOF2 2.70 Windows kernel driver allows DevX rule creation
of the TCP and IPv6 types.

Added the types to the supported items in mlx5_flow_os_item_supported
to allow them to be created in the PMD.

Added a description of the new rules supported by the WINOF2 2.70
Windows kernel driver to the mlx5 driver guide.
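
A minimal sketch of what such an OS-layer item filter looks like (the
exact case list here is an assumption, not the patch itself):

    static inline bool
    mlx5_flow_os_item_supported(int item)
    {
            switch (item) {
            case RTE_FLOW_ITEM_TYPE_END:
            case RTE_FLOW_ITEM_TYPE_VOID:
            case RTE_FLOW_ITEM_TYPE_ETH:
            case RTE_FLOW_ITEM_TYPE_IPV4:
            case RTE_FLOW_ITEM_TYPE_UDP:
            case RTE_FLOW_ITEM_TYPE_TCP:  /* newly allowed by WINOF2 2.70 */
            case RTE_FLOW_ITEM_TYPE_IPV6: /* newly allowed by WINOF2 2.70 */
                    return true;
            default:
                    return false;
            }
    }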

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:13 +02:00
Thomas Monjalon
4821fa1099 doc: improve lstopo tip
The lstopo tool from the hwloc package can provide a graphical
or textual view.
In its textual form, the --merge option gives a shorter summary
which fits DPDK's needs well.
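
For example (lstopo-no-graphics is the textual variant shipped with
hwloc):

    lstopo-no-graphics --merge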

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2021-05-21 15:42:02 +02:00
Asaf Penso
bdbe00de10 doc: fix description of some mlx5 features
The support of the new RTE_FLOW_ITEM_TYPE_INTEGRITY
was added to the 21.02 release notes by mistake.

The support of the Sub-Function representors was missing
from the release notes and the mlx5 guide.

Fixes: 79f8952783 ("net/mlx5: support integrity flow item")
Fixes: cb95feefdd ("net/mlx5: support sub-function representor")

Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2021-05-19 16:20:35 +02:00
Bing Zhao
4f74cb68b9 net/mlx5: support connection tracking between two ports
After creating a connection tracking context, it can be used between
two ports. For each port, a flow for one direction of traffic is
created.

The context can only be shared between the owner port and the peer
port that was specified when it was created. Only the owner port
can update or query the context in the current implementation.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:18 +02:00
Bing Zhao
2db75e8b1d net/mlx5: add actions for connection tracking creation
Allocate a CT object from the management pools and create the DR
actions for both directions by default.

If there is no available connection tracking action, a new pool is
created with a fixed-size bulk allocation. Right now, all the
resources are controlled by a linked list.

The ASO connection tracking context associated with these actions
needs to be updated via WQE before being used for steering.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:13 +02:00
Bing Zhao
8ebbc01f42 net/mlx5: use meter color register for connection tracking
Based on the capacity, 3 registers could be used. Due to the current
register allocation, only REG_C_3, the meter color register, can be
reused right now.

Therefore, no more than one ASO action can be supported in the same
flow.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:11 +02:00
Bing Zhao
0c6285b727 common/mlx5: check connection tracking offload capability
During startup, the ASO connection tracking offload capability is
queried via the HCA_CAP_QUERY command. If the HW doesn't support ASO
CT, the value is 0 by default. The following initialization is then
skipped and the creation of a CT object returns a failure directly.

Subsequent CT creation also checks this capability. With an old
driver, pre-processing macros are used in order to make the
compilation pass.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:10 +02:00
Gregory Etelson
79f8952783 net/mlx5: support integrity flow item
MLX5 PMD supports the following integrity filters for outer and
inner network headers:
- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok

`level` values 0 and 1 reference outer headers.
`level` > 1 references inner headers.

Flow rule items supplied by the application must explicitly specify
the network headers referred to by the integrity item. For example:
flow create 0 ingress
  pattern
    integrity level is 0 value mask l3_ok value spec l3_ok /
    eth / ipv6 / end …

or

flow create 0 ingress
  pattern
    integrity level is 0 value mask l4_ok value spec 0 /
    eth / ipv4 proto is udp / end …

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-04 17:37:22 +02:00
Michael Baum
f3191849f2 net/mlx5: support flow count action handle
The existing API supports a counter action to count the traffic of a
single flow. The user can share the count action among different flows
using the shared flag and the same counter ID in the count action
configuration.

A recent patch [1] introduced the indirect action API.
Using this API, an action can be created as indirect, unattached to any
flow rule.
Multiple flows can then be created using the same indirect action.
The new API also supports the query operation on an indirect action.

The new API is more efficient because the driver gets its own handle
for the count action instead of managing a mapping between the user ID
and the driver handle.

Support the create, query and destroy indirect action operations for
the flow count action.

The application will use the indirect action query operation to query
this count action.

In the meantime, the old sharing mechanism (with the sharing flag)
continues to be supported, and the user can choose how to share the
counter.
The new indirect action API is only supported in DevX, so sharing a
counter action in Verbs can only be done through the old mechanism.

[1] https://mails.dpdk.org/archives/dev/2020-July/174110.html
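
A minimal sketch of the new usage, assuming the DPDK 21.05 indirect
action API (error handling trimmed; port_id is assumed defined):

    struct rte_flow_error error;
    struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    const struct rte_flow_action count = {
            .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_action_handle *handle;
    struct rte_flow_query_count stats = { .reset = 0 };

    /* Create the indirect (unattached) count action once. */
    handle = rte_flow_action_handle_create(port_id, &conf, &count, &error);
    /* Reference it from any number of flow rules via
     * RTE_FLOW_ACTION_TYPE_INDIRECT, then query it directly. */
    rte_flow_action_handle_query(port_id, handle, &stats, &error);
    rte_flow_action_handle_destroy(port_id, handle, &error);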

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-30 12:41:07 +02:00
Li Zhang
aa065a9cf3 net/mlx5: support meter PPS profile
Currently, meter algorithms only support byte units for meter profiles.
Using the ASO feature, the driver can support metering in per-packet
units.

Add support for packet units in meter profiles.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-27 14:27:08 +02:00
Shun Hao
51ec04dc7b net/mlx5: connect meter policy to created flows
Currently, an ASO meter must be followed by a policy table, so this
adds support for connecting the meter to the policy table.

There are several cases to be considered:
1. For a non-termination policy, connect the meter to the default
policy table.
2. For the non-RSS termination policy case, simply get the policy
table id and connect the meter to it.
3. For the RSS termination policy case, the flow needs to be split due
to the RSS info in the policy; translate each sub-flow using that RSS,
then create the sub policy table to be connected.
4. In the termination policy case, if there are no actions modifying
the packet before the meter, there is no need to use set_tag to save
the meter id in a register. Only add a new flow in the drop table using
the same match criteria as the suffix flow, to save a cache miss.

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-27 13:20:36 +02:00
Li Zhang
5df35867d9 net/mlx5: optimize meter statistics
Meter statistics used one counter per policer action, 4 counters per
meter in total. This causes cache misses and lowers the data
forwarding performance.

To optimize this, support a pass counter for green and a drop counter
for red, two counters per meter in total. Also use the global drop
statistics for all meter drop actions.

The limitations are as below:
1. The yellow counter is not supported and returns 0.
2. All the meter colors with the drop action will be counted only by
   the global drop statistics.
3. The red color must be with the drop action.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:27:49 +02:00
Haifei Luo
50c383793b ethdev: dump single flow rule
The previous implementation supports dumping all the flows. Add a new
rte_flow argument to rte_flow_dev_dump to dump a single flow.
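
After this change, the prototype looks as follows (passing a NULL flow
keeps the old dump-all behavior):

    int
    rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow,
                      FILE *file, struct rte_flow_error *error);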

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-04-14 13:19:55 +02:00
Dong Zhou
cb299214a6 doc: update push/pop VLAN support in mlx5 guide
Updates the documentation for push/pop VLAN support. In E-Switch
mode, push VLAN on ingress traffic and pop VLAN on egress traffic
are both supported.

Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Reviewed-by: Asaf Penso <asafp@nvidia.com>
2021-04-13 13:37:50 +02:00
Salem Sol
fd44e8288f net/mlx5: support NVGRE encap action in sampling
Add support for NVGRE encap as a sample action
and validate it.

Signed-off-by: Salem Sol <salems@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-04-08 01:09:24 +02:00
Salem Sol
be47c9819f net/mlx5: support VXLAN encap action in sampling
Add support for VXLAN encap as a sample action
and validate it.

Signed-off-by: Salem Sol <salems@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-04-08 01:09:24 +02:00
Alexander Kozyrev
5cc6764267 net/mlx5: reject tunnel ID modification
Modification of the 802.1Q Tag Identifier, VXLAN Network
Identifier or GENEVE Network Identifier is not supported.
Reject attempts to modify these fields via the MODIFY_FIELD
action and document this mlx5 driver limitation.

Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:21:30 +02:00
Xueming Li
08c2772fc7 net/mlx5: support list of representor PF
To probe representors from different kernel bonding PFs, 2 separate
devargs had to be specified, like this:
    -a 03:00.0,representor=pf0vf[0-3] -a 03:00.0,representor=pf1vf[0-3]

This patch supports a range or list in the PF section of the devargs,
so the shorter alternative to the above is:
    -a 03:00.0,representor=pf[0-1]vf[0-3]

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:30 +02:00
Xueming Li
f926cce3fa net/mlx5: refactor bonding representor probing
To probe a representor on the 2nd PF of a kernel bonding device, the
PF1 BDF had to be specified in the devarg:
  <PF1_BDF>,representor=0
When closing the bonding device, all representors had to be closed
together, and this implies that all representors have to use the
primary PF of the bonding device. So after probing a representor port
on the 2nd PF, when locating the newly probed device using the device
argument, the filter used the 2nd PF as the PCI address and failed to
locate the new device.

Conflicts arose from the current representor devargs:
 - Use PCI BDF to specify the representor owner PF.
 - Use PCI BDF to locate the probed representor device.
 - The PMD uses the primary PCI BDF as the PCI device.

To resolve such conflicts, a new representor syntax is introduced here:
  <primary BDF>,representor=pfXvfY
All representors must use the primary PF as the owner PCI device; the
PMD internally locates the owner PCI address by checking the "pfX" part
of the representor. To EAL, all representors are registered to the
primary PCI device and the 2nd PF is hidden from EAL, thus all searches
are consistent.

The same applies to VF representors; the HPF (host PF on BlueField)
uses the same syntax to probe, for example: representor=pf1vf[0-3,-1]

This patch also adds the PF index into the kernel bonding representor
port name:
	<BDF>_<ib_name>_representor_pf<X>vf<Y>

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:28 +02:00
Xueming Li
cb95feefdd net/mlx5: support sub-function representor
This patch adds support for the SF representor. Similar to the VF
representor, the switch port name of an SF representor in the
phys_port_name sysfs key is "pf<x>sf<y>".

The device representor argument is "representor=sf[list]"; a list
member can be a mix of instances and ranges. Example:
  representor=sf[0,2,4,8-12,-1]

To probe VF representors and SF representors, they need to be
separated into 2 devargs:
  -a <BDF>,representor=vf[list] -a <BDF>,representor=sf[list]

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:25 +02:00
Jiawei Wang
32a74d8127 doc: update sample actions support in mlx5 guide
Updates the documentation for supported sample actions in the NIC Rx
and E-Switch steering flow.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-23 13:52:31 +01:00
Thomas Monjalon
de34aaa96b doc: replace hugepages commands with dedicated tool
The dpdk-hugepages.py tool, added in DPDK 20.11,
is referenced in the guides instead of more complicated commands.

The original Linux commands are kept in linux_gsg/sys_reqs.rst
and nics/build_and_test.rst.
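
Typical invocations of the tool:

    dpdk-hugepages.py --show
    dpdk-hugepages.py -p 1G --setup 2G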

Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
2021-02-11 23:26:37 +01:00
Sarosh Arif
e3f15be4d4 doc: replace testpmd with dpdk-testpmd in commands
Replace testpmd with dpdk-testpmd in all commands,
because when compiling through meson, dpdk-testpmd is the default
application name.

Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2021-02-11 15:42:18 +01:00
Viacheslav Ovsiienko
b8ee0a16cb doc: fix mark action zero value in mlx5 guide
The zero value in the flow MARK action is reported in the Rx datapath
as tagged with a zero FDIR ID. Once a packet is marked in the flow
engine, it will always be reported as tagged. For metadata, only the
zero value means there is "no metadata" in the packet, and the metadata
flag is not set in that case.

Fixes: 3ceeed9f78 ("doc: update flow mark action in mlx5 guide")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-02-10 22:27:03 +01:00
Viacheslav Ovsiienko
5f5b0ac904 doc: fix supported feature table in mlx5 guide
This sets the correct minimal requirements for these features:

- Buffer Split offload is supported/verified on ConnectX-5
- Tx scheduling requires ConnectX-6 Dx and depends on the firmware version

Fixes: cb7b0c24c8 ("doc: update hardware offloads support in mlx5 guide")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Asaf Penso <asafp@nvidia.com>
2021-02-04 18:19:37 +01:00
Alexander Kozyrev
fdc44cdc78 net/mlx5: fix miniCQE configuration for Verbs
Verbs cannot be used to configure the newly introduced miniCQE formats
for Flow Tag and L3/L4 Header compression. Support for these formats
has been added to the DevX configuration only, and the Rx queue
descriptor has been updated with the CQE compression format information
only as well. But the datapath relies on this info no matter which
method is used for Rx queue configuration. Set the proper CQE
compression format information in the Verbs configuration to fix the
miniCQE parsing logic.

Fixes: 54c2d46b16 ("net/mlx5: support flow tag and packet header miniCQEs")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-02-04 18:19:36 +01:00
Xiaoyu Min
db5866c870 doc: group mlx5 shared actions
Put all supported shared actions in one new table.

Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Reviewed-by: Asaf Penso <asafp@nvidia.com>
2021-02-04 15:38:37 +01:00
Alexander Kozyrev
641dbe4fb0 net/mlx5: support modify field flow action
Add support for the new MODIFY_FIELD action to the Mellanox PMD.
This is the generic API that allows manipulating any packet
header field by copying data from another packet field or from
mark, metadata, tag, or an immediate value (or a pointer to it).

Since the API is generic and covers a lot of actions under its
umbrella, it makes sense to implement all the mechanics gradually
in order to move to this API for any packet field manipulations
in the future. This is the first step of RTE flows consolidation.

The modify field RTE flow action supports three operations: set,
add and sub. This patch brings to life only the "set" operation.
Support is provided for any packet header field; meta/tag/mark
and immediate values can be used as a source.

There are a few limitations in this first version of API support:
- encapsulation levels are not supported, just the outermost header
can be manipulated for now.
- offsets can only be 4-byte aligned: 32, 64 and 96 for IPv6.
- the special ITEM_START ID is not supported as we do not allow
crossing packet header field boundaries yet.
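
A hypothetical testpmd command for the "set" operation; the token
names follow the testpmd flow syntax of this era and the field choice
is only illustrative:

    flow create 0 ingress pattern eth / ipv4 / end actions
        modify_field op set dst_type ipv4_ttl
        src_type value src_value 64 width 8 / end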

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-29 18:16:11 +01:00
Viacheslav Ovsiienko
1d89c40453 net/mlx5: support mbuf fast free offload
This patch adds support of the mbuf fast free offload to the
transmit datapath. This offload allows freeing the mbufs on
transmit completion in the most efficient way. It requires that
all mbufs be allocated from the same pool, have a reference
counter value of 1, and have no externally attached buffers.
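
A minimal sketch of enabling the offload at port configuration time,
assuming the 21.02-era flag name (port_id and queue counts are assumed
defined):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
            /* All Tx mbufs must come from one pool, have refcnt == 1,
             * and carry no externally attached buffers. */
            port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
    rte_eth_dev_configure(port_id, nb_rx_queues, nb_tx_queues, &port_conf);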

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-29 18:16:08 +01:00
Viacheslav Ovsiienko
3ceeed9f78 doc: update flow mark action in mlx5 guide
Some limitations were added for the MARK action value range.

Fixes: 2d241515eb ("net/mlx5: add devarg for extensive metadata support")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-29 18:16:08 +01:00
Dong Zhou
5f8ae44dd4 net/mlx5: enlarge maximal flow priority
Currently, the maximal flow priority in non-root tables exposed to the
user is 4, which is not enough to do flow matching by priority, such
as LPM: for one IPv4 address, 32 priorities are needed, one for each
bit of the 32-bit mask length.

The PMD will manage 3 sub-priorities per user priority according to L2,
L3 and L4. The internal priority is 16 bits, so the user can use
priorities from 0 to 21843.

Those enlarged flow priorities are only used for ingress or egress
flow groups greater than 0 and for any transfer flow group.
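
A hypothetical testpmd rule using one of the enlarged priorities in a
non-root group:

    flow create 0 ingress group 1 priority 21843 pattern eth / end
        actions drop / end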

Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-29 18:16:07 +01:00
Jiawei Wang
07627fbf15 net/mlx5: support E-Switch mirroring with modify action
When an E-Switch flow contains both a modify action and a sample
action with ratio=1, and the modify action is placed after the sample
action, the modification should only take effect after sampling.
The MLX5 PMD will detect the above case and split the E-Switch flow
into two sub-flows, similar to what was done for sample flows before:

 - the prefix sub flow with all actions preceding the sample and the
   sample action itself, also append the new jump action after sample
   in the prefix sub flow;
 - the suffix sub flow with the modify action and other actions
   following the sample action.

The flow split as below:

Original flow: items / actions pre / sample / modify / actions sfx
    prefix sub flow -
    items / actions pre / set_tag action / sample / jump
    suffix sub flow -
    tag_item / modify / actions sfx

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-29 18:16:07 +01:00
Jiawei Wang
6a951567c1 net/mlx5: support E-Switch mirroring and jump in one flow
mlx5 E-Switch mirroring is implemented as multiple destination array in
one steering table. The array currently supports only port ID as
destination actions.

This patch adds jump action support to the array as one of the
destinations.
Packets can be mirrored to the port and jump to the next table in
the same destination array, allowing handling to continue in the new
table.

For example:
    set sample_actions 0 port_id id 1 / end
    flow create 0 ingress transfer pattern eth / end actions
    sample ratio 1 index 0 / jump group 1 / end
    flow create 1 ingress transfer group 1 pattern eth / end actions
    set_mac_dst mac_addr 00:aa:bb:cc:dd:ee / port_id id 2 / end

As a result of this flow, all the matched ingress packets are mirrored
to port id 1 and go to group 1. In group 1, packets are modified
with the destination MAC and sent to port id 2.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-29 18:16:07 +01:00
Jiawei Wang
bd49d1d343 net/mlx5: handle RSS action in sample
The PMD validates the RSS action in the sample sub-actions list,
then translates it into an rdma-core action that will be used as the
sample path destination.

If the RSS action is in both the sample sub-actions list and the
original flow, the RSS level and RSS types in the sample sub-actions
list should be consistent with those in the original flow, since the
expanded items for RSS should be the same for both actions.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-19 13:49:41 +01:00
Shiri Kuzin
e440d6cf58 net/mlx5: add GENEVE TLV option flow translation
The GENEVE TLV option matching flows must be created
using a translation function.

This function checks whether we have already created a DevX
object for the matching, and either creates the object
or updates its reference counter.

Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-19 03:30:16 +01:00
Shiri Kuzin
06cd4cf63f net/mlx5: add GTP PSC item translation
This patch adds the translation function which
sets the QFI and PDU type.

The next extension header field, which indicates the following
extension header type, is set to 0x85 - a PDU session
container.
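
A hypothetical testpmd rule matching the GTP PSC extension (the field
values are arbitrary):

    flow create 0 ingress pattern eth / ipv4 / udp / gtp /
        gtp_psc qfi is 9 pdu_t is 1 / end actions queue index 1 / end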

Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-19 03:30:13 +01:00
Tal Shnaiderman
5881b2d2d9 doc: add Windows support for mlx5
Windows is supported by the mlx5 PMD.
The mlx5 guide is updated with the needed information.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
2021-01-14 10:12:37 +01:00
Michael Baum
4a7f979af2 net/mlx5: remove CQE padding device argument
The data-path code doesn't take 'rxq_cqe_pad_en' into account and uses
a padded CQE in any case when the system cache-line size is 128B.

This makes the argument redundant.

Remove it.

Fixes: bc91e8db12 ("net/mlx5: add 128B padding of Rx completion entry")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Raslan Darawsheh
6c21c88736 doc: add ConnectX-6 Lx and BlueField-2 in mlx5 guide
This adds ConnectX-6 Lx and BlueField-2 to the list of NICs
supported by mlx5 PMD.

Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
2020-11-27 01:30:15 +01:00
Asaf Penso
cb7b0c24c8 doc: update hardware offloads support in mlx5 guide
In DPDK 20.11 the following offload features were added:
* Buffer Split
* Sampling
* Tunnel offload
* 2-port hairpin
* RSS shared action
* Age shared action

Update the relevant tables with OFED/rdma-core/NIC versions.

Signed-off-by: Asaf Penso <asafp@nvidia.com>
2020-11-27 01:30:15 +01:00
Asaf Penso
6457d0ecc1 doc: add Rx functions limitations in mlx5 guide
The mlx5 PMD supports various Rx burst functions.
Each function is enabled differently and supports different features.

Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-11-27 00:55:34 +01:00
Stephen Hemminger
db27370b57 eal: replace blacklist/whitelist options
Replace -w / --pci-whitelist with -a / --allow options
and --pci-blacklist with --block.
The -b short option remains unchanged.
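
For example, the same device selection before and after the rename:

    dpdk-testpmd -w 0000:03:00.0 -- -i   # deprecated
    dpdk-testpmd -a 0000:03:00.0 -- -i   # new spelling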

Allow the old options for now, but print a nag
warning since the old options are deprecated.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-11-16 00:11:22 +01:00
Alexander Kozyrev
54c2d46b16 net/mlx5: support flow tag and packet header miniCQEs
CQE compression allows us to save PCI bandwidth and improve
performance by compressing several CQEs together into a miniCQE.
But the miniCQE size is only 8 bytes and this limits the ability
to successfully keep the compression session in case of various
traffic patterns.

The current miniCQE format only keeps the compression session alive
in case of uniform traffic with the hash RSS as the only difference.
There are requests to keep the compression session in case of traffic
tagged by an RTE Flow Mark ID and mixed UDP/TCP and IPv4/IPv6 traffic.
Add 2 new miniCQE formats in order to achieve the best performance
for these traffic patterns: Flow Tag and Packet Header miniCQEs.

The existing rxq_cqe_comp_en devarg is modified to specify the
desired miniCQE format. Specifying 2 selects Flow Tag format
for better compression rate in case of RTE Flow Mark traffic.
Specifying 3 selects Checksum format (existing format for MPRQ).
Specifying 4 selects L3/L4 Header format for better compression
rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
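
For example, selecting the L3/L4 Header miniCQE format via the devarg:

    -a <PCI_BDF>,rxq_cqe_comp_en=4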

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:07 +01:00
Xueming Li
9fbe97f0ce net/mlx5: remove shared context lock
To support multi-thread flow insertion, this patch removes the shared
data lock since all resources should support concurrent protection.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:35:05 +01:00
Matan Azrad
86b59a1af6 net/mlx5: support VLAN matching fields
The fields ``has_vlan`` and ``has_more_vlan`` were added in rte_flow by
patch [1].

Using these fields, the application can match all the VLAN options with
a single flow: any, VLAN only and non-VLAN only.

Add support for the fields.
Also add support for QinQ packet matching.
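
A hypothetical testpmd rule matching VLAN-tagged packets only, using
the new field:

    flow create 0 ingress pattern eth has_vlan is 1 / end
        actions queue index 1 / end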

VLAN/QinQ limitations are listed in the driver document.

[1] https://patches.dpdk.org/patch/80965/

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 23:35:04 +01:00
Bing Zhao
fea928802d doc: update hairpin support in mlx5 guide
Hairpin between two ports will be supported by the mlx5 PMD.

The supported scenarios and limitations are listed in "mlx5.rst".

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:04 +01:00