Commit Graph

30065 Commits

Ferruh Yigit
675a6c1874 net/softnic: fix useless address check
Reported by "gcc (GCC) 12.0.0 20211003 (experimental)":

./drivers/net/softnic/rte_eth_softnic_cli.c:
	In function ‘tmgr_hierarchy_default’:
./drivers/net/softnic/rte_eth_softnic_cli.c:634:73:
	error: the comparison will always evaluate as ‘true’ for the
	address of ‘tc_valid’ will never be NULL [-Werror=address]
  634 | (&params->shared_shaper_id.tc_valid[0]) ? 1 : 0,
      |                                         ^

Fix it by removing the useless check.
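
For context, the address of an array member inside a valid structure can never be NULL, so the ternary always evaluates to 1. A minimal, self-contained sketch of the pattern (hypothetical struct, not the actual softnic one):

    #include <stdint.h>
    #include <stdio.h>

    struct params {
        uint32_t tc_valid[4]; /* stand-in for shared_shaper_id.tc_valid */
    };

    int main(void)
    {
        struct params p = { {0} };
        struct params *params = &p;

        /* &params->tc_valid[0] can never be NULL, so this is always 1 ... */
        int with_check = (&params->tc_valid[0]) ? 1 : 0;
        /* ... which is why the check can simply be dropped. */
        int without_check = 1;

        printf("%d %d\n", with_check, without_check);
        return 0;
    }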

Fixes: 1af2dc5111 ("net/softnic: add command for default tmgr hierarchy")
Fixes: 5eb676d74f ("net/softnic: add config flexibility to TM")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
2021-10-13 23:37:17 +02:00
Andrew Rybchenko
f55b61cec9 net/sfc: support port representor flow item
Add support for item PORT_REPRESENTOR, which should
be used instead of the ambiguous item PORT_ID.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Andrew Rybchenko
8d13351d4c net/octeontx2: support port representor flow action
The action PORT_ID implementation assumes ingress only. Its semantics
suggest that support for the equivalent action PORT_REPRESENTOR be added.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Andrew Rybchenko
d35dd287a2 net/mlx5: support represented port flow action
The semantics of the existing support for action PORT_ID suggest
that support for the equivalent action REPRESENTED_PORT be implemented.

Helper functions keep the port_id suffix since action
MLX5_FLOW_ACTION_PORT_ID is still used internally.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Andrew Rybchenko
54bd4ebe8b net/enic: support meta flow actions to overrule destinations
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for action PORT_ID.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-10-13 22:59:26 +02:00
Andrew Rybchenko
640b44aa5c net/bnxt: support meta flow actions to overrule destinations
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for action PORT_ID.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Andrew Rybchenko
a8321e0979 net/bnxt: support meta flow items to match on traffic source
Add support for items PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for item PORT_ID.

The use of item PORT_ID depends on the specified direction attribute.
Items PORT_REPRESENTOR and REPRESENTED_PORT, in turn, define traffic
direction themselves. The former matches traffic from the driver's
vNIC. The latter matches packets from either a v-port (network) or
a VF's vNIC (if the driver's port is a VF representor).

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Ivan Malov
9d2a349b38 ethdev: deprecate direction attributes in transfer flows
Attributes "ingress" and "egress" can only apply unambiguosly
to non-"transfer" flows. In "transfer" flows, the standpoint
is effectively shifted to the embedded switch. There can be
many different endpoints connected to the switch, so the
use of "ingress" / "egress" does not shed light on which
endpoints precisely can be considered as traffic sources.

Add relevant deprecation notices and suggest the use of precise
traffic source items (PORT_REPRESENTOR and REPRESENTED_PORT).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-13 22:59:26 +02:00
Ivan Malov
5da44faa80 ethdev: deprecate hard-to-use or ambiguous items and actions
PF, VF and PHY_PORT require that applications have extra
knowledge of the underlying NIC and thus are hard to use.
Also, the corresponding items depend on the direction
attribute (ingress / egress), which complicates their
use in applications and interpretation in PMDs.

The concept of PORT_ID is ambiguous as it doesn't say whether
the port in question is an ethdev or the represented entity.

Items and actions PORT_REPRESENTOR, REPRESENTED_PORT
should be used instead.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Ivan Malov
88caad251c ethdev: add represented port action to flow API
For use in "transfer" flows. Supposed to send matching traffic to the
entity represented by the given ethdev, at embedded switch level.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
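
As a hedged usage sketch (the port numbers, pattern and error handling are illustrative assumptions, not part of this patch): steer all matching traffic, at embedded switch level, to the entity represented by ethdev port 1.

    #include <rte_flow.h>

    static struct rte_flow *
    steer_to_represented_port(uint16_t proxy_port_id, struct rte_flow_error *err)
    {
        static const struct rte_flow_attr attr = { .transfer = 1 };
        static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* The entity (network port, VF, ...) represented by ethdev 1. */
        static const struct rte_flow_action_ethdev target = { .port_id = 1 };
        static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &target },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(proxy_port_id, &attr, pattern, actions, err);
    }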

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Ivan Malov
8edb6bc026 ethdev: add port representor action to flow API
For use in "transfer" flows. Supposed to send matching traffic to
the given ethdev (to the application), at embedded switch level.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Ivan Malov
49863ae2bf ethdev: add represented port item to flow API
For use in "transfer" flows. Supposed to match traffic entering the
embedded switch from the entity represented by the given ethdev.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.

Must not be combined with direction attributes.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00
Ivan Malov
081e42dab1 ethdev: add port representor item to flow API
For use in "transfer" flows. Supposed to match traffic
entering the embedded switch from the given ethdev.

Must not be combined with direction attributes.
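
A hedged usage sketch (the port numbers and the chosen action are illustrative): match traffic entering the embedded switch from ethdev port 0 itself; note the transfer attribute and the absence of ingress/egress.

    #include <rte_flow.h>

    static struct rte_flow *
    count_from_port_representor(uint16_t proxy_port_id, struct rte_flow_error *err)
    {
        static const struct rte_flow_attr attr = { .transfer = 1 };
        /* Traffic source: ethdev port 0 (the application side). */
        static const struct rte_flow_item_ethdev source = { .port_id = 0 };
        static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR, .spec = &source },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* Illustrative action: just count the matching packets. */
        static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_COUNT },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(proxy_port_id, &attr, pattern, actions, err);
    }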

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:25 +02:00
Konstantin Ananyev
f9bdee267a ethdev: hide internal structures
Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into private header (ethdev_driver.h).
A few minor changes keep DPDK building after that.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:59 +02:00
Konstantin Ananyev
27a300e6af ethdev: add API to retrieve multiple MAC addresses
Introduce rte_eth_macaddrs_get() to allow the user to retrieve all Ethernet
addresses assigned to a given port.
Change testpmd to use this new function and avoid referencing
rte_eth_devices[] directly.
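
A short usage sketch (the buffer size is an arbitrary illustrative bound):

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static void
    print_port_macs(uint16_t port_id)
    {
        struct rte_ether_addr addrs[8]; /* illustrative upper bound */
        int i, n = rte_eth_macaddrs_get(port_id, addrs, RTE_DIM(addrs));

        for (i = 0; i < n; i++) {
            char buf[RTE_ETHER_ADDR_FMT_SIZE];

            rte_ether_format_addr(buf, sizeof(buf), &addrs[i]);
            printf("port %u address %d: %s\n", port_id, i, buf);
        }
    }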

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:59 +02:00
Konstantin Ananyev
7a0935239b ethdev: make fast-path functions to use new flat array
Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required) and
PMD developers (no changes in PMDs are required).
One extra thing to note: Rx/Tx callback invocation will cause an extra
function call with these changes. That might cause some insignificant
slowdown on code paths where Rx/Tx callbacks are heavily involved.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:58 +02:00
Konstantin Ananyev
c87d435a4d ethdev: copy fast-path API into separate structure
Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future possible changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the data from rte_eth_dev public,
so we can still use inline functions for fast-path calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
The whole idea behind this new scheme:
1. PMDs keep setting up fast-path function pointers and related data
   inside the rte_eth_dev struct in the same way they did before.
2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
   (for secondary process) we call eth_dev_fp_ops_setup, which
   copies these function and data pointers into rte_eth_fp_ops[port_id].
3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
   we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
   into some dummy values.
4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
   flat array to call PMD specific functions.
That approach should allow us to make rte_eth_devices[] private
without introducing regressions and help to avoid changes in driver code.
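
A conceptual sketch of the dispatch pattern described above; the structure layout and names here are simplified assumptions, not the exact rte_eth_fp_ops definition:

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Simplified model: a flat per-port array of fast-path entry points that
     * stays public, while the full device structure becomes internal. */
    struct fp_ops {
        uint16_t (*rx_pkt_burst)(void *rxq, struct rte_mbuf **pkts, uint16_t n);
        void **rxq_data; /* per-queue private data, set up at device start */
    };

    static struct fp_ops fp_ops_table[32]; /* plays the role of rte_eth_fp_ops[] */

    static inline uint16_t
    fast_rx_burst(uint16_t port_id, uint16_t queue_id,
                  struct rte_mbuf **pkts, uint16_t n)
    {
        const struct fp_ops *ops = &fp_ops_table[port_id];

        /* No dereference of the (now private) rte_eth_dev structure. */
        return ops->rx_pkt_burst(ops->rxq_data[queue_id], pkts, n);
    }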

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:58 +02:00
Konstantin Ananyev
8d7d4fcdca ethdev: change input parameters for Rx queue count
Currently, the majority of fast-path ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends it would be
plausible to unify the parameter lists of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI change,
as eth_rx_queue_count_t is used by the public ethdev inline function
rte_eth_rx_queue_count().
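
From the application's side the public wrapper is unchanged; a small usage sketch:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    report_rxq_fill_level(uint16_t port_id, uint16_t queue_id)
    {
        /* Internally this now passes the queue pointer, not (dev, queue_id),
         * to the driver callback. */
        int used = rte_eth_rx_queue_count(port_id, queue_id);

        if (used < 0)
            printf("port %u queue %u: count not available (%d)\n",
                   port_id, queue_id, used);
        else
            printf("port %u queue %u: %d Rx descriptors in use\n",
                   port_id, queue_id, used);
    }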

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:58 +02:00
Konstantin Ananyev
c024496ae8 ethdev: allocate max space for internal queue array
At the queue configure stage, always allocate space for the maximum possible
number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer to a
pointer to internal queue data without extra checking of the current number
of configured queues.
That would help in future to hide rte_eth_dev and related structures.
It means that from now on, each ethdev port will always consume
(2 * sizeof(uintptr_t)) * RTE_MAX_QUEUES_PER_PORT
bytes of memory for its queue pointers.
With RTE_MAX_QUEUES_PER_PORT == 1024 (the default value) that is 16 KB per port.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:58 +02:00
Ivan Malov
0ead098383 net/sfc: refine pattern of group flows in tunnel offload
By design, in a GROUP flow, outer match criteria go to "ENC" fields
of the action rule match specification. The current HW/FW hasn't
got support for these fields (except the VXLAN VNI) yet.

As a workaround, start parsing the pattern from the tunnel item.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
9df2d8f5cc net/sfc: support counters in tunnel offload jump rules
Such a counter will only report the number of hits, which is actually
the sum of two contributions (the JUMP rule's own counter + indirect
increments issued by counters of the associated GROUP rules).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
8cd7725169 net/sfc: use action rules in tunnel offload jump rules
By design, JUMP flows should be represented solely by the outer rules. But
the HW/FW hasn't got support for setting Rx mark from RECIRC_ID on outer
rule lookup yet. Neither does it support outer rule counters. As a
workaround, an action rule of lower priority is used to do the job.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
8efb2f537e net/sfc: override match fields in tunnel offload jump rules
The current HW/FW doesn't allow matching on MAC addresses in outer rules.
One day this will change for sure, but right now a workaround is needed.

Match on VLAN presence in outer rules is also unsupported. Ignore it.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
7e5b479803 net/sfc: implement control path operations in tunnel offload
Support generic callbacks which callers will invoke to get
PMD-specific actions and items used to produce JUMP and
GROUP flows and to detect tunnel information.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
012bf708c2 net/sfc: support group flows in tunnel offload
GROUP is an in-house term for so-called "tunnel_match" flows.
On parsing, they are detected by virtue of PMD-internal item
MARK. It associates a given flow with its tunnel context.

Such a flow is represented by a MAE action rule which is
chained with the corresponding JUMP rule's outer rule
by virtue of matching on its recirculation ID.

GROUP flows do a narrower match than JUMP flows and
decapsulate matching packets (full offload).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
3a73dcfdb2 common/sfc_efx/base: match on recirc ID in action rules
Currently, there is an API for setting recirculation ID in
outer rules. Add an API to let action rules match on it.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
93de39f50a net/sfc: support jump flows in tunnel offload
JUMP is an in-house term for so-called "tunnel_set" flows. On parsing,
they are identified by virtue of actions MARK (PMD-internal) and JUMP.
The action MARK associates a given flow with its tunnel context.

Such a flow is represented by a MAE outer rule (OR) which has its
recirculation ID set. This ID is also associated with the tunnel
context. The OR is supposed to set this ID in 8 high bits of
Rx mark in matching packets. It also counts the packets.

Packets that hit the OR but miss in the action rule (AR) table
should go to the MAE admin PF (that is, to DPDK) by default.

Support for the use of action COUNT in JUMP
flows will be introduced by later patches.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 21:30:13 +02:00
Ivan Malov
5cf153e79c common/sfc_efx/base: support recirculation ID in outer rules
When an outer rule is hit, it can pass recirculation ID down
to action rule lookup, and action rules can match on this ID
instead of matching on the outer rule allocation handle.
By default, recirculation ID is assumed to be zero.

Add an API to set recirculation ID in outer rules.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
2021-10-13 16:39:11 +02:00
Ivan Malov
53a8051264 net/sfc: fence off 8 bits in Rx mark for tunnel offload
Later patches add support for tunnel offload on Riverhead (EF100).
A board can host at most 254 tunnels. Partially offloaded (missed)
tunnel packets are identified by virtue of 8 high bits in Rx mark.

Add basic definitions of the upcoming tunnel offload support and
take care of the dedicated bits in Rx mark across the driver.
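
For illustration, a hedged sketch of how an application could split such a mark; the shift is an assumption based on "8 high bits" of a 32-bit mark, and the names are hypothetical:

    #include <stdio.h>
    #include <rte_mbuf.h>

    #define TUNNEL_BITS_SHIFT 24 /* hypothetical: 8 high bits of a 32-bit mark */

    static void
    inspect_rx_mark(const struct rte_mbuf *m)
    {
        if (m->ol_flags & PKT_RX_FDIR_ID) {
            uint32_t mark = m->hash.fdir.hi; /* flow MARK delivery field */
            uint32_t tunnel_bits = mark >> TUNNEL_BITS_SHIFT;
            uint32_t user_bits = mark & ((1u << TUNNEL_BITS_SHIFT) - 1);

            printf("mark 0x%08x: tunnel bits 0x%02x, user bits 0x%06x\n",
                   mark, tunnel_bits, user_bits);
        }
    }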

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2021-10-13 16:39:11 +02:00
Hyong Youb Kim
fb92745461 net/enic: fix filter mode detection
vnic_dev_capable_filter_mode() currently fails when
CMD_CAPABILITY(CMD_ADD_FILTER) returns ERR_EPERM. In turn, this
failure causes the driver initialization to fail.

But firmware may legitimately return ERR_EPERM. For example, a VF vNIC
returns ERR_EPERM when it does not support filtering at all. So treat
ERR_EPERM as "no filtering available" instead of as an unexpected error.
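
The shape of the change, as a hedged stand-alone sketch; the helper, the enum and the use of -EPERM are hypothetical stand-ins for the enic/vnic internals:

    #include <errno.h>

    enum filter_mode { FILTER_MODE_NONE, FILTER_MODE_USNIC, FILTER_MODE_DPDK };

    /* Hypothetical stand-in for the CMD_CAPABILITY(CMD_ADD_FILTER) query. */
    extern int query_filter_capability(void *vdev, enum filter_mode *mode);

    static int
    detect_filter_mode(void *vdev, enum filter_mode *mode)
    {
        int ret = query_filter_capability(vdev, mode);

        if (ret == -EPERM) {
            /* Firmware legitimately refuses the query (e.g. a VF vNIC):
             * report "no filtering available" instead of failing init. */
            *mode = FILTER_MODE_NONE;
            ret = 0;
        }
        return ret;
    }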

Fixes: 322b355f21 ("net/enic/base: bring NIC interface functions up to date")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2021-10-13 15:40:50 +02:00
Chengwen Feng
f93819cf5a net/hns3: fix interrupt vector freeing
The intr_handle->intr_vec is allocated by rte_zmalloc() but freed by
free(); this patch fixes it.
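
The general rule being enforced, as a small hedged sketch (the names and element type are illustrative): memory obtained from rte_zmalloc() must be released with rte_free(), not libc free().

    #include <errno.h>
    #include <stdint.h>
    #include <rte_malloc.h>

    static int
    intr_vec_alloc(uint32_t **vec, unsigned int nb_entries)
    {
        /* rte_zmalloc() allocates zeroed memory from DPDK heaps ... */
        *vec = rte_zmalloc("intr_vec", nb_entries * sizeof(**vec), 0);
        return *vec == NULL ? -ENOMEM : 0;
    }

    static void
    intr_vec_free(uint32_t **vec)
    {
        /* ... so it must be paired with rte_free(), never with free(). */
        rte_free(*vec);
        *vec = NULL;
    }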

Fixes: 02a7b55657 ("net/hns3: support Rx interrupt")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-13 13:59:13 +02:00
Ivan Malov
bf38764acc net/sfc: report user flag on EF100 native datapath
Detect the flag in Rx prefix and pass it to users.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2021-10-13 00:47:49 +02:00
Ivan Malov
cdea571bec common/sfc_efx/base: add flag to use Rx prefix user flag
Add an RxQ flag to request support for user flag field of Rx
prefix. The feature is supported only on EF100 and EF10 ESSB.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2021-10-13 00:47:49 +02:00
Ivan Malov
a9cc128cb9 net/sfc: support flow mark delivery on EF100 native datapath
MAE counter engine gets generation counts by virtue of the mark,
so the code to extract the field is already in place, but flow
action MARK doesn't benefit from it. Support this use case, too.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2021-10-13 00:47:49 +02:00
Ivan Malov
9b14dc7461 net/sfc: support API to negotiate delivery of Rx metadata
Initial support for the method. Later patches will extend it to
make FLAG and MARK delivery available on EF100 native datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2021-10-13 00:47:49 +02:00
Ivan Malov
f6d8a6d3fa ethdev: negotiate delivery of packet metadata from HW to PMD
Provide an API to let the application control the NIC's ability
to deliver specific kinds of per-packet metadata to the PMD.

Checks for the NIC's ability to set these kinds of metadata
in the first place (support for the flow actions) belong in the
flow API responsibility domain (the flow validate mechanism).
This topic is out of scope of the new API in question.

The PMD's ability to deliver received metadata to the user
by virtue of mbuf fields should be covered by mbuf library.
It is also out of scope of the new API in question.
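
A hedged usage sketch: ask the PMD to deliver FLAG and MARK early in initialisation (the chosen feature set and the handling of -ENOTSUP are illustrative):

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    negotiate_rx_metadata(uint16_t port_id)
    {
        uint64_t features = RTE_ETH_RX_METADATA_USER_FLAG |
                            RTE_ETH_RX_METADATA_USER_MARK;
        int ret = rte_eth_rx_metadata_negotiate(port_id, &features);

        if (ret == -ENOTSUP)
            return 0; /* PMD predates the API: legacy behaviour applies */
        if (ret != 0)
            return ret;

        /* 'features' now holds the subset the PMD agreed to deliver. */
        return 0;
    }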

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
2021-10-13 00:47:42 +02:00
Ajit Khaparde
239695f754 net/bnxt: enhance RSS action support
Enhance support for RSS action in the non-TruFlow path.
This will allow the user or application to update the RSS settings
using the rte_flow API.
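
For reference, a hedged sketch of such an rte_flow RSS update (the queues, hash types and pattern are made-up values):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static struct rte_flow *
    rss_ipv4_over_two_queues(uint16_t port_id, struct rte_flow_error *err)
    {
        static const struct rte_flow_attr attr = { .ingress = 1 };
        static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        static const uint16_t queues[] = { 0, 1 };
        static const struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
            .types = ETH_RSS_IP,
            .queue_num = 2,
            .queue = queues,
        };
        static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }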

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
2021-10-12 22:36:10 +02:00
Ajit Khaparde
6132d35512 net/bnxt: fix Rx queue state on start
Fix Rx queue state on device start.
The state of Rx queues could be incorrect in some cases
because instead of updating the state for all the Rx queues,
we are updating it for queues in a VNIC.

Fixes: 0105ea1296 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
2021-10-12 22:36:03 +02:00
Ajit Khaparde
657c2a7f1d net/bnxt: create aggregation rings when needed
Aggregation rings are needed when the PMD needs to support jumbo frames or
LRO. Currently we create the aggregation rings whether jumbo frames or
LRO have been enabled or not. This causes unnecessary allocation of
mbufs, requiring a larger mbuf pool which is not used at all.

This patch modifies the code to create aggregation rings only when
needed.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-12 22:35:13 +02:00
Tal Shnaiderman
c8834a3663 net/mlx5: support keeping CRC on Windows
Support the keep-CRC offload by checking
the relevant FW capability (scatter_fcs) for NIC support.

Supported offload:

DEV_RX_OFFLOAD_KEEP_CRC
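
On the application side the offload is requested as usual; a hedged sketch (queue counts and the rest of the configuration are omitted assumptions):

    #include <rte_ethdev.h>

    static int
    configure_with_keep_crc(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_dev_info info;
        struct rte_eth_conf conf = {0};
        int ret = rte_eth_dev_info_get(port_id, &info);

        if (ret != 0)
            return ret;
        /* Only request KEEP_CRC if the PMD advertises the capability. */
        if (info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
            conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }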

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:39 +02:00
Tal Shnaiderman
58a95badbd common/mlx5: read FCS scattering capability from DevX
mlx5 in Windows needs the hca capability scatter_fcs
to query the NIC support for the CRC keeping offload.

Added the capability as part of the capabilities
queried by the PMD using DevX.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:38 +02:00
Tal Shnaiderman
6061cc4148 net/mlx5: support VLAN stripping offload on Windows
Support the VLAN stripping offload by checking
the relevant FW capability (vlan_cap) for NIC support.

Supported offload:

DEV_RX_OFFLOAD_VLAN_STRIP

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:38 +02:00
Tal Shnaiderman
3440836d68 common/mlx5: read VLAN capability from DevX
mlx5 in Windows needs the hca capability vlan_cap
to query the NIC for VLAN stripping support.

Added the capability as part of the capabilities
queried by the PMD using DevX.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:37 +02:00
Tal Shnaiderman
738da9a867 net/mlx5: support TSO offload on Windows
Support the TSO offload by checking
the relevant FW capability for NIC support.

Supported offloads:

DEV_TX_OFFLOAD_TCP_TSO
DEV_TX_OFFLOAD_VXLAN_TNL_TSO
DEV_TX_OFFLOAD_GRE_TNL_TSO
DEV_TX_OFFLOAD_GENEVE_TNL_TSO

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:37 +02:00
Tal Shnaiderman
d338df9969 common/mlx5: read TSO capability from DevX
mlx5 in Windows needs the hca capability max_lso_cap
to query the NIC for TSO offloading support.

Added the capability as part of the capabilities
queried by the PMD using DevX.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:36 +02:00
Tal Shnaiderman
6a86ee2e6d net/mlx5: query tunneling support on Windows
Query the tunneling support on the NIC.

Save the offload values in a config parameter.
This is needed for the following TSO support:

DEV_TX_OFFLOAD_VXLAN_TNL_TSO
DEV_TX_OFFLOAD_GRE_TNL_TSO
DEV_TX_OFFLOAD_GENEVE_TNL_TSO

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:36 +02:00
Tal Shnaiderman
cf9b3c1bbc common/mlx5: read tunneling capabilities from DevX
mlx5 in Windows needs the tunneling hca capabilities
to query the NIC for Inner TSO offloading support.

Added the capability as part of the capabilities
queried by the PMD using DevX.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:35 +02:00
Tal Shnaiderman
c1a320bf89 net/mlx5: fix tunneling support query
Currently, the PMD decides whether the tunneling offload
can enable VXLAN/GRE/GENEVE tunneled TSO support by checking
config->tunnel_en (a single bit) and config->tso.

This is incorrect; the right way is to check the following
flags returned by the mlx5dv_query_device function:

MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_VXLAN - if supported, the offload
DEV_TX_OFFLOAD_VXLAN_TNL_TSO can be enabled.
MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GRE - if supported, the offload
DEV_TX_OFFLOAD_GRE_TNL_TSO can be enabled.
MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GENEVE - if supported, the offload
DEV_TX_OFFLOAD_GENEVE_TNL_TSO can be enabled.

The fix enables the offloads according to the correct
flags returned by the kernel.
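
A hedged sketch of the check, using the flags listed above; the mlx5dv_context field and comp-mask names are assumptions from the rdma-core headers, and the mapping to DPDK offload bits is illustrative:

    #include <infiniband/mlx5dv.h>
    #include <rte_ethdev.h>

    static uint64_t
    tunneled_tso_offloads(struct ibv_context *ctx)
    {
        /* Assumed names: tunnel_offloads_caps, MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS. */
        struct mlx5dv_context dv = { .comp_mask = MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS };
        uint64_t offloads = 0;

        if (mlx5dv_query_device(ctx, &dv) != 0)
            return 0;
        if (dv.tunnel_offloads_caps & MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_VXLAN)
            offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
        if (dv.tunnel_offloads_caps & MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GRE)
            offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
        if (dv.tunnel_offloads_caps & MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GENEVE)
            offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
        return offloads;
    }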

Fixes: dbccb4cddc ("net/mlx5: convert to new Tx offloads API")
Cc: stable@dpdk.org

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:34 +02:00
Tal Shnaiderman
d47fe9dabc net/mlx5: query software parsing support on Windows
Query the software parsing support on the NIC.

Save the offload values in a config parameter.
This is needed for the outer IPv4 checksum and
IP and UDP tunneled packet TSO support.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:34 +02:00
Tal Shnaiderman
643e4db076 common/mlx5: read software parsing capabilities from DevX
mlx5 in Windows needs the software parsing hca capabilities
to query the NIC for TSO and Checksum offloading support.

Added the capability as part of the capabilities
queried by the PMD using DevX.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
2021-10-12 15:29:33 +02:00