The data-path code does not honor 'rxq_cqe_pad_en' and always uses the
padded CQE whenever the system cache-line size is 128B.
This makes the argument redundant.
Remove it.
Fixes: bc91e8db12 ("net/mlx5: add 128B padding of Rx completion entry")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add support for parsing a 24B custom L2 header. Add devargs support to
configure the PKIND, and remove the restriction so that custom headers
are also supported on non-SDP interfaces.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The command here does not create a queue region, but only sets the
lookup table, so the description in the doc is not accurate.
Fixes: feaae285b3 ("net/i40e: support hash configuration in RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add NEON vectorized path selection logic. The default setting comes from
the 'vectorized' devarg; each criterion is then checked, as sketched below.
The packed ring vectorized NEON path requires that:
- NEON is supported by both the compiler and the host
- the VERSION_1 and IN_ORDER features are negotiated
- the mergeable buffers feature is not negotiated
- LRO offloading is disabled
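A minimal sketch of this check, using hypothetical capability-flag names
rather than the virtio PMD's actual internals:

    /* Hypothetical sketch of the packed ring NEON path selection; the
     * struct and field names are illustrative, not the driver's code. */
    #include <stdbool.h>

    struct vq_caps {
        bool neon_ok;        /* NEON supported by compiler and host */
        bool has_version_1;  /* VERSION_1 negotiated */
        bool has_in_order;   /* IN_ORDER negotiated */
        bool has_mrg_rxbuf;  /* mergeable buffers negotiated */
        bool lro_enabled;    /* LRO offload requested */
    };

    static bool
    can_use_packed_neon(const struct vq_caps *c)
    {
        return c->neon_ok &&
               c->has_version_1 && c->has_in_order &&
               !c->has_mrg_rxbuf && !c->lro_enabled;
    }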
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
As announced in the deprecation notice during the 20.11 release, remove
support for NetXtreme devices belonging to the BCM573xx and BCM5740x
families. Specifically, support for the following Broadcom PCI device IDs
has been removed: 0x16c8, 0x16c9, 0x16ca, 0x16ce, 0x16cf, 0x16df, 0x16d0,
0x16d1, 0x16d2, 0x16d4, 0x16d5, 0x16e7, 0x16e8, 0x16e9.
The deprecation notice has been removed and the release notes for 21.02
have been updated accordingly.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
With pkg-config support available in the musdk library
(from the musdk-release-SDK-10.3.5.0-PR2 version),
the meson option 'lib_musdk_dir' can be removed.
The PKG_CONFIG_PATH environment variable should be set appropriately
to use the musdk library.
The docs are updated with the new musdk version and meson instructions.
Signed-off-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The UNMAINTAINED flag will be removed in a future patch.
Signed-off-by: Andrew Boyer <aboyer@pensando.io>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add VLAN support to the AMD XGBE driver.
The following APIs are added for axgbe (an application-side usage sketch
follows the list):
- axgbe_enable_rx_vlan_stripping: enable VLAN header stripping
- axgbe_disable_rx_vlan_stripping: disable VLAN header stripping
- axgbe_enable_rx_vlan_filtering: enable VLAN filter mode
- axgbe_disable_rx_vlan_filtering: disable VLAN filter mode
- axgbe_update_vlan_hash_table: CRC calculation and hash table update
  based on VLAN values after the filter is enabled
- axgbe_vlan_filter_set: set an active VLAN out of the maximum 4K values
  before doing the hash update for it
- axgbe_vlan_tpid_set: set the default TPID values
- axgbe_vlan_offload_set: a top-level function that calls strip/filter etc.
  based on the mask values
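These callbacks are reached through the generic ethdev VLAN offload API;
a minimal application-side sketch (port id, VLAN id and mask values are
placeholders):

    #include <rte_ethdev.h>

    static int
    enable_vlan_offloads(uint16_t port_id)
    {
        int ret;

        /* Enable VLAN stripping and filtering; the PMD dispatches this
         * through its vlan_offload_set callback. */
        ret = rte_eth_dev_set_vlan_offload(port_id,
                ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
        if (ret != 0)
            return ret;

        /* Add VLAN 100 (placeholder) to the filter table. */
        return rte_eth_dev_vlan_filter(port_id, 100, 1);
    }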
Signed-off-by: Girish Nandibasappa <girish.nandibasappa@amd.com>
Acked-by: Somalapuram Amaranath <asomalap@amd.com>
The PMD can support vector mode when jumbo frames are enabled, as long as
the MTU is not large enough to require scattered Rx (which also depends on
the mbuf size).
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In DPDK 20.11 the following offload features were added:
* Buffer Split
* Sampling
* Tunnel offload
* 2-port hairpin
* RSS shared action
* Age shared action
Update the relevant tables with OFED/rdma-core/NIC versions.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
The mlx5 PMD supports various Rx burst functions.
Each function is enabled differently and supports different features.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Some hns3 NIC hardware features are supported in fact but are not listed
in the documentation, so add them. Besides, the flow director feature is
not supported, so it needs to be deleted.
Fixes: fa29fe45a7 ("net/hns3: support queue start and stop")
Fixes: 521ab3e933 ("net/hns3: add simple Rx path")
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Fixes: 936eda25e8 ("net/hns3: support dump register")
Fixes: 53b9f2b9a5 ("doc: update feature list in hns3 guide")
Cc: stable@dpdk.org
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently TRUFLOW is supported only on Whitney+ and Stingray devices.
Update the PMD doc with this info.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Remove the list of supported OSes from the PMD-specific doc.
Documenting an unsupported OS version makes more sense in
PMD-specific docs.
The platforms tested with this device are documented in the release notes anyway.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
AF_XDP will not work on 32-bit kernels before version 5.4.
Document this restriction in the driver guide.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
The changeset that introduced common flow API thread safety
in fact introduced double locking in this particular PMD, as
the RTE flow API implementation in the PMD has been thread-safe
since day zero. State this by setting the corresponding
device flag to skip the locking imposed by the generic RTE flow API.
Fixes: 80d1a9aff7 ("ethdev: make flow API thread safe")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The words blacklist and whitelist are avoided in text
about MAC filtering or kernel module.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Replace the -w / --pci-whitelist options with -a / --allow,
and --pci-blacklist with --block.
The -b short option remains unchanged.
Allow the old options for now, but print a nag
warning since the old options are deprecated.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The hyperlink in the IGC documentation showed the whole link in italics
and was not clickable. This is now fixed to have a clickable label.
Fixes: 66fde1b943 ("net/igc: add skeleton")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
CQE compression saves PCI bandwidth and improves performance by
compressing several CQEs together into a miniCQE.
But the miniCQE size is only 8 bytes, and this limits the ability
to keep the compression session alive under various traffic patterns.
The current miniCQE format only keeps the compression session alive
in case of uniform traffic with the Hash RSS as the only difference.
There are requests to keep the compression session in case of tagged
traffic by RTE Flow Mark Id and mixed UDP/TCP and IPv4/IPv6 traffic.
Add 2 new miniCQE formats in order to achieve the best performance
for these traffic patterns: Flow Tag and Packet Header miniCQEs.
The existing rxq_cqe_comp_en devarg is modified to specify the
desired miniCQE format. Specifying 2 selects Flow Tag format
for better compression rate in case of RTE Flow Mark traffic.
Specifying 3 selects Checksum format (existing format for MPRQ).
Specifying 4 selects L3/L4 Header format for better compression
rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
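A hedged illustration of selecting a miniCQE format through this devarg at
EAL init time (the PCI address is a placeholder):

    #include <rte_eal.h>
    #include <rte_common.h>

    int
    main(void)
    {
        /* rxq_cqe_comp_en=2 requests the Flow Tag miniCQE format;
         * 3 selects Checksum and 4 selects L3/L4 Header. */
        char arg0[] = "app";
        char arg1[] = "-a";
        char arg2[] = "0000:03:00.0,rxq_cqe_comp_en=2";
        char *argv[] = { arg0, arg1, arg2 };

        return rte_eal_init(RTE_DIM(argv), argv) < 0 ? -1 : 0;
    }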
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When receiving packets, netvsc puts data in a buffer mapped through UIO.
Depending on the packet size, netvsc may attach the buffer as an external
mbuf. This is not a problem if the mbuf is consumed in the application,
and the application can correctly read data out of an external mbuf.
However, there are two problems with data in an external mbuf.
1. Due to a limitation of the kernel UIO implementation, the physical
address of this external buffer is not exposed to user-mode. If
this mbuf is passed to another driver, the other driver is unable to
map this buffer to an iova.
2. Some DPDK applications are not aware of external mbufs and may
misbehave when they receive an mbuf with an external buffer attached.
Introduce a driver parameter "rx_extmbuf_enable" to control whether netvsc
should use external mbufs for receiving packets. The default value is 0
(netvsc does not use external mbufs; it always allocates an mbuf and copies
data into it). A non-zero value tells netvsc to attach external buffers
to mbufs on receiving packets, thus avoiding memory copies.
Signed-off-by: Long Li <longli@microsoft.com>
The values for Rx and Tx copy break should be tunable rather
than hard coded constants.
The rx_copybreak value sets the threshold where the driver uses an
external mbuf to avoid having to copy data. Setting it to 0
causes the driver to always create an external mbuf. Setting
a value greater than the MTU prevents it from ever making
an external mbuf, so data is always copied. The default value is 256 (bytes).
Likewise the tx_copybreak sets the threshold where the driver
aggregates multiple small packets into one request. If tx_copybreak
is 0 then each packet goes as a VMBus request (no copying).
If tx_copybreak is set larger than the MTU, then all packets smaller
than the chunk size of the VMBus send buffer will be copied; larger
packets always have to go as a single direct request. The default
value is 512 (bytes).
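A minimal sketch of these thresholds; the helper names are hypothetical
and do not mirror the driver's internals:

    #include <stdbool.h>
    #include <stdint.h>

    /* Rx: packets at or above rx_copybreak are attached as external
     * mbufs; smaller ones are copied, so 0 means "always attach" and a
     * value above the MTU means "always copy". */
    static bool
    rx_attach_extbuf(uint32_t pkt_len, uint32_t rx_copybreak)
    {
        return pkt_len >= rx_copybreak;
    }

    /* Tx: packets below tx_copybreak are copied (aggregated) into the
     * VMBus send buffer, subject to its chunk size; larger packets go
     * out as single direct requests. */
    static bool
    tx_copy_to_sendbuf(uint32_t pkt_len, uint32_t tx_copybreak)
    {
        return pkt_len < tx_copybreak;
    }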
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Long Li <longli@microsoft.com>
The ARMv8 platform support was tested and works fine with the ENA PMD.
It can be used on the AWS a1.* and m6g.* instances.
ARMv8 has been supported in ENA since at least v19.11, when the VFIO DPDK
driver was fixed to work with 32-bit applications compiled for arm.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The ID 0xEC21 is not associated with the LLQ feature of the device, so it
would be misleading for the user. Because of that, the current
identifier is more precise.
Together with the code update, the documentation was changed to reflect
the current changes.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Enable metadata extraction for flexible descriptors in AVF. This allows a
network function to get metadata directly without additional parsing,
which reduces the CPU cost for VFs. The metadata extraction covers the
VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF can
negotiate the flexible descriptor capability with the PF and
correspondingly configure the specific offload on its Rx queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Instead of FDIR filters, the RTE flow API should be used.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of the EtherType filter, the RTE flow API should be used.
Move the corresponding definitions to the ethdev internal driver API
since they are used by drivers internally.
Preserve RTE_ETH_FILTER_ETHERTYPE for the same reason.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
To support multi-threaded flow insertion, this patch removes the shared
data lock since all resources should support concurrent protection.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The fields ``has_vlan`` and ``has_more_vlan`` were added to rte_flow by
patch [1].
Using these fields, the application can match all the VLAN options with a
single flow: any, VLAN only and non-VLAN only.
Add support for these fields.
In addition, add support for matching QinQ packets.
VLAN/QinQ limitations are listed in the driver documentation.
[1] https://patches.dpdk.org/patch/80965/
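A hedged sketch of matching "VLAN only" traffic with the new field (masks
and values are illustrative):

    #include <rte_flow.h>

    /* Match only packets carrying a VLAN header, regardless of other
     * Ethernet fields; a similar pattern with has_vlan = 0 in the spec
     * matches non-VLAN traffic only. */
    static const struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
    static const struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };

    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH,
          .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_VLAN },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };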
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Hairpin between two ports will be supported by mlx5 PMD.
The supported scenarios and limitations are listed in "mlx5.rst".
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The PMD supports thread-safe flow operations. Set the
RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE dev_flag to indicate this
to the application, so that the rte_flow API functions can avoid taking
their own mutex for safe multi-threaded flow handling.
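The generic rte_flow layer consults this flag internally; as a hedged
sketch (assuming the flag is visible through rte_eth_dev_info's dev_flags),
an application could also query it before deciding whether to keep its own
lock around flow calls:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static bool
    flow_ops_thread_safe(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return false;

        /* Assumption: dev_flags is exposed via rte_eth_dev_info; when
         * the bit is set, no extra application-level flow lock is
         * needed. */
        return (*info.dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE) != 0;
    }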
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
VXLAN decap offload can happen in stages. The offload request may
not come as a single flow request; rather, it may come as two flow offload
requests, F1 & F2. This patch adds support for this two-stage
offload design. The match criteria for F1 are O_DMAC, O_SMAC,
O_DST_IP, O_UDP_DPORT and the actions are COUNT, MARK, JUMP. The match
criteria for F2 are O_SRC_IP, O_DST_IP, VNI and the inner header fields.
F1 and F2 flow offload requests can come in any order. If the F2 flow
offload request comes first, F2 cannot be offloaded as there is
no O_DMAC information in F2. In this case, F2 is deferred until the
F1 flow offload request arrives. When the F1 flow offload request is
received, it carries the O_DMAC information. Using F1's O_DMAC, the driver
creates an L2 context entry in the hardware as part of offloading F1.
F2 then uses F1's O_DMAC to get the L2 context id associated with
this O_DMAC, along with the other flow fields that were cached when F2
was deferred. F2 requests that arrive after F1 is offloaded are
programmed directly and not cached.
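A hedged rte_flow sketch of the first-stage (F1) rule described above;
MAC/IP addresses, mark id and group numbers are placeholders:

    #include <rte_flow.h>
    #include <rte_byteorder.h>
    #include <rte_ip.h>

    static struct rte_flow *
    create_f1(uint16_t port_id, struct rte_flow_error *error)
    {
        const struct rte_flow_attr attr = { .ingress = 1 };

        /* F1 matches the outer headers: O_DMAC/O_SMAC, O_DST_IP and
         * the VXLAN UDP destination port. */
        const struct rte_flow_item_eth eth_spec = {
            .dst.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
            .src.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 },
        };
        const struct rte_flow_item_eth eth_mask = {
            .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
            .src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
        };
        const struct rte_flow_item_ipv4 ip_spec = {
            .hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
        };
        const struct rte_flow_item_ipv4 ip_mask = {
            .hdr.dst_addr = RTE_BE32(0xffffffff),
        };
        const struct rte_flow_item_udp udp_spec = {
            .hdr.dst_port = RTE_BE16(4789),
        };
        const struct rte_flow_item_udp udp_mask = {
            .hdr.dst_port = RTE_BE16(0xffff),
        };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH,
              .spec = &eth_spec, .mask = &eth_mask },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4,
              .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_UDP,
              .spec = &udp_spec, .mask = &udp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* F1 actions: COUNT, MARK and JUMP to the group holding F2. */
        const struct rte_flow_action_count count = { 0 };
        const struct rte_flow_action_mark mark = { .id = 0xbeef };
        const struct rte_flow_action_jump jump = { .group = 1 };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }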
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Support checking the status of Rx and Tx descriptors.
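A minimal application-side sketch using the generic ethdev descriptor
status API (port, queue and offset values are placeholders):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    poll_desc_status(uint16_t port_id)
    {
        /* Query the descriptor at offset 0 (the next one to be
         * processed) on Rx queue 0 and Tx queue 0. */
        int rx = rte_eth_rx_descriptor_status(port_id, 0, 0);
        int tx = rte_eth_tx_descriptor_status(port_id, 0, 0);

        if (rx == RTE_ETH_RX_DESC_DONE)
            printf("Rx descriptor holds a received packet\n");
        if (tx == RTE_ETH_TX_DESC_FULL)
            printf("Tx descriptor is still in use by the hardware\n");
    }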
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The buffer split feature is mentioned in the mlx5 PMD
documentation; a description of its limitations is added as well.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tunnel Offload API provides a hardware-independent, unified model
to offload tunneled traffic. Key model elements are:
- apply matches to both outer and inner packet headers
  during the entire offload procedure;
- restore the outer header of a partially offloaded packet;
- the model is implemented as a set of helper functions (a usage sketch
  follows below).
Implementation details:
* The tunnel_offload PMD parameter must be set to 1 to enable the feature.
* The application cannot use the MARK and META flow actions together with
  tunnel offload.
* Offloading the JUMP action is restricted to the tunnel steering rule only.
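A hedged sketch of the helper-function sequence an application would
follow (error handling trimmed; the tunnel type is a placeholder):

    #include <rte_flow.h>

    /* Obtain the PMD-provided actions and items for a VXLAN tunnel;
     * they are combined with the application's own pattern/actions
     * when creating the decap-set and match rules. */
    static int
    tunnel_offload_prepare(uint16_t port_id, struct rte_flow_error *error)
    {
        struct rte_flow_tunnel tunnel = {
            .type = RTE_FLOW_ITEM_TYPE_VXLAN,
        };
        struct rte_flow_action *pmd_actions;
        struct rte_flow_item *pmd_items;
        uint32_t num_actions, num_items;
        int ret;

        ret = rte_flow_tunnel_decap_set(port_id, &tunnel,
                                        &pmd_actions, &num_actions, error);
        if (ret != 0)
            return ret;

        ret = rte_flow_tunnel_match(port_id, &tunnel,
                                    &pmd_items, &num_items, error);
        if (ret != 0)
            return ret;

        /* ... create the two rules, then release the PMD objects with
         * rte_flow_tunnel_action_decap_release() and
         * rte_flow_tunnel_item_release(); restore info for partially
         * offloaded packets comes from rte_flow_get_restore_info(). */
        return 0;
    }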
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Mark "BSD nic_uio", "Usage doc", and "Perf doc" as supported
for the bnxt PMD.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>