The KNI PMD name should be "net_kni".
Fixes: 75e2bc54c018 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Users can create any number of RxQs and TxQs in DPDK. For
example, if the number of RxQs is 2 and the number of TxQs is 5,
a total of 8 file descriptors are created for a tap device:
one per RxQ, one per TxQ, and one for keepalive. An RxQ and a TxQ
with the same ID are paired via dup(2).
In this scenario, the kernel ends up with 3 RxQs into which packets
arrive but are never read, because only 2 RxQs are polled by DPDK
while the kernel has 5 queues.
This patch adds a check that DPDK has an appropriate number of
queues, to avoid unexpected packet drops.
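For illustration only (this command is an assumption, not part of the
patch), such an asymmetric configuration could be requested with testpmd:
dpdk-testpmd --vdev=net_tap0 -- --rxq=2 --txq=5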
Signed-off-by: Nobuhiro Miki <nmiki@yahoo-corp.jp>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the hardware supports IEEE 1588 PTP, the PTP capability will be set.
Currently, the vec and sve burst modes are not supported when the PTP
capability is set: for the sake of Rx/Tx performance, IEEE 1588 PTP is not
supported in the sve or vec burst mode. When enabling IEEE 1588 PTP, the
Rx/Tx burst mode should be simple or common. The Rx/Tx burst mode can be
set, for example, like this:
-a 0000:35:00.0,rx_func_hint=common,tx_func_hint=common
This patch enables the vec and sve burst modes when PTP is disabled, and
only the simple or common burst modes when PTP is enabled.
Fixes: 38b539d96eb6 ("net/hns3: support IEEE 1588 PTP")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Add changes to configure the switch header type pre_l2 for cnxk.
pre_l2 headers are custom headers placed before the Ethernet
header. Along with the switch header type, the user needs to provide
the offset within the custom header that holds the size of the
custom header, and a mask for the size within that offset.
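For example (an assumed usage sketch; only the switch_header devarg is
shown, the additional devarg that carries the size offset and mask is
omitted here):
-a 0002:02:00.0,switch_header="pre_l2"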
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Satheesh Paul <psatheesh@marvell.com>
Provide ethdev callback support for rx_queue_count,
rx_descriptor_status and tx_descriptor_status.
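For reference, a minimal sketch (not from the patch) of querying these
through the generic ethdev API:

  #include <stdio.h>
  #include <rte_ethdev.h>

  static void query_queue_state(uint16_t port, uint16_t queue)
  {
          /* Number of descriptors currently in use on the Rx queue. */
          int used = rte_eth_rx_queue_count(port, queue);
          /* Status of the descriptor 32 entries ahead of the next one
           * to be processed, e.g. RTE_ETH_RX_DESC_AVAIL/DONE.
           */
          int rx_status = rte_eth_rx_descriptor_status(port, queue, 32);
          int tx_status = rte_eth_tx_descriptor_status(port, queue, 32);

          printf("used=%d rx=%d tx=%d\n", used, rx_status, tx_status);
  }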
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Recent VIC models can parse GENEVE, including options, and inner
packet headers. Enable GENEVE header and option flow items. Currently,
only the first option that follows the GENEVE header can be matched,
and the GENEVE header item must specify option length.
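A rule of the following shape could then be offloaded, for example (an
assumed testpmd rule; the class/type/length values are arbitrary):
flow create 0 ingress pattern eth / ipv4 / udp / \
geneve optlen is 1 / geneve-opt class is 5 type is 1 length is 1 / \
end actions queue index 1 / end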
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
This patch adds support for level 2 QoS shaping.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Add the missing capability for outer UDP Tx checksum.
Also fix the feature list in ice_dcf.ini.
Fixes: bf89db4409bb ("net/ice: complete device info get in DCF")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
This patch adds support for configuring the channel mask which will
be used by rte_flow when adding flow rules on SDP interfaces.
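For example (an assumed usage sketch; the channel and mask values are
arbitrary):
-a 0002:02:00.0,sdp_channel_mask=0x700/0xf00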
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
As per the deprecation notice, in view of enabling a unified driver
for octeontx2 (cn9k)/octeontx3 (cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both the octeontx2 (cn9k) and octeontx3 (cn10k) SoCs.
This patch does the following:
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change references to OCTEONTX2 to OCTEON 9. Old release notes and
the kernel-related documentation are not updated as part of this change.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
The docs contain references to pmd, but the proper usage is PMD.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Remove the use of "driver" following PMD, as it is redundant.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The ENAv2 device requires write combining support, which isn't provided by
the upstream vfio-pci. The amzn-driver repository provided a non-upstream
patch to enable this feature, and it was linked directly from the ENA PMD
guide.
To avoid linking to a custom kernel patch, the user is now guided to the AWS
ENA PMD documentation, which describes the vfio-pci and ENAv2 issue in more
depth, along with possible workarounds for resolving it.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update the software dependency link because of a website shutdown.
Netcope Technologies was recently renamed to Magmio and no longer
provides packages and support for the FPGA cards and the NDK platform.
However, the Liberouter@CESNET project continues to maintain the
Network Development Kit and also cooperates on the development of
high-speed network FPGA cards.
Signed-off-by: Martin Spinler <spinler@cesnet.cz>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Users are confused by a feature marked with "P"; add the necessary
explanation for it.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
These edits affect the outermost header in the current processing state
of the packet, which might have been decapsulated by a prior DECAP action.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Currently the vector and simple xmit algorithms don't support multi-segment
mbufs, so if the Tx offload MBUF_FAST_FREE is enabled, the driver can invoke
rte_mempool_put_bulk() to free Tx mbufs in this situation.
In the testpmd single-core MAC forwarding scenario, performance is
improved by 8% at 64B on the Kunpeng 920 platform.
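A simplified sketch of the idea (illustrative, not the actual driver code):
with MBUF_FAST_FREE all mbufs on a queue come from one mempool, are not
segmented and have a reference count of 1, so completed Tx mbufs can be
returned in a single bulk call:

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  static void tx_fast_free(struct rte_mbuf **mbufs, uint16_t nb)
  {
          if (nb == 0)
                  return;
          /* All mbufs are guaranteed to belong to the same mempool. */
          rte_mempool_put_bulk(mbufs[0]->pool, (void **)mbufs, nb);
  }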
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Remove the szedata2 device driver as the platform is no longer
supported.
Signed-off-by: Martin Spinler <spinler@cesnet.cz>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The ConnectX NIC series hardware provides only 63-bit
wide timestamps. A description of the imposed limitation
is added to the documentation.
At the moment there are no known affected applications
or bug reports; this is just a declaration of the
limitation.
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
These actions map to the MAE action DECR_IP_TTL. It affects
the outermost header in the current processing state of
the packet, which might have been decapsulated by a prior
DECAP action. It also updates the IPv4 checksum accordingly.
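For example (an assumed testpmd rule):
flow create 0 transfer pattern eth / ipv4 / end \
actions dec_ttl / port_id id 1 / end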
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Delay drop is a common feature managed on a per-device basis,
and the kernel driver is the one responsible for its initialization
and rearming.
By default, the timeout value is set to activate the delay drop when
the driver is loaded.
A private flag "dropless_rq" is used to control the rearming: only
when it is on will the rearming be handled once a timeout event is
received. Otherwise, the delay drop will be deactivated after the first
timeout occurs and none of the Rx queues will have this feature.
The PMD queries this flag and warns the application when
some queues are created with delay drop enabled but the flag is off.
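For example, the flag can be checked and turned on from the kernel side
with ethtool (the netdev name below is a placeholder):
ethtool --show-priv-flags <netdev>
ethtool --set-priv-flags <netdev> dropless_rq on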
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
For Ethernet RQs, if all receive descriptors are exhausted, the
packets being received will be dropped. This behavior prevents slow
or malicious software entities on the host from affecting the network.
For hairpin cases, however, even though no software is involved in
forwarding packets from the Rx to the Tx side, a hiccup in the hardware
or back pressure from the Tx side may still cause the descriptors to be
exhausted. In certain scenarios it may be preferred to configure the
device to avoid such packet drops, assuming the posting of descriptors
will resume shortly.
To support this, a new devarg "delay_drop" is introduced. By default,
delay drop is enabled for hairpin Rx queues and disabled for
standard Rx queues. The value is used as a bit mask:
- bit 0: enable for standard Rx queues
- bit 1: enable for hairpin Rx queues
This attribute is applied to all Rx queues of a device.
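For example (illustrative; the PCI address is arbitrary), enabling the
feature for both standard and hairpin Rx queues sets both bits:
-a 0000:03:00.0,delay_drop=3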
The "rq_delay_drop" capability in the HCA_CAP is checked before
creating any queue. If the hardware capabilities do not support
this delay drop, all the Rx queues will still be created without
this attribute, and the devarg setting will be ignored even if it
is specified explicitly. A warning log is used to notify the
application when this occurs.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Set the 'present' parameter to 0 by default. It is configured by hardware;
users can set it to 1 for manual configuration.
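For example (an assumed usage sketch):
-a 0000:01:00.0,present=1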
Fixes: f611dada1af8 ("net/txgbe: update link setup process of backplane NICs")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
This patch introduces shared RxQ. All shared Rx queues with the same group
and queue ID share the same rxq_ctrl. Since rxq_ctrl and rxq_data are
shared, all queues from the different member ports share the same WQ and
CQ: essentially one Rx WQ, into which the mbufs are filled.
The shared rxq_data is set as the RxQ object in the device Rx queues of
all member ports and is used for receiving packets. Polling a queue of any
member port returns packets of any member; mbuf->port is used to identify
the source port.
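A rough sketch of how an application could opt a queue into a share group
through the generic ethdev API (illustrative only; the port ID, group ID,
descriptor count and mempool are assumptions):

  #include <rte_ethdev.h>
  #include <rte_lcore.h>

  static int setup_shared_rxq(uint16_t port_id, struct rte_mempool *mp)
  {
          struct rte_eth_dev_info info;
          struct rte_eth_rxconf rxconf;
          int ret;

          ret = rte_eth_dev_info_get(port_id, &info);
          if (ret != 0)
                  return ret;
          rxconf = info.default_rxconf;
          if (info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) {
                  rxconf.share_group = 1; /* non-zero group ID enables sharing */
                  rxconf.share_qid = 0;   /* shared queue index within the group */
          }
          return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                        &rxconf, mp);
  }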
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and to
offload flows matching the header elements.
The flex item API adds the FLEX flow item to RTE flows.
Fixes: dc4d860e8a89 ("ethdev: introduce configurable flexible item")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The GTP, GTP-U and GTP-C header fields can be matched; however, the NIC does
not support GTP tunneling, so no items after the GTP header can be specified.
If a GTP-U or GTP-C item is specified without a preceding UDP item, the
UDP destination port is implicitly matched. For GTP, the destination UDP
port must be specified, but its value is not enforced.
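For example (an assumed testpmd rule), GTP-U traffic can be steered without
an explicit UDP item:
flow create 0 ingress pattern eth / ipv4 / gtpu / \
end actions queue index 0 / end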
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Protocol-agnostic flow offloading in the Flow Director is enabled by this
patch, based on the Parser Library and using the existing rte_flow raw API.
Note that the raw flow requires:
1. byte string of raw target packet bits.
2. byte string of mask of target packet.
Here is an example:
FDIR matching IPv4 dst addr 1.2.3.4 and redirecting to queue 3:
flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end
Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
proto) is optional in our case. To avoid redundancy, we simply omit the
mask of 0x0800 (with 0xFFFF) in the mask byte string example. The '0x'
prefix for the spec and mask byte (hex) strings is also omitted here.
Also update the ice feature list with rte_flow item raw.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT.
The former should be used instead of the ambiguous PORT_ID.
The latter sends traffic to the entity represented by
the given ethdev (network port or VF).
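For example (assumed testpmd rules):
flow create 0 transfer pattern eth / end \
actions port_representor port_id 0 / end
flow create 0 transfer pattern eth / end \
actions represented_port ethdev_port_id 1 / end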
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for item REPRESENTED_PORT to match on traffic entering
the embedded switch from the entity represented by the given
ethdev (network port or VF).
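For example (an assumed testpmd rule):
flow create 0 transfer pattern represented_port ethdev_port_id is 1 / eth / end \
actions drop / end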
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
A meter policy with an RSS/queue action is not supported
when dv_xmeta_en is enabled.
When dv_xmeta_en is enabled, legacy flow creation splits
the flow into two flows
(one set_tag with jump flow and one RSS/queue action flow).
For a meter policy used as a termination table,
the flow cannot be split, so it
cannot be supported when dv_xmeta_en is enabled.
Fixes: 51ec04dc7bcf ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add MAC addresses to filter incoming packets, support setting
multicast addresses to filter, and support setting the unicast table array.
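A short sketch of how an application drives these filters through the
generic ethdev API (illustrative; the addresses are arbitrary):

  #include <rte_ethdev.h>
  #include <rte_ether.h>

  static int add_filters(uint16_t port_id)
  {
          struct rte_ether_addr uc = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x01}};
          struct rte_ether_addr mc[] = {{{0x01, 0x00, 0x5e, 0x00, 0x00, 0x01}}};
          int ret;

          /* Add an extra unicast MAC address (default pool 0). */
          ret = rte_eth_dev_mac_addr_add(port_id, &uc, 0);
          if (ret != 0)
                  return ret;
          /* Replace the list of multicast addresses to filter on. */
          return rte_eth_dev_set_mc_addr_list(port_id, mc, 1);
  }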
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>