This patch adds support for level 2 QoS shaping.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Add the missing capability for outer UDP Tx checksum.
Also fix the feature list in ice_dcf.ini.
Fixes: bf89db4409 ("net/ice: complete device info get in DCF")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
This patch adds support for configuring the channel mask which will
be used by rte_flow when adding flow rules on SDP interfaces.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
As per the deprecation notice, in view of enabling a unified driver
for octeontx2 (cn9k) / octeontx3 (cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both octeontx2 (cn9k) and octeontx3 (cn10k) SoCs.
This patch does the following:
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change references to OCTEONTX2 to OCTEON 9. Old release notes and
the kernel-related documentation are not updated for this change.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
The docs contain references to pmd, but the proper usage is PMD.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Remove the use of 'driver' following PMD, as it is unnecessary.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The ENAv2 device requires write combining support, which isn't provided by
the upstream vfio-pci. The amzn-driver repository provided a non-upstream
patch to enable this feature, and it was linked directly from the ENA PMD
guide.
To avoid linking a custom kernel patch, the user is now directed to the AWS
ENA PMD documentation, which describes the vfio-pci and ENAv2 issue in more
depth along with possible workarounds on how to resolve it.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update the software dependency link because of a website shutdown.
Netcope Technologies was recently renamed to Magmio and no longer
provides packages and support for the FPGA cards and NDK platform.
However, the Liberouter@CESNET project continues to maintain the
Network Development Kit and also cooperates on the development of
high-speed network FPGA cards.
Signed-off-by: Martin Spinler <spinler@cesnet.cz>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Users are confused by features marked with "P"; add the necessary
explanation for this.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
These edits affect the outermost header in the current processing state
of the packet, which might have been decapsulated by a prior DECAP action.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Currently, the vector and simple transmit algorithms don't support
multi-segment packets, so if the Tx offload MBUF_FAST_FREE is enabled, the
driver can invoke rte_mempool_put_bulk() to free Tx mbufs in this situation.
In the testpmd single-core MAC forwarding scenario, performance is
improved by 8% at 64B on the Kunpeng 920 platform.
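A minimal sketch of the fast-free path described above (the helper name and
its caller context are illustrative, not the actual hns3 code):

    /* With MBUF_FAST_FREE all Tx mbufs are single-segment, come from the
     * same mempool and have a reference count of 1, so one bulk put to
     * the pool of the first mbuf returns them all.
     */
    static inline void
    txq_bulk_free_mbufs(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            if (nb_pkts == 0)
                    return;
            rte_mempool_put_bulk(pkts[0]->pool, (void **)pkts, nb_pkts);
    }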
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Remove the szedata2 device driver as the platform is no longer
supported.
Signed-off-by: Martin Spinler <spinler@cesnet.cz>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The ConnectX NIC series hardware provides only 63-bit
wide timestamps. A description of the imposed limitation
is added to the documentation.
At the moment there are no known affected applications
or bug reports; this is just a declaration of the
limitation.
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
These actions map to the MAE action DECR_IP_TTL. It affects
the outermost header in the current processing state of
the packet, which might have been decapsulated by a prior
DECAP action. It also updates the IPv4 checksum accordingly.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Delay drop is a common feature managed on a per-device basis,
and the kernel driver is responsible for its initialization and
rearming.
By default, the timeout value is set to activate delay drop when
the driver is loaded.
A private flag "dropless_rq" is used to control the rearming: only
when it is on will the rearming be handled once a timeout
event is received. Otherwise, delay drop is deactivated after the first
timeout occurs and none of the Rx queues will have this feature.
The PMD queries this flag and warns the application when
some queues are created with delay drop but the flag is off.
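For reference, the kernel private flag can typically be inspected and turned
on with ethtool (the interface name is illustrative):

    ethtool --show-priv-flags eth0
    ethtool --set-priv-flags eth0 dropless_rq on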
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
For Ethernet RQs, if all receive descriptors are exhausted, the
packets being received will be dropped. This behavior prevents
slow or malicious software entities at the host from affecting
the network. For hairpin cases, however, even if no software is
involved during the packet forwarding from the Rx to the Tx side,
some hiccup in the hardware or back pressure from the Tx side may
still cause the descriptors to be exhausted. In certain scenarios
it may be preferable to configure the device to avoid such packet
drops, assuming the posting of descriptors will resume shortly.
To support this, a new devarg "delay_drop" is introduced. By default,
delay drop is enabled for hairpin Rx queues and disabled for
standard Rx queues. The value is used as a bit mask:
- bit 0: enable for standard Rx queues
- bit 1: enable for hairpin Rx queues
This attribute is applied to all Rx queues of a device.
The "rq_delay_drop" capability in the HCA_CAP is checked before
creating any queue. If the hardware capabilities do not support
this delay drop, all the Rx queues will still be created without
this attribute, and the devarg setting will be ignored even if it
is specified explicitly. A warning log is used to notify the
application when this occurs.
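As an illustration, enabling delay drop for both standard and hairpin Rx
queues (bit mask 0x3) could look like this (the PCI address is a placeholder):

    dpdk-testpmd -a <pci_bdf>,delay_drop=3 -- -i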
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Set the 'present' parameter to 0 by default, in which case it is
configured by hardware; users can set it to 1 for manual configuration.
Fixes: f611dada1a ("net/txgbe: update link setup process of backplane NICs")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
This patch introduces shared RxQ. All shared Rx queues with the same group
and queue ID share the same rxq_ctrl. Rxq_ctrl and rxq_data are shared, and
all queues from different member ports share the same WQ and CQ, essentially
one Rx WQ; mbufs are filled into this singleton WQ.
The shared rxq_data is set into the device Rx queues of all member ports as
the RxQ object used for receiving packets. Polling a queue of any member
port returns packets of any member, and mbuf->port is used to identify the
source port.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and
offload flows to match the header elements.
The flex item API adds the FLEX flow item to rte_flow.
Fixes: dc4d860e8a ("ethdev: introduce configurable flexible item")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The GTP, GTP-U and GTP-C header fields can be matched; however, the NIC does
not support GTP tunneling, so no items after the GTP header can be specified.
If a GTP-U or GTP-C item is specified without a preceding UDP item, the
UDP destination port is implicitly matched. For GTP, the destination UDP
port must be specified, but its value is not enforced.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Protocol-agnostic flow offloading in Flow Director is enabled by this
patch based on the Parser Library, using the existing rte_flow raw API.
Note that the raw flow requires:
1. a byte string of the raw target packet bits.
2. a byte string of the mask of the target packet.
Here is an example:
FDIR matching IPv4 dst addr 1.2.3.4 and redirecting to queue 3:
flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end
Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
proto) is optional in our cases. To avoid redundancy, we simply omit the
mask of 0x0800 (with 0xFFFF) in the mask byte string example. The '0x'
prefix for the spec and mask byte (hex) strings is also omitted here.
Also update the ice feature list with rte_flow item raw.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT.
The former should be used instead of ambiguous PORT_ID.
The latter sends traffic to the entity represented by
the given ethdev (network port or VF).
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for item REPRESENTED_PORT to match on traffic entering
the embedded switch from the entity represented by the given
ethdev (network port or VF).
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
A meter policy with an RSS/queue action is not supported
when dv_xmeta_en is enabled.
When dv_xmeta_en is enabled, legacy flow creation
splits the flow into two flows
(one set_tag with jump flow and one RSS/queue action flow).
Since a meter policy acts as a termination table,
the flow cannot be split, so this case
cannot be supported when dv_xmeta_en is enabled.
Fixes: 51ec04dc7b ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add MAC addresses to filter incoming packets, and support setting
multicast addresses to filter. Also support setting the unicast table array.
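From the application side these filters are driven through the generic
ethdev API; a minimal sketch inside an initialized DPDK application (the
addresses are illustrative):

    /* Add one extra unicast MAC address to the port filter. */
    struct rte_ether_addr uc = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
    rte_eth_dev_mac_addr_add(port_id, &uc, 0);

    /* Install a multicast address filter list. */
    struct rte_ether_addr mc[] = {
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } } };
    rte_eth_dev_set_mc_addr_list(port_id, mc, RTE_DIM(mc));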
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Provide the capability to update the hash key, hash types
and RETA table on the fly (without needing to stop/start
the device). However, the key length and the number of RETA
entries are fixed to 40B and 128 entries respectively. This
is done in order to simplify the design, but may be
revisited later as the Virtio spec provides this
flexibility.
Note that only VIRTIO_NET_F_RSS support is implemented,
VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
packet RSS hash calculated by the device into mbuf.rss, is
not yet supported.
Regarding the default RSS configuration, the default Intel ixgbe key is
used as the default key, and the default RETA is a simple modulo between
the hash and the number of Rx queues.
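From the application point of view the runtime update goes through the
standard ethdev calls; a minimal sketch (the key contents, number of Rx
queues and RETA mapping are illustrative):

    uint8_t key[40] = { /* 40-byte hash key */ };
    struct rte_eth_rss_conf rss_conf = {
            .rss_key = key,
            .rss_key_len = sizeof(key),
            .rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
    };
    rte_eth_dev_rss_hash_update(port_id, &rss_conf);

    /* Redirect the 128 RETA entries to Rx queues by simple modulo. */
    struct rte_eth_rss_reta_entry64 reta[128 / RTE_RETA_GROUP_SIZE] = { 0 };
    for (unsigned int i = 0; i < 128; i++) {
            reta[i / RTE_RETA_GROUP_SIZE].mask |=
                    1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                    i % nb_rx_queues;
    }
    rte_eth_dev_rss_reta_update(port_id, reta, 128);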
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The host-based-truflow devarg is no longer used to enable the host-based
flow table management functionality (TruFlow). Instead, this feature is
now driven by a capability indicated by the firmware.
TruFlow is no longer in tech preview. Update the doc accordingly.
Fixes: da3731e2ea ("net/bnxt: check FW capability to support TRUFLOW")
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Support for runtime Rx/Tx queue setup and inner RSS was not reflected
in the feature matrix. Update the feature matrix for the bnxt PMD.
Fixes: 7ed45b1a7c ("net/bnxt: support RSS hash selection")
Fixes: 0105ea1296 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add support for inline crypto for IPsec, for ESP transport and
tunnel over IPv4 and IPv6, as well as the offload for
ESP over UDP, in conjunction with TSO for UDP and TCP flows.
Implement support for rte_security packet metadata.
Add definitions for IPsec descriptors and extend offload support
in the data and context descriptors.
Add support to the virtual channel mailbox for IPsec crypto request
operations. IPsec crypto requests receive an initial acknowledgment
from the physical function driver of receipt of the request and then an
asynchronous response with the success/failure of the request, including
any response data.
Add enhanced descriptor debugging.
Refactor the scalar Tx burst function to support integration of the offload.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
In socket direct mode, it's possible to bind any two (maybe four
in the future) PCIe devices with IDs like xxxx:xx:xx.x and
yyyy:yy:yy.y. Bonding member interfaces no longer need to have
the same PCIe domain/bus/device ID.
The kernel driver uses "system_image_guid" to identify whether devices
can be bound together or not. Sysfs "phys_switch_id" is used to get the
"system_image_guid" of each network interface.
OFED 5.4+ is required to support "phys_switch_id".
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
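For example, a check such as:

    if (m->ol_flags & PKT_RX_VLAN_STRIPPED)

still builds (with a deprecation warning) and should be changed to the
prefixed name:

    if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)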
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also update some struct names to have the 'rte_eth' prefix.
All internal components are switched to using the new names.
Syntax is fixed on lines that this patch touches.
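For example (the old names remain as compatibility macros until removal):

    DEV_RX_OFFLOAD_CHECKSUM  ->  RTE_ETH_RX_OFFLOAD_CHECKSUM
    ETH_MQ_RX_RSS            ->  RTE_ETH_MQ_RX_RSS
    ETH_LINK_FULL_DUPLEX     ->  RTE_ETH_LINK_FULL_DUPLEX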
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Jumbo offload is no longer announced as a capability, and the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag is removed.
This patch also removes the 'Jumbo frame' feature from the documentation.
Fixes: b563c14212 ("ethdev: remove jumbo offload flag")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Currently, the max waiting time for an MBX response is 500 ms, but in
some scenarios it is not enough, since it depends on the response
of the kernel-mode driver, whose response time is related to the
scheduling of the system. In this special scenario, most of the
cores are isolated, and only a few cores are used for system
scheduling. When a large number of services are started, the
scheduling of the system becomes very busy, and the reply to the
mbx message times out, which causes PMD initialization
to fail.
This patch adds a runtime config option to set the max wait time. For the
above scenario, users can adjust the waiting time to a suitable value
by themselves.
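As an illustration, the new option is passed as a devarg; the option name
and value below are assumptions for the example, see the hns3 guide for
the actual name:

    dpdk-testpmd -a <pci_bdf>,mbx_time_limit_ms=600 -- -i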
Fixes: 463e748964 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
save incoming packets. For some PMDs, when the number of representors
scales out in a switch domain, memory consumption becomes significant.
Polling all ports also leads to high cache miss rates, high latency and
low throughput.
This patch introduces the shared Rx queue. Ports in the same Rx domain and
switch domain can share an Rx queue set by specifying a non-zero share
group in the Rx queue configuration.
A shared Rx queue is identified by the share_rxq field of the Rx queue
configuration. Port A RxQ X can share an RxQ with port B RxQ Y by using
the same shared Rx queue ID.
No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; the source port is identified by mbuf->port.
The PMD is responsible for resolving the shared Rx queue from the device
and queue data.
A shared Rx queue must be polled in the same thread or core; polling the
queue via any member port is essentially the same.
Multiple share groups are supported. A PMD should support mixed
configuration by allowing multiple share groups and non-shared Rx queues
on one port.
Example grouping and polling model to reflect service priority:
Group1, 2 shared Rx queues per port: PF, rep0, rep1
Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
Core0: poll PF queue0
Core1: poll PF queue1
Core2: poll rep2 queue0
The PMD advertises the shared Rx queue capability via
RTE_ETH_DEV_CAPA_RXQ_SHARE and is responsible for shared Rx queue
consistency checks to avoid member ports' configurations contradicting
each other.
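A minimal sketch of how an application could detect and request the
capability (queue sizes, pool and the share-group field names follow the
final ethdev API and are illustrative):

    struct rte_eth_dev_info info;
    rte_eth_dev_info_get(port_id, &info);
    if (info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) {
            struct rte_eth_rxconf rxconf = info.default_rxconf;
            rxconf.share_group = 1;  /* non-zero: join share group 1 */
            rxconf.share_qid = 0;    /* shared queue ID within the group */
            rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
                                   &rxconf, mbuf_pool);
    }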
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch adds support for the rte_flow action type port_id to
enable directing packets from an input PF port to an output
port which is a VF of that input PF.
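As an illustration, such a rule can be created through the generic flow API
(port numbers are placeholders):

    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_port_id out = { .id = vf_port_id };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &out },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow *flow = rte_flow_create(pf_port_id, &attr, pattern,
                                            actions, &err);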
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch enables building the e1000 driver for Windows.
I tested using two Windows VMs on top of VMware Fusion,
creating two e1000 devices with device ID 0x10D3 (82574L),
and verified that Rx/Tx works correctly using dpdk-testpmd.exe
in rxonly and txonly modes.
Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
Tested-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tested-by: Pallavi Kadam <pallavi.kadam@intel.com>
Previously, we set the txq affinity to 0 and let firmware
perform round-robin when bonding. Firmware uses a
global counter to assign txq affinity to different
physical ports according to the remainder after division.
There are three disadvantages:
1. The global counter is shared between kernel and DPDK.
2. After restarting the PMD or port, the previous counter value
is reused, so the new affinity is unpredictable.
3. There is no way to get what affinity is set by firmware.
In this update, we create several TISs, up to the
number of bonding ports, and bind each TIS to one PF port.
Each port starts to pick up a TIS using its port
index. The upper-layer application can quickly calculate each txq's
affinity without querying.
At DPDK layer, when creating txq with 2 bonding ports, the
affinity is set like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2
Note: only applicable to the DevX API.
This affinity is subject to the HW hash.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add support for PPP over L2TPv2 over UDP protocol RSS hashing based
on the inner IP src/dst address and TCP/UDP src/dst port.
Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Added flow pattern items and header formats of L2TPv2 and PPP.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Meters are configured per flow using the rte_flow_create API.
Implement support for the meter action applied to a flow.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Implement API to validate meter policy for CNXK platform.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
To keep the flow format uniform with ice, this patch adds support for
this RSS rule:
flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end \
actions rss types ipv6-frag end queues end / end
Fixes: ef4c16fd91 ("net/i40e: refactor RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce
the capability by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' to
'RTE_ETHER_MTU'.
This additional configuration is removed for simplification.
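For example, jumbo support can now be deduced from the reported limits and
enabled purely via the configured MTU (the 9000-byte value is illustrative):

    struct rte_eth_dev_info info;
    rte_eth_dev_info_get(port_id, &info);
    if (info.max_mtu >= 9000) {
            struct rte_eth_conf conf = { .rxmode = { .mtu = 9000 } };
            rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }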
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
There is confusion about setting the max Rx packet length; this patch aims
to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related, but they work in a disconnected way; they
store the set values in different variables, which makes it hard to figure
out which one to use, and having two different methods for related
functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet frame,
while 'max_rx_pkt_len' is the size of the Ethernet frame. The difference is
the Ethernet frame overhead, and this overhead may differ from
device to device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo frames,
which adds additional confusion, and some APIs and PMDs already
disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs get the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is renamed to 'mtu', and it is always valid, independent
of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function, and the
result should be stored in '(struct rte_eth_dev)->data->mtu'. After that
point, both the application and the PMD use the MTU from this variable.
When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
the default 'RTE_ETHER_MTU' value is used.
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to enable
scattered Rx. If scattered Rx is not supported by the device, an MTU bigger
than the Rx buffer size should fail.
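A minimal sketch of the resulting application-level flow (values are
illustrative):

    /* Configure with an explicit MTU; when the application does not
     * provide one, the default RTE_ETHER_MTU is used.
     */
    struct rte_eth_conf conf = { .rxmode = { .mtu = 1500 } };
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);

    /* Later changes and reads go through the MTU API; both paths end
     * up in the same (struct rte_eth_dev)->data->mtu.
     */
    rte_eth_dev_set_mtu(port_id, 9000);
    uint16_t mtu;
    rte_eth_dev_get_mtu(port_id, &mtu);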
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The device name should be 82574L Gigabit Ethernet Controller.
The patch also removes a redundant "*".
Fixes: fc1f2750a3 ("doc: programmers guide")
Cc: stable@dpdk.org
Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Add support for item PORT_REPRESENTOR which should
be used instead of ambiguous item PORT_ID.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The action PORT_ID implementation assumes ingress only. Its semantics
suggest that support for the equivalent action PORT_REPRESENTOR be added.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The semantics of the existing support for action PORT_ID suggest
that support for the equivalent action REPRESENTED_PORT be implemented.
Helper functions keep port_id suffix since action
MLX5_FLOW_ACTION_PORT_ID is still used internally.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for action PORT_ID.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for action PORT_ID.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for items PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for item PORT_ID.
The use of item PORT_ID depends on the specified direction attribute.
Items PORT_REPRESENTOR and REPRESENTED_PORT, in turn, define traffic
direction themselves. The former matches traffic from the driver's
vNIC. The latter matches packets from either a v-port (network) or
a VF's vNIC (if the driver's port is a VF representor).
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to send matching traffic to the
entity represented by the given ethdev, at embedded switch level.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to send matching traffic to
the given ethdev (to the application), at embedded switch level.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic entering the
embedded switch from the entity represented by the given ethdev.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
Must not be combined with direction attributes.
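A minimal sketch of a transfer rule using this item to match traffic from
the represented entity and steer it to its representor via action
PORT_REPRESENTOR (port numbers are placeholders):

    struct rte_flow_attr attr = { .transfer = 1 };
    struct rte_flow_item_ethdev from = { .port_id = represented_ethdev_id };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &from },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_ethdev to = { .port_id = app_ethdev_id };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, .conf = &to },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;
    rte_flow_create(proxy_port_id, &attr, pattern, actions, &err);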
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic
entering the embedded switch from the given ethdev.
Must not be combined with direction attributes.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
GROUP is an in-house term for so-called "tunnel_match" flows.
On parsing, they are detected by virtue of PMD-internal item
MARK. It associates a given flow with its tunnel context.
Such a flow is represented by a MAE action rule which is
chained with the corresponding JUMP rule's outer rule
by virtue of matching on its recirculation ID.
GROUP flows match more narrowly than JUMP flows do and
decapsulate matching packets (full offload).
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
JUMP is an in-house term for so-called "tunnel_set" flows. On parsing,
they are identified by virtue of actions MARK (PMD-internal) and JUMP.
The action MARK associates a given flow with its tunnel context.
Such a flow is represented by a MAE outer rule (OR) which has its
recirculation ID set. This ID is also associated with the tunnel
context. The OR is supposed to set this ID in 8 high bits of
Rx mark in matching packets. It also counts the packets.
Packets that hit the OR but miss in the action rule (AR) table
should go to the MAE admin PF (that is, to DPDK) by default.
Support for the use of action COUNT in JUMP
flows will be introduced by later patches.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Provide a minimal implementation for port representors that can only be
configured and provide device information.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Add an argument that allows the user to choose either switchdev or legacy
mode. Legacy mode enables switching by using the Ethernet virtual bridging
(EVB) API. In switchdev mode, VF traffic goes via port representor
(if any) on PF, and software virtual switch (for example, Open vSwitch)
steers the traffic.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
rte_eth_rx_descriptor_status() should be used as a replacement.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.
The flag MEMPOOL_F_NON_IO, added in this release, is simply renamed to
have the RTE_ prefix.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a new devarg: mr_mempool_reg_en.
On the Tx slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Direct and indirect mbufs are
supported, as well as externally attached ones from the mlx5 MPRQ feature.
Lookup in the database of non-mempool memory is used as the last resort.
Rx mempools are registered regardless of the devarg value.
On the Rx data path, only the local cache and the mempool database are used.
If implicit mempool registration is disabled, these mempools
are unregistered at port stop, releasing the MRs.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Fix a spelling error which is causing reports of other patches failing.
Fixes: 69daa9e502 ("net/cnxk: support inline security setup for cn10k")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The guides should be referenced locally with RST syntax
:doc: (beginning of page) or :ref: (specific chapter).
The links to doc.dpdk.org/guides/ are removed.
The links to the doc.dpdk.org/api/ are acceptable,
but should not point to a specific version, so one is fixed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The more fine-grained flow API action RTE_FLOW_ACTION_TYPE_SAMPLE should
be used instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds support for configuring the channel mask which will
be used by rte_flow when adding flow rules with the inline IPsec
action.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for inline inbound and outbound IPsec for SA create,
destroy and other NIX / CPT LF configurations.
This patch also changes dpdk-devbind.py to list the new inline
device as a misc device.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The af_packet PMD binds to a raw socket and allows sending and
receiving packets through the kernel.
Since commit [1], the kernel strips the VLAN tags early in
__netif_receive_skb_core(), so we receive untagged packets while running
with the af_packet PMD.
Luckily for us, the skb VLAN-related fields are still populated from the
stripped VLAN tags, so we end up having all the information that we need
in the mbuf.
Having the PMD support DEV_RX_OFFLOAD_VLAN_STRIP allows the
application to control the desired VLAN stripping behavior, until we
have a way to describe offloads that can't be disabled by PMDs.
This patch causes a change in the default way that the af_packet PMD
treats received VLAN-tagged frames. While previously the application
was required to check the PKT_RX_VLAN_STRIPPED flag, after this patch
the PMD re-inserts the VLAN tag transparently to the user, unless
DEV_RX_OFFLOAD_VLAN_STRIP is enabled in rxmode.offloads.
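For example, an application that wants the stripped behavior would request
the offload at configure time (a sketch):

    struct rte_eth_dev_info info;
    rte_eth_dev_info_get(port_id, &info);

    struct rte_eth_conf conf = { 0 };
    if (info.rx_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
            conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
    rte_eth_dev_configure(port_id, 1, 1, &conf);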
I've attempted a preliminary benchmark to understand if the change could
cause a sizable performance hit.
Setup:
Two virtual machines running on top of an ESXi hypervisor
Tx: DPDK app (running on top of vmxnet3 PMD)
Rx: af_packet (running on top of a kernel vmxnet3 interface)
Packet size: 68 (packet contains a VLAN tag)
Rates:
Tx - 1.419 Mpps
Rx (without vlan insertion) - 1227636 pps
Rx (with vlan insertion) - 1220081 pps
At first glance, we don't seem to have a large degradation in terms of
packet rate.
[1]
https://github.com/torvalds/linux/commit/bcc6d47903612c3861201cc3a866fb60
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for fetching port and queue stats via the xstats API. Also
remove queue stats from basic stats because they're now available via the
xstats API for the VF.
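The new counters can be read through the generic xstats calls; a minimal
sketch (error handling omitted):

    int n = rte_eth_xstats_get(port_id, NULL, 0);
    struct rte_eth_xstat *xstats = malloc(n * sizeof(*xstats));
    struct rte_eth_xstat_name *names = malloc(n * sizeof(*names));

    rte_eth_xstats_get_names(port_id, names, n);
    rte_eth_xstats_get(port_id, xstats, n);
    for (int i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);

    free(names);
    free(xstats);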
Signed-off-by: Nikhil Vasoya <nikhil.vasoya@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>