4957 Commits

Miao Li
327fcd2d38 net/vhost: support power monitor
Following the current semantics of the power monitor, this commit adds a
callback function to decide whether to abort the sleep, by checking the
current value against the expected value, and vhost_get_monitor_addr
to provide the address to monitor. When no packets come in, the value at
the monitored address will not change and the running core will sleep.
Once packets arrive, the value at the address will change and the running
core will wake up.

Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
2021-10-29 12:32:29 +02:00
Miao Li
34fd4373ce vhost: add power monitor API
This commit defines rte_vhost_power_monitor_cond, which is used to pass
some information to the vhost driver. The information includes the
address to monitor, the expected value, the mask to extract the value
read from 'addr', the value size of the monitored address, and the match
flag indicating whether the value should match or not match the expected
value.

The vhost driver can use this information to fill rte_power_monitor_cond.
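
As an illustrative sketch only (field names mirror the description
above and may differ from the final API):

struct rte_vhost_power_monitor_cond {
	volatile void *addr; /* address to monitor */
	uint64_t val;        /* expected value */
	uint64_t mask;       /* mask to extract the value read from addr */
	uint8_t size;        /* size in bytes of the monitored value */
	uint8_t match;       /* abort sleep on match (1) or on mismatch (0) */
};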

Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
2021-10-29 12:32:29 +02:00
Miao Li
64ac7e08f6 net/virtio: support power monitor
Following the current semantics of the power monitor, this commit adds a
callback function to decide whether to abort the sleep, by checking the
current value against the expected value, and virtio_get_monitor_addr
to provide the address to monitor. When no packets come in, the value at
the monitored address will not change and the running core will sleep.
Once packets arrive, the value at the address will change and the running
core will wake up.

Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
2021-10-29 12:32:29 +02:00
Maxime Coquelin
0c9d662070 net/virtio: support RSS
Provide the capability to update the hash key, hash types
and RETA table on the fly (without needing to stop/start
the device). However, the key length and the number of RETA
entries are fixed to 40B and 128 entries respectively. This
is done in order to simplify the design, but may be
revisited later as the Virtio spec provides this
flexibility.

Note that only VIRTIO_NET_F_RSS support is implemented.
VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
packet RSS hash calculated by the device into mbuf.rss, is
not yet supported.

For the default RSS configuration, the default Intel ixgbe key
is used as the default key, and the default RETA is a simple
modulo between the hash and the number of Rx queues.
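
As a hedged illustration (not code from this patch; the helper name and
its parameters are assumptions), an application could program such a
modulo-based RETA through the generic ethdev API:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
set_modulo_reta(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_rss_reta_entry64 reta[128 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < 128; i++) {
		/* Default RETA entry: simple modulo on the Rx queue count. */
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_rx_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, 128);
}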

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-29 11:23:10 +02:00
Ajit Khaparde
9446d7fcd9 doc: remove obsolete option from bnxt guide
The host-based-truflow devarg is no longer used to enable the host-based
flow table management functionality TruFlow. Instead, this feature is
now driven by a capability indicated by the firmware.

TruFlow is no longer in tech preview. Update the doc accordingly.

Fixes: da3731e2ea00 ("net/bnxt: check FW capability to support TRUFLOW")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-28 19:58:54 +02:00
Ajit Khaparde
0e7bdac71d doc: update NIC feature matrix for bnxt
Support for runtime Rx/Tx queue setup and inner RSS was not reflected.
Update the feature matrix for the bnxt PMD.

Fixes: 7ed45b1a7c0f ("net/bnxt: support RSS hash selection")
Fixes: 0105ea1296c9 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-28 19:58:54 +02:00
Radu Nicolau
6bc987ecb8 net/iavf: support IPsec inline crypto
Add support for inline crypto for IPsec, for ESP transport and
tunnel over IPv4 and IPv6, as well as supporting the offload for
ESP over UDP, and in conjunction with TSO for UDP and TCP flows.
Implement support for rte_security packet metadata.

Add definitions for the IPsec descriptors, and extend offload support
in the data and context descriptors.

Add support to virtual channel mailbox for IPsec Crypto request
operations. IPsec Crypto requests receive an initial acknowledgment
from physical function driver of receipt of request and then an
asynchronous response with success/failure of request including any
response data.

Add enhanced descriptor debugging.

Refactor the scalar Tx burst function to support integration of the
offload.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:22:04 +02:00
Rongwei Liu
7299ab6822 net/mlx5: support socket direct mode bonding
In socket direct mode, it's possible to bind any two (maybe four
in the future) PCIe devices with IDs like xxxx:xx:xx.x and
yyyy:yy:yy.y. Bonding member interfaces no longer need to have
the same PCIe domain/bus/device ID.

The kernel driver uses "system_image_guid" to identify whether devices
can be bound together or not. Sysfs "phys_switch_id" is used to get the
"system_image_guid" of each network interface.

OFED 5.4+ is required to support "phys_switch_id".

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-26 13:24:20 +02:00
Sean Zhang
80a3233197 app/flow-perf: add destination ports parameter
Add optional destination ports parameter for port-id action.
The parameter is not must, and the value is 1 by default as before
if the parameter not provided.

For example:

$ dpdk-test-flow-perf -w 08:00.0,representor=[0,1] -- --transfer \
  --ingress --transfer --ether --portmask=0x2 --vxlan-encap \
  --port-id=0

This command creates the rule on representor 0 with port 0
as the destination, since the portmask is 0x2 and dst-ports is 0:

$ dpdk-test-flow-perf -w 08:00.0,representor=[0,1] \
  -w 08:00.1,representor=[0,1] -- --transfer --ingress --transfer \
  --ether --portmask=0x12 --vxlan-encap --port-id=0,3

This command creates the rules on representor 0 of both PF 0
and PF 1; the destination port for the first representor is PF 0,
and the destination port for the other one is PF 1.

Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Reviewed-by: Wisam Jaddo <wisamm@nvidia.com>
2021-11-05 10:58:25 +01:00
Wojciech Liguzinski
44c730b0e3 sched: add PIE based congestion management
Implement PIE-based congestion management based on RFC 8033.

The Proportional Integral Controller Enhanced (PIE) algorithm works
by proactively dropping packets randomly.
PIE is implemented because more advanced queue management is required
to address the bufferbloat problem and provide desirable quality of
service to users.

Tests for the PIE code are added to the test application, and
PIE-related information is added to the documentation.
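
As a rough sketch of the RFC 8033 control law only (not the code added
by this patch; all names below are assumptions), the drop probability is
periodically adjusted from the current and previous queue delay:

/* Illustrative PIE probability update, clamped to [0, 1]. */
static double
pie_update_prob(double drop_prob, double cur_qdelay, double old_qdelay,
		double target_qdelay, double alpha, double beta)
{
	drop_prob += alpha * (cur_qdelay - target_qdelay) +
		     beta * (cur_qdelay - old_qdelay);
	if (drop_prob < 0.0)
		drop_prob = 0.0;
	else if (drop_prob > 1.0)
		drop_prob = 1.0;
	return drop_prob;
}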

Signed-off-by: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
2021-11-04 15:41:49 +01:00
Vladimir Medvedkin
11c5b9b51a fib: add RIB extension size parameter
This patch adds a new parameter to the FIB configuration to specify
the size of the extension for the internal RIB structure.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Tested-by: Conor Walsh <conor.walsh@intel.com>
2021-11-04 12:38:03 +01:00
Vladimir Medvedkin
31d7c06947 hash: add bulk Toeplitz hash implementation
This patch adds a bulk version of the Toeplitz hash implemented
with Galois Field New Instructions (GFNI).

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-11-04 11:19:10 +01:00
Vladimir Medvedkin
4fd8c4cb0d hash: add new Toeplitz hash implementation
This patch adds a new Toeplitz hash implementation using
Galois Field New Instructions (GFNI).
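
For context, a plain (non-GFNI) reference of the Toeplitz hash is
sketched below; this only illustrates the algorithm that the GFNI code
accelerates and is not the code added by this patch:

#include <stddef.h>
#include <stdint.h>

/* Reference Toeplitz hash; 'key' must be at least len + 4 bytes long. */
static uint32_t
toeplitz_ref(const uint8_t *key, const uint8_t *data, size_t len)
{
	uint32_t hash = 0;
	uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
			  ((uint32_t)key[2] << 8) | key[3];
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		for (bit = 7; bit >= 0; bit--) {
			if (data[i] & (1u << bit))
				hash ^= window;
			/* Slide the 32-bit key window left by one bit. */
			window <<= 1;
			if (key[i + 4] & (1u << bit))
				window |= 1;
		}
	}
	return hash;
}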

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-11-04 11:19:10 +01:00
Ady Agbarih
9fa82d287f regex/mlx5: move RXP to CrSpace
This patch programs the regex database through the ROF file using the
firmware, instead of manually through the software.
There is no need to set up the DB anymore; the regex-daemon is always
responsible for that.
In the new flow the regex driver only has to program the ROF rules
using the set params DevX command, which requires ROF mkey creation.
The rules file has to be read into 4KB-aligned memory.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:14:48 +01:00
Zhihong Peng
6cc51b1293 mem: instrument allocator for ASan
This patch adds necessary hooks in the memory allocator for ASan.

This feature is currently available in DPDK only on Linux x86_64.
If other OS/architectures want to support it, ASAN_SHADOW_OFFSET must be
defined and RTE_MALLOC_ASAN must be set accordingly in meson.

Signed-off-by: Xueqin Lin <xueqin.lin@intel.com>
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2021-10-29 16:25:03 +02:00
Zhihong Peng
6e0290250d build: enable AddressSanitizer
AddressSanitizer [1] a.k.a. ASan is a widely-used debugging tool to
detect memory access errors.
It helps to detect issues like use-after-free, various kinds of buffer
overruns in C/C++ programs, and other similar errors, as well as
printing out detailed debug information whenever an error is detected.

ASan is integrated with gcc and clang and can be enabled via a meson
option: -Db_sanitize=address
See the documentation for details (especially regarding clang).

Enabling ASan has an impact on performance since additional checks are
added to generated binaries.

Enabling ASan with Windows is currently not supported in DPDK.

1: https://github.com/google/sanitizers/wiki/AddressSanitizer
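
For example, a typical build enabling ASan could look like (paths are
illustrative):

$ meson setup build -Db_sanitize=address
$ ninja -C build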

Signed-off-by: Xueqin Lin <xueqin.lin@intel.com>
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
2021-10-29 15:25:34 +02:00
Anatoly Burakov
efb57dd5bb doc: clarify SRIOV activation with built-in VFIO
Currently, the documentation only contains instructions for enabling
SRIOV support for VFIO compiled as a module, but doesn't have any
instructions on how to do the same for cases where VFIO is built-in.
Add these instructions.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
2021-10-28 10:14:58 +02:00
Kevin Laatz
bb4141dbe5 examples/dma: rename ioat application example
Since the APIs have been updated from rawdev to dmadev, the application
should also be renamed to match. This patch also includes the documentation
updates for the renaming.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-27 16:53:39 +02:00
Kevin Laatz
5a5f46be19 examples/ioat: add option to control stats print interval
Add a command line option to control the interval between stats prints.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-27 16:53:39 +02:00
Konstantin Ananyev
8a256e97fb examples/ioat: add option to control maximum frame size
Add command line option for setting the max frame size.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-27 16:53:39 +02:00
Konstantin Ananyev
198db0c9e2 examples/ioat: add option to control DMA batch size
Add a command line option to control the HW copy batch size in the
application.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-27 16:53:39 +02:00
Konstantin Ananyev
81ce0906ea examples/ioat: use always same lcore for enqueue/dequeue
A few changes in the ioat sample behaviour:
- Always do SW copy for packet metadata (mbuf fields)
- Always use same lcore for both DMA requests enqueue and dequeue

Main reasons for that:
a) it is safer, as idxd PMD doesn't support MT safe enqueue/dequeue (yet).
b) sort of more apples to apples comparison with sw copy.
c) from my testing things are faster that way.

Documentation updates to reflect these changes are also included.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-27 16:53:39 +02:00
Sarosh Arif
b54403fd08 examples/l2fwd: add promiscuous mode option
The default behaviour of l2fwd is to exit if we are unable to turn
promiscuous mode on. On some AWS instances, turning promiscuous mode
on is not permitted. In such cases there should be a way to run the
application without promiscuous mode.

This patch allows the user to turn promiscuous mode on via a command
line parameter. l3fwd has a similar option available.

Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2021-10-25 22:31:53 +02:00
Wisam Jaddo
741bb850ee app/flow-perf: export some config as runtime options
Some options are often needed at run time, so fixing them at
compile time is not appropriate. As a result, some options have been
exported as command line options to be set at run time; an example
invocation is shown after the list below.

The options exported are:
--txq=N
--rxq=N
--txd=N
--rxd=N
--mbuf-size=N
--mbuf-cache-size=N
--total-mbuf-count=N
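
An illustrative invocation combining these runtime options with the
existing flow options (values are examples only):

$ dpdk-test-flow-perf -w 08:00.0 -- --transfer --ingress --ether \
  --vxlan-encap --port-id=0 --rxq=4 --txq=4 --mbuf-size=2048 \
  --total-mbuf-count=65536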

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
2021-10-25 22:31:53 +02:00
Honnappa Nagarahalli
f6c6c686f1 eal: remove FINISHED lcore state
FINISHED state seems to be used to indicate that the worker's update
of the 'state' is not visible to other threads. There seems to be no
requirement to have such a state.

Since the FINISHED state is removed, the API rte_eal_wait_lcore
is updated to always return the status of the last function that
ran in the worker core.
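
A small usage sketch under these semantics (the helper and worker names
are assumptions, not code from this patch):

#include <rte_launch.h>

static int
worker_fn(void *arg)
{
	(void)arg;
	/* ... do work ... */
	return 42; /* status reported back to the launcher */
}

static int
launch_and_wait(unsigned int worker_id)
{
	rte_eal_remote_launch(worker_fn, NULL, worker_id);
	/* With FINISHED removed, this simply returns worker_fn's value. */
	return rte_eal_wait_lcore(worker_id);
}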

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-25 18:20:59 +02:00
David Marchand
65c2bbf41f doc: fix default mempool option in guides
This option should be prefixed with -- for consistency with others.

Fixes: a103a97e7191 ("eal: allow user to override default mempool driver")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
2021-10-25 17:12:52 +02:00
Olivier Matz
daa02b5cdd mbuf: add namespace to offload flags
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
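
A representative before/after for one flag (illustrative; the full
mapping is part of the patch):

#include <rte_mbuf.h>

static inline uint32_t
get_rss_hash(const struct rte_mbuf *m)
{
	/* Previously tested with PKT_RX_RSS_HASH, which now only triggers
	 * a deprecation warning at compilation. */
	if (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
		return m->hash.rss;
	return 0;
}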

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-24 13:37:43 +02:00
Anoob Joseph
fd390896f4 crypto/cnxk: allow different cores in pending queue
Rework pending queue to allow producer and consumer cores to be
different.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
2021-10-20 15:56:46 +02:00
Akhil Goyal
92cb130919 cryptodev: move device-specific structures
The device-specific structures - rte_cryptodev
and rte_cryptodev_data - are moved to cryptodev_pmd.h
to hide them from applications.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-10-20 15:33:16 +02:00
Kai Ji
418c1a5499 test/crypto: enable chacha_poly PMD
An autotest is added for the new chacha20_poly1305 PMD.
A new test case is also added for SGL test.

Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 15:33:16 +02:00
Kai Ji
f166628854 crypto/ipsec_mb: add chacha_poly PMD
Add in new chacha20_poly1305 PMD to the ipsec_mb framework.

Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 15:33:16 +02:00
Piotr Bronowski
cde8df1bda crypto/ipsec_mb: move zuc PMD
This patch removes the crypto/zuc folder and gathers all zuc PMD
implementation specific details into two files,
pmd_zuc.c and pmd_zuc_priv.h in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 15:32:36 +02:00
Piotr Bronowski
5208d68d30 crypto/ipsec_mb: support snow3g digest appended ops
This patch enables out-of-place auth-cipher operations where
digest should be encrypted along with the rest of raw data.
It also adds support for partially encrypted digest when using
auth-cipher operations.

Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
4f1cfda59a crypto/ipsec_mb: move snow3g PMD
This patch removes the crypto/snow3g folder and gathers all snow3g PMD
implementation specific details into a single file,
pmd_snow3g.c in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
bc9ef81c42 crypto/ipsec_mb: move kasumi PMD
This patch removes the crypto/kasumi folder and gathers all kasumi PMD
implementation specific details into a single file,
pmd_kasumi.c in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
746825e5c0 crypto/ipsec_mb: move aesni_gcm PMD
This patch removes the crypto/aesni_gcm folder and gathers all
aesni-gcm PMD implementation specific details into a single file,
pmd_aesni_gcm.c in the crypto/ipsec_mb folder.
A redundant check for iv length is removed.

GCM ops are stored in the queue pair for multi process support, they
are updated during queue pair setup for both primary and secondary
processes.

GCM ops are also set per lcore for the CPU crypto mode.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Pablo de Lara
8c835018de crypto/ipsec_mb: support ZUC-256 for aesni_mb
Add support for ZUC-EEA3-256 and ZUC-EIA3-256.
Only 4-byte tags supported for now.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
918fd2f146 crypto/ipsec_mb: move aesni_mb PMD
This patch removes the crypto/aesni_mb folder and gathers all
aesni-mb PMD implementation specific details into a single file,
pmd_aesni_mb.c in crypto/ipsec_mb.

Now that intel-ipsec-mb v1.0 is the minimum supported version, old
macros can be replaced with the newer macros supported by this version.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Ciara Power
72a169278a crypto/ipsec_mb: support multi-process
The ipsec_mb SW PMD now has multiprocess support.
The queue-pair IMB_MGR is stored in a memzone instead of being allocated
externally by the Intel IPSec MB library, when v1.1 is used.
If v1.0 is used, multi process is not supported, and allocation is
done as before.
The secondary process needs to reconfigure the queue-pair to allow the
IMB_MGR function pointers to be updated.

Intel IPsec MB library version 1.1 is required for this support.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Fan Zhang
c75542ae42 crypto/ipsec_mb: introduce IPsec_mb framework
This patch introduces the new framework to share common code between
the SW crypto PMDs that depend on the intel-ipsec-mb library.
This change helps to reduce future effort on the code maintenance and
feature updates.

The PMDs that will be added to this framework in subsequent patches are:
  - AESNI MB
  - AESNI GCM
  - CHACHA20_POLY1305
  - KASUMI
  - SNOW3G
  - ZUC

The use of these PMDs will not change, they will still be supported for
x86, and will use the same EAL args as before.

The minimum required version for the intel-ipsec-mb library is now v1.0.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Ferruh Yigit
295968d174 ethdev: add namespace
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
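
A representative before/after (illustrative; the complete mapping is
part of the patch):

#include <rte_ethdev.h>

static void
fill_conf(struct rte_eth_conf *conf)
{
	/* Previously DEV_RX_OFFLOAD_CHECKSUM and ETH_RSS_IP. */
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
	conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}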

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-22 18:15:38 +02:00
Ferruh Yigit
1aca4fdb00 doc: remove jumbo offload feature
Jumbo offload is no longer announced as a capability, and the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag is removed.

This patch also removes the 'Jumbo frame' feature from the documentation.

Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-22 17:26:07 +02:00
Chengchang Tang
2fc3e696a7 net/hns3: add runtime config for mailbox limit time
Currently, the max waiting time for an MBX response is 500ms, but in
some scenarios it is not enough, since it depends on the response
of the kernel mode driver, whose response time is related to the
scheduling of the system. In this special scenario, most of the
cores are isolated, and only a few cores are used for system
scheduling. When a large number of services are started, the
scheduling of the system becomes very busy, the reply to the
mbx message times out, and PMD initialization fails.

This patch adds a runtime config to set the max wait time. For the
above scenario, users can adjust the waiting time to a suitable value
by themselves.

Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-10-22 04:11:43 +02:00
Xueming Li
5984037501 app/testpmd: add forwarding engine for shared Rx queue
To support shared Rx queues, this patch introduces a dedicated forwarding
engine. The engine groups received packets by mbuf->port into sub-groups,
updates stream statistics and simply frees the packets.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
2021-10-22 00:09:19 +02:00
Xueming Li
f4d178c13b app/testpmd: add parameter for shared Rx queue
Adds "--rxq-share=X" parameter to enable shared RxQ.

Rx queue is shared if device supports, otherwise fallback to standard
RxQ.

Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
implies all ports join share group 1. Queue ID is mapped equally with
shared Rx queue ID.
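
An illustrative invocation (EAL arguments are examples only):

$ dpdk-testpmd -a 08:00.0,representor=[0,1] -- -i --rxq-share=2 \
  --rxq=4 --txq=4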

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-22 00:09:07 +02:00
Xueming Li
dd22740cc2 ethdev: introduce shared Rx queue
In the current DPDK framework, each Rx queue is pre-loaded with mbufs
to save incoming packets. For some PMDs, when the number of representors
scales out in a switch domain, the memory consumption becomes significant.
Polling all ports also leads to high cache miss, high latency and low
throughput.

This patch introduces the shared Rx queue. Ports in the same Rx domain
and switch domain can share an Rx queue set by specifying a non-zero
sharing group in the Rx queue configuration.

A shared Rx queue is identified by the share_rxq field of the Rx queue
configuration. Port A RxQ X can share an RxQ with Port B RxQ Y by using
the same shared Rx queue ID.

No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; port_id is identified by mbuf->port. The PMD
is responsible for resolving the shared Rx queue from device and queue
data.

A shared Rx queue must be polled in the same thread or core; polling a
queue ID of any member port is essentially the same.

Multiple share groups are supported. The PMD should support mixed
configuration by allowing multiple share groups and non-shared Rx queues
on one port.

Example grouping and polling model to reflect service priority:
 Group1, 2 shared Rx queues per port: PF, rep0, rep1
 Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
 Core0: poll PF queue0
 Core1: poll PF queue1
 Core2: poll rep2 queue0

The PMD advertises the shared Rx queue capability via
RTE_ETH_DEV_CAPA_RXQ_SHARE.

The PMD is responsible for shared Rx queue consistency checks to avoid
member ports' configurations contradicting each other.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-22 00:08:50 +02:00
Satheesh Paul
00ea15e7a3 net/cnxk: support port ID flow action
This patch adds support for rte flow action type port_id to
enable directing packets from an input port PF to an output
port which is a VF of the input port PF.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-21 18:59:40 +02:00
William Tu
d1c7029a52 net/e1000: build on Windows
This patch enables building the e1000 driver for Windows.
I tested using two Windows VMs on top of VMware Fusion,
creating two e1000 devices with device ID 0x10D3 (82574L),
and verifying that Rx/Tx works correctly using dpdk-testpmd.exe
in rxonly and txonly modes.

Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
Tested-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tested-by: Pallavi Kadam <pallavi.kadam@intel.com>
2021-10-21 04:58:40 +02:00
Rongwei Liu
a89f6433aa net/mlx5: set Tx queue affinity in round-robin
Previously, we set the txq affinity to 0 and let the firmware
perform round-robin when bonding. The firmware uses a
global counter to assign txq affinity to different
physical ports according to the remainder after division.

There are three disadvantages:
1. The global counter is shared between kernel and DPDK.
2. After restarting the PMD or port, the previous counter value
is reused, so the new affinity is unpredictable.
3. There is no way to get what affinity is set by the firmware.

In this update, we will create several TISs up to the
number of bonding ports and bind each TIS to one PF port.

For each port, it will start to pick up TIS using its port
index. Upper layer application can quickly calculate each txq's
affinity without querying.

At DPDK layer, when creating txq with 2 bonding ports, the
affinity is set like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2

Note: only applicable to the DevX API.
This affinity is subject to the HW hash.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-21 12:37:00 +02:00
Jie Wang
748530f035 app/testpmd: support L2TPv2 and PPP protocol pattern
Add support in testpmd for parsing the L2TPv2 and PPP protocol patterns.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-21 14:15:59 +02:00