Commit Graph

17064 Commits

Author SHA1 Message Date
Vijay Kumar Srivastava
b11961363b vdpa/sfc: support device configure and close
Implement vDPA ops dev_conf and dev_close for DMA mapping,
interrupt and virtqueue configurations.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-11-04 13:59:56 +01:00
Vijay Kumar Srivastava
340c4bd007 vdpa/sfc: get VFIO device file descriptor
Implement vDPA ops get_vfio_device_fd to get the VFIO device fd.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2021-11-04 13:59:56 +01:00
Vijay Kumar Srivastava
755e0fb08d vdpa/sfc: get max supported queue count
Implement vDPA ops get_queue_num to get the maximum number
of queues supported by the device.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2021-11-04 13:59:56 +01:00
Vijay Kumar Srivastava
f66a66e631 vdpa/sfc: support device and protocol features queries
Implement vDPA ops get_feature and get_protocol_features.
This patch retrieves device supported features and enables
protocol features.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2021-11-04 13:59:41 +01:00
Vijay Kumar Srivastava
6dad9a7353 vdpa/sfc: support device initialization
Add HW initialization and vDPA device registration support.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-11-04 13:45:37 +01:00
Vijay Kumar Srivastava
5e7596ba7c vdpa/sfc: introduce Xilinx vDPA driver
Add new vDPA PMD to support vDPA operations of Xilinx devices.
This patch implements probe and remove functions.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-11-04 13:43:23 +01:00
Mattias Rönnblom
aaf3b44c66 event/dsw: use maintenance facility
Set the RTE_EVENT_DEV_CAP_REQUIRES_MAINT flag, and perform DSW
background tasks on rte_event_maintain() calls.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Tested-by: Richard Eklycke <richard.eklycke@ericsson.com>
Tested-by: Liron Himi <lironh@marvell.com>
2021-11-04 13:28:07 +01:00
Pavan Nikhilesh
ea9ec3de0f event/cnxk: rework enqueue path
Rework the SSO enqueue path for CN9K to make it similar to the
CN10K enqueue interface.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-11-04 08:41:25 +01:00
Pavan Nikhilesh
25d703151d event/cnxk: reduce workslot memory consumption
SSO group base addresses are always contiguous, so we need not store
all the base addresses in workslot memory; instead, store only the
first base address and compute the group address offset when
required.
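
A minimal sketch of the idea, assuming a power-of-two stride between
group base addresses (the stride and names below are illustrative, not
the driver's):

#include <stdint.h>

/* Store only the first group base address; derive the others on demand. */
static inline uintptr_t
sso_grp_base(uintptr_t base0, unsigned int grp, unsigned int stride_log2)
{
	return base0 + ((uintptr_t)grp << stride_log2);
}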

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-11-04 08:41:25 +01:00
Pavan Nikhilesh
671971c917 event/cnxk: fix packet Tx overflow
The transmit loop incorrectly assumes that nb_mbufs is always
a multiple of 4 when transmitting an event vector. A vector may be
pushed out early due to a timeout before reaching its maximum size,
so nb_mbufs can take any value.
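
A generic sketch of the required tail handling; send4() and send1() are
hypothetical helpers standing in for the driver's 4-wide and scalar
transmit paths:

#include <stdint.h>

struct rte_mbuf;
void send4(struct rte_mbuf **m); /* hypothetical 4-wide fast path */
void send1(struct rte_mbuf *m);  /* hypothetical scalar path */

static void
tx_event_vector(struct rte_mbuf **mbufs, uint16_t nb_mbufs)
{
	uint16_t i = 0;

	/* unrolled fast path: full groups of 4 */
	for (; i + 4 <= nb_mbufs; i += 4)
		send4(&mbufs[i]);
	/* leftover 1-3 mbufs when nb_mbufs is not a multiple of 4 */
	for (; i < nb_mbufs; i++)
		send1(mbufs[i]);
}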

Fixes: 761a321acf ("event/cnxk: support vectorized Tx event fast path")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-11-04 08:41:25 +01:00
Pavan Nikhilesh
bd64a963d2 event/cnxk: use common XAQ pool functions
Use the common API to create and free XAQ pool.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-11-04 08:41:25 +01:00
Pavan Nikhilesh
49b0424ffb common/cnxk: add SSO XAQ pool create and free
Add common API to create and free SSO XAQ pool.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-11-04 08:41:25 +01:00
Przemyslaw Zegan
4badfb0205 common/qat: fix queue pairs number
This patch fixes an incorrect number of queue pairs.

Fixes: 4c0d2ee23c ("crypto/qat: remove incorrect usage of bundle number")
Cc: stable@dpdk.org

Signed-off-by: Przemyslaw Zegan <przemyslawx.zegan@intel.com>
Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
0c4546de45 crypto/qat: add gen-specific implementation
This patch replaces the mixed QAT symmetric and asymmetric
support implementation by separate files with shared or
individual implementation for specific QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
b6c82d2d0b crypto/qat: define gen-specific structs and functions
This patch adds the symmetric and asymmetric crypto data
structure and function prototypes for different QAT
generations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
f0f369a685 crypto/qat: unify device private data structure
This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
2d148597ce compress/qat: add gen-specific implementation
This patch replaces the mixed QAT compression support
implementation by separate files with shared or individual
implementation for specific QAT generation.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
4c6912d3ac compress/qat: define gen-specific structs and functions
This patch adds the compression data structure and function
prototypes for different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
4c778f1a02 common/qat: add gen-specific queue implementation
This patch replaces the mixed QAT queue pair configuration
implementation by separate files with shared or individual
implementation for specific QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Przemyslaw Zegan <przemyslawx.zegan@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
5dbc8beacf common/qat: add gen-specific queue pair function
This patch adds the queue pair data structure and function
prototypes for different QAT generations.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
5438e4ec8b common/qat: add gen-specific device implementation
This patch replaces the mixed QAT device configuration
implementation by separate files with shared or
individual implementation for specific QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Fan Zhang
04dd78d109 common/qat: define gen-specific structs and functions
This patch adds the data structure and function prototypes for
different QAT generations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2021-11-04 19:46:27 +01:00
Vidya Sagar Velumuri
89b78a2e3d crypto/cnxk: fix IV length for ZUC-256
Fix the supported IV length for ZUC-256.
Add capability support for a 4-byte MAC length for ZUC-256.
Pack the last 8 bytes of the IV into 6 bytes by ignoring the 2 MSBs
of each byte.
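
A minimal sketch of the packing described above, assuming each of the
last 8 IV bytes keeps its low 6 bits (48 bits total) written out
MSB-first; the exact bit order in the hardware descriptor is an
assumption here:

#include <stdint.h>

static void
pack_iv_6bit(const uint8_t in[8], uint8_t out[6])
{
	uint64_t bits = 0;
	int i;

	for (i = 0; i < 8; i++)
		bits = (bits << 6) | (in[i] & 0x3F); /* drop the 2 MSBs */
	for (i = 5; i >= 0; i--) {
		out[i] = (uint8_t)(bits & 0xFF);
		bits >>= 8;
	}
}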

Fixes: 29742632ac ("crypto/cnxk: support ZUC with 256-bit key")

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
2021-11-04 19:46:27 +01:00
Vidya Sagar Velumuri
66a8a26f31 common/cnxk: fix ZUC constants
Use the appropriate ZUC constants based on key length and MAC length.

Fixes: a90db80d7d ("common/cnxk: set key length for PDCP algos")

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
2021-11-04 19:46:27 +01:00
Raja Zidane
9ad776442d crypto/mlx5: support 1MB data-unit
Add 1MB data-unit length to the capability's bitmap.
Handle 1MB data-unit length in the mlx5 session create operation,
and expose its capability in the mlx5 capabilities.

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:46:27 +01:00
Archana Muniganti
3f956cea85 crypto/cnxk: support IPv6 mixed tunnel mode
Adds IPv6 mixed tunnel mode support for cn9k.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-11-04 19:46:27 +01:00
Archana Muniganti
30351b0f94 crypto/cnxk: update auth key size
Update the auth key size in capabilities to support SHA256_HMAC
for cn9k.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-11-04 19:46:27 +01:00
Archana Muniganti
b00ae6f8dd crypto/cnxk: support ESN and anti-replay on CN9K
Adds ESN and anti-replay support for lookaside IPsec
on CN9K platforms.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-11-04 19:46:27 +01:00
Anoob Joseph
fd1d6c95ec crypto/cnxk: support null authentication in IPsec
Add null auth support with lookaside IPsec on cn10k crypto PMDs.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-11-04 19:46:27 +01:00
Anoob Joseph
90a2ec4ae8 common/cnxk: add null authentication with IPsec
Add support for null auth with IPsec operations on cn10k.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-11-04 19:46:27 +01:00
Tejasree Kondoj
f063054f8a crypto/octeontx2: fix lookaside IPsec IPv6
Fixing IPv6 mixed tunnel mode support by updating
inputs to firmware.

Fixes: 4edede7bc6 ("crypto/octeontx2: support lookaside IPsec IPv6")
Cc: stable@dpdk.org

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-11-04 19:46:27 +01:00
Archana Muniganti
255204e653 crypto/octeontx2: fix ESN seqhi
For the current packet, the previous seqhi was used instead of its
guessed seqhi. Fix it.

Fixes: 5be562bc5b ("crypto/octeontx2: support IPsec ESN and anti-replay")
Cc: stable@dpdk.org

Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-11-04 19:46:27 +01:00
Raja Zidane
350e25fabd compress/mlx5: add block size option
Currently, the compression block size is 15 by default, which
is the maximum.

Add "log-block-size" devarg to select compression block size manually.
The value provided should be between 4 to 15.
Any out-of-range value will be defaulted to 15.
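
A sketch of the value handling described above; the helper and macro
names are illustrative, not the PMD's actual devargs code:

#include <stdint.h>
#include <stdlib.h>

#define LOG_BLOCK_SIZE_MAX 15 /* assumed maximum/default per the text above */

static uint32_t
parse_log_block_size(const char *val)
{
	unsigned long v = strtoul(val, NULL, 0);

	if (v < 4 || v > LOG_BLOCK_SIZE_MAX)
		return LOG_BLOCK_SIZE_MAX; /* out of range: default to 15 */
	return (uint32_t)v;
}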

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:46:27 +01:00
Raja Zidane
b8871a7ec5 compress/mlx5: fix compression level configuration
The mlx5 compress PMD uses HW acceleration for the compress operations.
The mlx5 HW device has no level-style mode that trades off throughput
against compression ratio, unlike SW drivers, where the CPU does the
compression and more CPU effort can yield a better compression ratio.
The mlx5 driver wrongly defined the Huffman block size configuration
according to the level, which does not fulfill the level API
requirement for this tradeoff.

Remove the effect of the level configuration in compress operation.

Fixes: 237aad8824 ("compress/mlx5: fix compression level translation")
Fixes: 39a2c8715f ("compress/mlx5: add transformation operations")
Cc: stable@dpdk.org

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:43:14 +01:00
Kiran Kumar K
b85b329bd3 crypto/cnxk: fix bus error on RSA verify
While creating an RSA session, the private key length was not being
calculated properly, causing a bus error on RSA verify.
This patch fixes the length calculation.

Fixes: 5a3513caeb ("crypto/cnxk: add asymmetric session")
Cc: stable@dpdk.org

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-11-04 19:43:14 +01:00
Arek Kusztal
867ba300f9 crypto/qat: fix uncleared cookies after operation
This commit fixes an issue where cookies were not cleared when using
the RSA algorithm.

Fixes: e2c5f4ea99 ("crypto/qat: support RSA in asym")
Cc: stable@dpdk.org

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2021-11-04 19:43:14 +01:00
Arek Kusztal
0a9e639403 crypto/qat: fix status in RSA decryption
This commit fixes the crypto op status not being set when decrypting
with the RSA algorithm.

Fixes: e2c5f4ea99 ("crypto/qat: support RSA in asym")
Cc: stable@dpdk.org

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2021-11-04 19:43:14 +01:00
Tal Shnaiderman
b4a4fb7e5d crypto/mlx5: support on Windows
Add support for the mlx5 crypto PMD on Windows OS.
Update the release notes and the PMD guide.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:43:14 +01:00
Tal Shnaiderman
5731efea6f drivers/crypto: move Windows build check
Remove the check and build failure from crypto/meson.build
when building for Windows OS.

Add this check/failure to the meson.build file of each crypto PMD
that is not already enforcing it, so that PMD support for Windows
can be enabled per driver when applicable.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-11-04 19:43:14 +01:00
Tal Shnaiderman
ddcc44b5d3 crypto/mlx5: fix size of UMR WQE
The size of the allocated UMR WQE object is decided by a sizeof
operation on the struct; however, since the struct contains
a union of flexible array members, the sizeof result can differ
between compilers.

GCC, for example, treats the union as zero-sized, while MSVC adds
16 bits of padding.

To resolve the ambiguity, the allocation size is calculated
from the sizes of the members, excluding the flexible union.
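
An illustrative sketch (not the mlx5 structures) of why sizeof is
ambiguous here and how building the size from the fixed members avoids
it; zero-length arrays stand in for the flexible members:

#include <stddef.h>
#include <stdint.h>

struct ctrl_seg { uint32_t w[4]; };
struct umr_seg  { uint32_t w[12]; };
struct klm_seg  { uint64_t addr; uint32_t len; uint32_t key; };

struct umr_wqe {
	struct ctrl_seg ctrl;
	struct umr_seg umr;
	union { /* flexible tail: compilers disagree on its contribution */
		struct klm_seg klm[0];
		uint32_t bsf[0];
	};
};

/* Allocation size built from the fixed members plus the entries needed,
 * instead of sizeof(struct umr_wqe). */
static size_t
umr_wqe_size(unsigned int max_segs)
{
	return sizeof(struct ctrl_seg) + sizeof(struct umr_seg) +
	       (size_t)max_segs * sizeof(struct klm_seg);
}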

Fixes: a1978aa23b ("crypto/mlx5: add maximum segments configuration")
Cc: stable@dpdk.org

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:43:14 +01:00
Tal Shnaiderman
2f5dceff71 crypto/mlx5: replace mutex initializer
Remove the usage of PTHREAD_MUTEX_INITIALIZER, which is not
supported on Windows, and initialize priv_list_lock in RTE_INIT.
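
A minimal sketch of that pattern; the constructor name is illustrative:

#include <pthread.h>
#include <rte_common.h>

static pthread_mutex_t priv_list_lock;

/* Initialize at constructor time instead of using a static initializer,
 * which is not available on Windows. */
RTE_INIT(mlx5_crypto_lock_init)
{
	pthread_mutex_init(&priv_list_lock, NULL);
}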

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:43:14 +01:00
Tal Shnaiderman
11f99cfc88 common/mlx5: add Direct Verbs constants for Windows
Add needed DV enums used by the crypto PMD and missing
for Windows OS.

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-04 19:43:14 +01:00
Viacheslav Galaktionov
ce1f72dc4f net/sfc: allow control threads for counter queue polling
MAE counters can be polled from a control thread if no service core is
allocated for this.

Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2021-11-04 17:57:00 +01:00
Andrew Rybchenko
50448dd3ab net/sfc: merge Rx and Tx doorbell counters into one
Datapath queue is either Rx or Tx, so just one counter is sufficient
for doorbells. It can count Tx doorbells in the case of Tx queue and
Rx doorbells in the case of Rx queue.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-04 17:42:05 +01:00
Jiawen Wu
3d4f43ea31 net/txgbe: fix packet statistics
Fix specific length packet statistics caused by wrong register
addresses.

Fixes: 24a4c76aff ("net/txgbe: add error types and registers")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-11-04 17:17:28 +01:00
Huisong Li
ff6dc76e40 net/hns3: refactor multi-process initialization
Currently, the PF and VF multi-process initialization logic is the
same. Extract a common function to initialize and unload
multi-process support.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-04 15:11:32 +01:00
Huisong Li
443242212b net/hns3: unregister MP action on close for secondary
This patch fixes the MP action not being unregistered for the
secondary process when the PMD is closed.

Fixes: 9570b1fdbd ("net/hns3: check multi-process action register result")
Fixes: 23d4b61fee ("net/hns3: support multiple process")

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-04 15:11:32 +01:00
Huisong Li
841f869353 net/hns3: fix multi-process action register and unregister
Multi-process support has the following problems:
1) After a port in the primary process is closed, the MP action of the
   process is unregistered, which prevents other devices in the
   primary process from responding to requests from secondary
   processes.
2) Because the variable "hns3_inited" is set to true and never restored
   to its initial value, the MP action cannot be registered again after
   it is unregistered.
3) The MP action of the primary and secondary processes needs to be
   registered only once, regardless of the number of ports in the
   process. That is what the variable "hns3_inited" does, but the
   variable is difficult to understand.

This patch adds a hns3_process_local_data structure to resolve the
above problems.

Fixes: 9570b1fdbd ("net/hns3: check multi-process action register result")
Fixes: 23d4b61fee ("net/hns3: support multiple process")

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-04 15:11:32 +01:00
Huisong Li
3232637177 net/hns3: fix secondary process reference count
The "secondary_cnt" will be increased when a secondary process
initialized. But the value of this variable is not decreased when the
secondary process exits, which causes the primary process senses that
the secondary process still exists. As a result, the primary process
fails to send messages to the secondary process after the secondary
process exits.

Fixes: 23d4b61fee ("net/hns3: support multiple process")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-04 15:11:32 +01:00
Dapeng Yu
3378e71244 net/ice: fix flow redirect
It's possible that a switch rule cannot be redirected successfully
because the kernel driver is busy handling an ongoing VF reset, so
the redirect action needs to be deferred to the next redirect request,
which the kernel driver promises to handle after the VF reset is done.

This patch uses the saved flow rule data to replay the switch rule
remove/add during the next flow redirect.

Fixes: 397b4b3c50 ("net/ice: enable flow redirect on switch")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-04 11:55:22 +01:00
Dapeng Yu
9fda31c322 net/ice: save rule on switch filter creation
In the original implementation, the VSI number, lookup elements, and
rule information used to create a switch filter are discarded once the
filter is created.

This patch saves that data in the RTE flow for future use when
replaying rules while handling exceptions during flow redirect.

Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-04 11:54:36 +01:00
Yuying Zhang
7f89f41860 net/ice: fix order of flow filter parser list
The order of the flow filter parser list was not definite and
depended on the registration order of the parsers. This caused the ACL
filter to be covered by the switch filter in some cases.

This patch fixes the order of the parser list to guarantee the usage
of each filter. The order is:
ACL filter > Switch filter > FDIR > Hash filter.

Fixes: e4a0a7599d ("net/ice: fix flow priority support in non-pipeline mode")
Cc: stable@dpdk.org

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-04 11:47:43 +01:00
Tudor Cornea
f86d553cc1 net/af_packet: fix ignoring full ring on Tx
The poll call can return POLLERR which is ignored, or it can return
POLLOUT, even if there are no free frames in the mmap-ed area.

We can account for both of these cases by re-checking if the next
frame is empty before writing into it.
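
A sketch of the re-check using the kernel's TPACKET v2 status bits (not
the PMD's exact code):

#include <linux/if_packet.h>

/* A TX_RING frame is writable only once the kernel has released it,
 * i.e. neither SEND_REQUEST nor SENDING is set in tp_status. */
static int
tx_frame_available(const volatile struct tpacket2_hdr *hdr)
{
	return (hdr->tp_status &
		(TP_STATUS_SEND_REQUEST | TP_STATUS_SENDING)) == 0;
}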

We have attempted to reproduce this issue with pktgen-dpdk, using the
following configuration.

pktgen -l 1-4 -n 4 --proc-type=primary --no-pci --no-telemetry \
    --no-huge -m 512 \
    --vdev=net_af_packet0,iface=eth1,blocksz=16384,framesz=8192, \
    framecnt=2048,qpairs=1,qdisc_bypass=0 \
    -- \
    -P \
    -T \
    -m "3.0" \
    -f themes/black-yellow.theme

We configure a low Tx rate (~335 packets/second) and a small
packet size of about 300 bytes from the pktgen CLI.

set 0 size 300
set 0 rate 0.008
set 0 burst 1
start 0

After bringing the interface down, and up again, we seem to arrive
in a state in which the tx rate is inconsistent, and does not recover.

ifconfig eth1 down; sleep 7; ifconfig eth1 up

[1] http://code.dpdk.org/pktgen-dpdk/pktgen-20.11.2/source/INSTALL.md

Fixes: 364e08f2bb ("af_packet: add PMD for AF_PACKET-based virtual devices")
Cc: stable@dpdk.org

Signed-off-by: Mihai Pogonaru <pogonarumihai@gmail.com>
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-11-04 12:56:32 +01:00
Igor Romanov
b75d85b766 net/sfc: support Xilinx Riverhead VF
Add the device and vendor numbers to the PCI ID map so
that a VF can be probed.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-04 12:41:34 +01:00
John Daley
db79f2d5c9 net/enic: support GTP header flow matching
The GTP, GTP-U, and GTP-C header fields can be matched; however, the
NIC does not support GTP tunneling, so no items after the GTP header
can be specified. If a GTP-U or GTP-C item is specified without a
preceding UDP item, the UDP destination port is implicitly matched.
For GTP, the destination UDP port must be specified, but its value is
not enforced.

Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-11-04 12:34:46 +01:00
Ting Xu
1b9c68120a net/ice: enable protocol agnostic flow offloading in RSS
Enable protocol-agnostic flow offloading to support raw pattern input
for RSS hash flow rule creation. It is based on the Parser Library
feature and uses the existing rte_flow raw API.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-04 04:15:29 +01:00
Ting Xu
0837da2e27 net/ice/base: support add HW profile for RSS raw flow
Based on the parser library, we can directly set HW profile and
associate VSI for RSS raw flows. Add symmetric hash configuration
for raw flow.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-04 04:10:43 +01:00
Hyong Youb Kim
af397b3c93 net/enic: avoid error message when no advanced filtering
Probing the availability of Flow Manager API may print the following
error log.

PMD: rte_enic_pmd: Devcmd 88 failed with error code -1

The error indicates a flow manager operation failed and happens when
advanced filtering is disabled on vNIC. It is harmless but confusing
to the user. Since advanced filtering is a prerequisite, check first
if it is available and avoid the error message altogether.

Fixes: ea7768b5bb ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2021-11-03 19:56:55 +01:00
Hyong Youb Kim
bcd68b6841 net/enic: fix crash caused by changing MTU
Changing MTU after the device start causes a segfault in the Rx
handler. The MTU handler (enic_set_mtu) performs the following steps.
1. Stop NIC Rx
2. Change Rx handler '(struct rte_eth_dev)->rx_pkt_burst' to
   the dummy handler and sleep a while to quiesce
3. Re-allocate/initialize Rx structures
4. Change Rx handler back to the real handler
   (e.g. enic_noscatter_recv_pkts)

enic_set_mtu does not update the recently introduced fast-path pointer
'(struct rte_eth_fp_ops)->rx_pkt_burst'. Since rte_eth_rx_burst only
uses the fast-path pointer, it keeps invoking the real Rx handler, not
the dummy one set by (2). And, (3) causes a segfault in the real Rx
handler (e.g. dereferencing freed structures).

Fix the segfault by updating the fast-path pointer as well.
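
A sketch of the direction of the fix, assuming the DPDK 21.11 internal
fast-path table introduced by the commit in the Fixes tag; the helper
name is illustrative:

#include <ethdev_driver.h>

/* Keep the fast-path table in sync whenever the burst handler is swapped. */
static void
update_rx_burst(struct rte_eth_dev *eth_dev, eth_rx_burst_t handler)
{
	eth_dev->rx_pkt_burst = handler;
	rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = handler;
}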

Fixes: c87d435a4d ("ethdev: copy fast-path API into separate structure")

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2021-11-03 19:41:15 +01:00
Tomasz Duszynski
a90735a7a4 raw/cnxk_bphy: add header includes
Generally it is good practice to include all headers that provide APIs
which are being used. This is especially true in situations where 3rd
party apps include our public headers and assume that all should work
out of the box.

Including all headers explicitly helps to achieve that.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:15:13 +01:00
Tomasz Duszynski
6d72dce7ed raw/cnxk_bphy: keep leading zero in device name
Device naming might be misleading, especially if one takes it from
lspci output. In order to keep naming consistent, keep the leading
zero in front of the PCI bus number.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jakub Palider <jpalider@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:15:13 +01:00
Rakesh Kudurumalla
5ee3457b08 net/cnxk: integrate BPF count get mailbox
Bandwidth profile count is updated in meter capabilities during device
initialization using mbox interface.

Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:15:08 +01:00
Jakub Palider
2f6ac042ba raw/cnxk_bphy: remove dependencies from internal headers
This patch resolves a problem with internal header
inclusion. In addition, it prevents C++ name mangling.

Signed-off-by: Jakub Palider <jpalider@marvell.com>
Reviewed-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:12:29 +01:00
Harman Kalra
39ac394aa7 common/cnxk: fix device MSI-X greater than default value
Handle the case where the number of MSI-X interrupts is greater
than the default value, i.e. PLT_MAX_RXTX_INTR_VEC_ID. On PCI probe,
the device is queried for the supported MSI-X interrupt count, and the
respective interrupt resources are reallocated with this value. The
same MSI-X count should be used while registering new interrupt
vectors.

Fixes: 8cb5d08db9 ("interrupts: extend event list")

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:12:29 +01:00
Tomasz Duszynski
8dbdbee2f2 common/cnxk: fix typos
Fix a few typos.

Fixes: fa8f86a14e ("common/cnxk: add build infrastructre and HW definition")
Fixes: f6d567b03d ("common/cnxk: support NIX IRQ")
Fixes: 5e076b609f ("common/cnxk: add SE set key for crypto")
Cc: stable@dpdk.org

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:12:29 +01:00
Satha Rao
95ac15788b common/cnxk: consider adjust value for TM burst calculation
To support lower pps in packet mode, we are changing the adjust
value; the same needs to be considered for burst size calculations.

When both peak and committed rates are requested, the peak rate should
be larger than the committed rate.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:12:29 +01:00
Rakesh Kudurumalla
67e1cbf3cf common/cnxk: change policer time unit to configured value
The ingress meter rate was calculated based on a hardcoded
policer time unit. This patch adds a mbox interface to
retrieve the configured policer time unit.

Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:12:09 +01:00
Nithin Dabilpuram
a9729f7e14 event/cnxk: disable drop Rx error on vector enable
Disable drop_re (i.e. dropping packets with receive errors) when
vectorization is enabled on a few cn10k revisions due to HW errata.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Nithin Dabilpuram
dfe5f0a1f5 net/cnxk: allow FC on LBK and enable TM BP on Rx pause
Allow flow control on LBK VFs and enable TM to listen for
backpressure when Rx pause is enabled.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Srujana Challa
3c3ea76cff net/cnxk: support CPT CTX write through microcode op
Add support to write the CPT CTX through a microcode op
(SET_CTX/WRITE_SA) for cn10k inline mode.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Nithin Dabilpuram
c89e976c5f common/cnxk: support changing drop Rx error flag
Add an API to toggle the drop_re flag after nix_lf_alloc() so that it
can be toggled at runtime.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Nithin Dabilpuram
0663a84524 common/cnxk: enable backpressure on CPT with inline inbound
Enable backpressure on CPT with inline inbound enabled.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Nithin Dabilpuram
58debb813a common/cnxk: enable TM to listen on Rx pause frames
Enable the TM topology to listen for backpressure received when
Rx pause frames are enabled. Only one TM node in TL3/TL2 per
channel can listen for backpressure on that channel.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Nithin Dabilpuram
31153442e1 common/cnxk: support flow control on loopback interface
Support enabling/disabling flow control on LBK VFs, as the HW
supports backpressure on LBK links.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Srujana Challa
2635c25d93 common/cnxk: support CPT CTX sync mailbox
Add CPT CTX sync mailbox API and flush IPsec inbound entries
at application exit.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Srujana Challa
71213a8b77 common/cnxk: support CPT CTX write through microcode op
Add APIs to write the CPT CTX through a microcode op (SET_CTX/WRITE_SA).

Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-11-03 16:05:47 +01:00
Maxime Coquelin
ab4bb42406 vhost: rename driver callbacks struct
As previously announced, this patch renames struct
vhost_device_ops to struct rte_vhost_device_ops.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2021-11-03 11:59:27 +01:00
Maxime Coquelin
94c16e89d7 vhost: mark vDPA driver API as internal
This patch marks the vDPA driver APIs as internal and
renames the corresponding header file to vdpa_driver.h.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2021-11-03 09:11:34 +01:00
Junfeng Guo
25be39cc17 net/ice: enable protocol agnostic flow offloading in FDIR
Protocol agnostic flow offloading in Flow Director is enabled by this
patch based on the Parser Library, using existing rte_flow raw API.

Note that the raw flow requires:
1. byte string of raw target packet bits.
2. byte string of mask of target packet.

Here is an example:
FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the ipv4
proto) is optional in our case. To avoid redundancy, we just omit the
mask of 0x0800 (with 0xFFFF) in the mask byte string example. The '0x'
prefix for the spec and mask byte (hex) strings is also omitted here.

Also update the ice feature list with rte_flow item raw.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-03 13:01:31 +01:00
Junfeng Guo
8ebb93942b net/ice/base: add function to set HW profile for raw flow
Based on the parser library, we can directly set HW profile and
associate the main/ctrl vsi.

This patch set also updated the base code BSD release version.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-03 13:00:27 +01:00
Junfeng Guo
dea1ebd374 net/ice/base: add method to disable FDIR swap option
In this patch, we introduced a new parameter to enable/disable the
FDIR SWAP option by setting the swap and inset register set with
certain values.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-03 13:00:06 +01:00
Stephen Hemminger
211262d925 net/bnxt: fix firmware version query
UBSan testing revealed undefined shift here.

The firmware returns the version in bytes; and shifting a 8 bit
quantity here can lead to undefined behaviour or truncation.
The fix is to promote the bytes to 32 bit before shifting.
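
A minimal illustration of the promotion; the parameter names are
hypothetical:

#include <stdint.h>

static uint32_t
fw_version_word(uint8_t maj, uint8_t min, uint8_t upd, uint8_t patch)
{
	/* Widen to 32 bits before shifting: shifting the int-promoted
	 * 8-bit value left by 24 can overflow a signed int. */
	return ((uint32_t)maj << 24) | ((uint32_t)min << 16) |
	       ((uint32_t)upd << 8) | (uint32_t)patch;
}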

Bugzilla ID: 838
Fixes: 9a891c1764 ("net/bnxt: update HWRM to version 1.9.2")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-03 05:12:29 +01:00
Ivan Malov
69fbb4e9b5 net/sfc: ignore direction attributes in transfer flows
[1] has deprecated the use of direction attributes in "transfer"
flows. Ignore them during the transition period.

[1]
commit 9d2a349b38 ("ethdev: deprecate direction attributes in transfer flows")

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
46c6714ffd net/sfc: support port representor related flow actions
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT.

The former should be used instead of ambiguous PORT_ID.

The latter sends traffic to the entity represented by
the given ethdev (network port or VF).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
0fb3e8a910 net/sfc: support represented port flow item
Add support for item REPRESENTED_PORT to match on traffic entering
the embedded switch from the entity represented by the given
ethdev (network port or VF).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
79b28b4920 net/sfc: assign correct m-ports to independent switch ports
In accordance with patches [1-4], the MAE admin ethdev represents a
network port and not the PF it sits on. Rework how the
"ethdev" and "entity" m-ports are assigned in SW switch
port entries of independent ethdevs. Explain this in comments.

[1] commit 081e42dab1 ("ethdev: add port representor item to flow API")
[2] commit 49863ae2bf ("ethdev: add represented port item to flow API")
[3] commit 8edb6bc026 ("ethdev: add port representor action to flow API")
[4] commit 88caad251c ("ethdev: add represented port action to flow API")

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
b9b48ac751 net/sfc: improve m-port related log messages
Make these messages more specific.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
3419c9a7e5 net/sfc: rename ethdev m-port retrieval helper
The function in question has an unfortunate name that reads
like finding a SW switch port entry. In fact just one of
the two m-ports is retrieved from that entry.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
b7b7b9f800 net/sfc: do not allow flow rules to refer to VF representors
VF representors do not own dedicated m-ports and thus cannot
be referred to as traffic endpoints in flow items or actions.

Fixes: a62ec90522 ("net/sfc: add port representors infrastructure")
Fixes: f55b61cec9 ("net/sfc: support port representor flow item")

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Dmitry Kozlyuk
ec4e11d41d net/mlx5: preserve indirect actions on restart
The MLX5 PMD uses reference counting to manage RX queue resources.
After port stop, shared RSS actions kept references to RX queues,
preventing resource release. As a result, the internal PMD mempool
for such queues was exhausted after a number of port restarts.
Diagnostic message from rte_eth_dev_start():

    Rx queue allocation failed: Cannot allocate memory

Dereference RX queues used by indirect actions on port stop (detach)
and restore references on port start (attach) in order to allow RX queue
resource release, but keep indirect RSS across the port restart.
Replace queue IDs in HW by drop queue ID on detach and restore actual
queue IDs on attach.

When the port is stopped, create indirect RSS in the detached state.
As a result, MLX5 PMD is able to keep all its indirect actions
across port restart. Advertise this capability.

Fixes: 4b61b8774b ("ethdev: introduce indirect flow action")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-02 18:59:17 +01:00
Dmitry Kozlyuk
bc5bee028e net/mlx5: create drop queue using DevX
Drop queue creation and destruction were not implemented for DevX
flow engine and Verbs engine methods were used as a workaround.
Implement these methods for DevX so that there is a valid queue ID
that can be used regardless of queue configuration via API.

Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-02 18:59:17 +01:00
Dmitry Kozlyuk
c5042f93a4 net/mlx5: discover max flow priority using DevX
Maximum available flow priority was discovered using Verbs API
regardless of the selected flow engine. This required some Verbs
objects to be initialized in order to use DevX engine. Make priority
discovery an engine method and implement it for DevX using its API.

Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-02 18:59:17 +01:00
Dmitry Kozlyuk
2fe6f1b762 drivers/net: advertise no support for keeping flow rules
When RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero,
the specified behavior is the same as it had been before
this bit was introduced. Explicitly reset it in all PMDs
supporting rte_flow API in order to attract the attention
of maintainers, who should eventually choose to advertise
the new capability or not. It is already known that
mlx4 and mlx5 will not support this capability.

For RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP
similar action is not performed,
because no PMD except mlx5 supports indirect actions.
Any PMD that starts doing so will anyway have to consider
all relevant API, including this capability.
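
The per-PMD change for the flow-rule capability amounts to one line in
the device info callback, roughly as sketched below; the callback name
is illustrative:

#include <ethdev_driver.h>

static int
xyz_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
		  struct rte_eth_dev_info *dev_info)
{
	/* ... fill the other fields ... */
	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
	return 0;
}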

Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-11-02 18:59:17 +01:00
Ciara Loftus
ae70cc6e89 net/af_xdp: use BPF link for XDP programs
Since v0.4.0, if the underlying kernel supports it, libbpf uses 'bpf
link' to manage the programs on the interfaces of the xsks. This has two
repercussions for the PMD.

1. In the case where the PMD asks libbpf to load the default XDP
   program, the PMD no longer needs to remove it on teardown. This is
   because bpf link handles the unloading under the hood.
2. In the case where the PMD loads a custom program, libbpf expects this
   program to be linked via bpf link prior to creating the socket.

This patch introduces probes for the libbpf version and kernel support
for bpf link and orchestrates the loading and unloading of
programs according to the capabilities of the kernel and libbpf. The
libbpf version is checked with meson and pkg-config. The probe for
kernel support mirrors how it is implemented in libbpf. A bpf_link is
created and looked up on loopback device. If successful, bpf_link will
be used for the AF_XDP netdev.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
2021-11-02 17:36:46 +01:00
Lior Margalit
a451287102 net/mlx5: fix RSS expansion with EtherType
The RSS expansion algorithm is using a graph to find the possible
expansion paths. A graph node with the 'explicit' flag will be skipped,
if it is not found in the flow pattern.
The current implementation misses a check for the explicit flag when
expanding the pattern according to ETH item with EtherType.
For example:
testpmd> flow create 0 ingress pattern eth / ipv6 / udp / vxlan / eth
type is 2048 / end actions rss level 2 types udp end / end
The "eth type is 2048" item in the pattern may be expanded to "ETH IPv4".
The ETH node in the expansion graph is followed by VLAN node marked as
explicit. The fix is to skip the VLAN node and continue the expansion
with its next nodes, IPv4 and IPv6.
The expansion paths for the above example will be:
ETH IPV6 UDP VXLAN ETH END
ETH IPV6 UDP VXLAN ETH IPV4 UDP END

Fixes: 69d268b4ff ("net/mlx5: fix RSS expansion for explicit graph node")
Cc: stable@dpdk.org

Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-01 14:53:37 +01:00
Jiawei Wang
7797b0fe64 net/mlx5: fix meter action pool protection
Flows with the ASO meter action can be created from multiple
threads. Meter pools are created to manage the meter object
resources; if there is no room in the current meter pool, the pool is
resized to the new pool size and the old one is freed.

There is a race condition when one thread resizes the meter pool and
frees the old pool resource while another thread queries a meter
object by index on the old pool; the returned value is invalid.

This patch adds a read-write lock to protect the pool resource while
resizing and querying.
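
A sketch of the locking scheme with DPDK's rte_rwlock; the pool layout
and names are illustrative, not the driver's:

#include <stdint.h>
#include <rte_rwlock.h>

static rte_rwlock_t mtr_pool_lock = RTE_RWLOCK_INITIALIZER;

static void
mtr_pool_resize(void)
{
	rte_rwlock_write_lock(&mtr_pool_lock);
	/* allocate the bigger pool, copy the entries, free the old pool */
	rte_rwlock_write_unlock(&mtr_pool_lock);
}

static void
mtr_query(uint32_t idx)
{
	rte_rwlock_read_lock(&mtr_pool_lock);
	/* look up the meter object by idx in the current pool */
	(void)idx;
	rte_rwlock_read_unlock(&mtr_pool_lock);
}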

Fixes: a5835d530f ("net/mlx5: optimize Rx queue match")
Cc: stable@dpdk.org

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-01 14:53:36 +01:00
Jiawei Wang
7cf2d15a39 net/mlx5: fix age action pool protection
Flows with the age action can be created from multiple
threads. Age pools are created to manage the age resources; if
there is no room in the current pool, the pool is resized to the new
pool size and the old one is freed.

There is a race condition when one thread resizes the age pool and
frees the old pool resource while another thread queries the age
action value on the old pool, so the queried value is invalid.

This patch uses a read-write lock to protect the pool resource while
resizing and querying.

Fixes: a5835d530f ("net/mlx5: optimize Rx queue match")
Cc: stable@dpdk.org

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-01 14:53:35 +01:00
Tomasz Duszynski
7ce1032edb raw/cnxk_bphy: support telemetry
Added /cnxk/bphy/info telemetry endpoint.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 17:59:35 +02:00
Ferruh Yigit
94f746aa66 net/txgbe: fix link negotiation value
The macro was changed unintentionally while adding the RTE_ prefix;
restore the original value.

Fixes: 295968d174 ("ethdev: add namespace")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-11-02 11:35:53 +01:00
Huisong Li
d6e5056ab3 net/hns3: unify multicast MAC address set list
This patch removes hns3vf_set_mc_mac_addr_list() and uses
hns3_set_mc_mac_addr_list() to do this.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
af16c53253 net/hns3: refactor multicast MAC address set for PF
Currently, when configuring a group of multicast MAC addresses, the PF
driver reorders the mc_addr array in the hw struct to remove multicast
MAC addresses that are not in the user-supplied mc_addr_set array, and
then adds the new multicast MAC addresses. This can be simplified by
removing all previous MAC addresses and then adding the new ones.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
5022ab5cab net/hns3: unify multicast address check
This patch unifies a common function to check multicast address
validity for PF and VF.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
f634872542 net/hns3: unify MAC address add and remove
The code logic of adding and removing MAC address in PF and VF is the
same.
This patch extracts two common interfaces to add and remove them
separately.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
0a856ba4ff net/hns3: unify MAC and multicast address configuration
Currently, the interface logic for adding and deleting all MAC address
and multicast address in PF and VF driver is the same. This patch
extracts two common interfaces to configure them separately.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
cc91ec13eb net/hns3: use HW ops to config MAC features
This patch uses APIs in hns3_hw_ops to configure MAC related features.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
b439aaa0a3 net/hns3: add HW ops structure to operate hardware
This patch adds hns3_hw_ops structure to operate hardware in PF and VF
driver.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
45a6c15dd5 net/hns3: remove redundant multicast removal interface
This patch removes redundant hns3_remove_mc_addr_common(), which can be
replaced by hns3_remove_mc_mac_addr().

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
43b86af27d net/hns3: rename unicast address removal function
This patch renames hns3_remove_uc_addr_common() to
hns3_remove_uc_mac_addr() in PF.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
05812092a5 net/hns3: remove redundant multicast MAC interface
This patch removes hns3_add_mc_addr_common() in PF and
hns3vf_add_mc_addr_common() in VF.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
3d491e5531 net/hns3: extract common interface to check duplicates
Extract a common interface for PF and VF to check whether the configured
multicast MAC address from rte_eth_dev_mac_addr_add() is the same as the
multicast MAC address from rte_eth_dev_set_mc_addr_list().

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
3e7984b5d8 net/hns3: rename multicast address removal function
This patch renames hns3_remove_mc_addr() to hns3_remove_mc_mac_addr().

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
d4f2503106 net/hns3: rename unicast address function
This patch renames hns3_add_uc_addr() to hns3_add_uc_mac_addr().

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-01 18:31:07 +01:00
Huisong Li
3f83246bda net/hns3: rename multicast address function
This patch renames hns3_add_mc_addr() to hns3_add_mc_mac_addr().

Signed-off-by: Huisong Li <lihuisong@huawei.com>
2021-11-01 18:31:07 +01:00
Kalesh AP
7bea87cd24 net/bnxt: fix stat context allocation
stat_ctx_alloc is called within the context of each Rx/Tx ring,
i.e. from bnxt_alloc_hwrm_rx_ring() and bnxt_alloc_hwrm_tx_ring().
So there is no need to invoke bnxt_alloc_all_hwrm_stat_ctxs()
from bnxt_start_nic().

Fixes: 657c2a7f1d ("net/bnxt: create aggregation rings when needed")

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-31 17:02:36 +01:00
Kalesh AP
400405873b net/bnxt: fix freeing aggregation rings
During port stop, we clear "eth_dev->data->scattered_rx" at the
beginning. As a result, in bnxt_free_hwrm_rx_ring() the check
bnxt_need_agg_ring() returns false and we end up not freeing
the Rx aggregation rings which results in resource leak in the FW.

Fixes: 657c2a7f1d ("net/bnxt: create aggregation rings when needed")

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-31 17:02:36 +01:00
Simei Su
f9c561ffbc net/ice: fix performance for Rx timestamp
In the Rx data path, hardware registers were read per packet,
resulting in a big performance drop. This patch improves performance
in two ways:
(1) replace per-packet hardware register reads with per-burst reads.
(2) reduce the number of hardware register reads from 3 to 2 when the
    low value of the time is not close to overflow.

Meanwhile, this patch refines "ice_timesync_read_rx_timestamp" and
"ice_timesync_read_tx_timestamp" API in which
"ice_tstamp_convert_32b_64b" is also used.

Fixes: 953e74e6b7 ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c7 ("net/ice: support IEEE 1588 PTP")

Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-01 02:21:10 +01:00
Ferruh Yigit
d01201829b net/i40e: fix 32-bit build
Got error with: gcc 11.2.1 "cc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)"

Build error:
In function ‘i40e_flow_parse_fdir_pattern’,
    inlined from ‘i40e_flow_parse_fdir_filter’
    at ../drivers/net/i40e/i40e_flow.c:3274:8:
../drivers/net/i40e/i40e_flow.c:3052:69:
    error: writing 1 byte into a region of size 0
    [-Werror=stringop-overflow=]
 3052 |                         filter->input.flow_ext.flexbytes[j] =
      |                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
 3053 |                                 raw_spec->pattern[i];
      |                                 ~~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/net/i40e/i40e_flow.c:25:
  ../drivers/net/i40e/i40e_flow.c:
  In function ‘i40e_flow_parse_fdir_filter’:
  ../drivers/net/i40e/i40e_ethdev.h:638:17:
  note: at offset 16 into destination object ‘flexbytes’ of size 16
  638 |         uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
      |                 ^~~~~~~~~

Fix by adding range checks.
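
A generic sketch of the kind of guard added; the destination capacity
corresponds to RTE_ETH_FDIR_MAX_FLEXLEN in the driver, and the helper
name is illustrative:

#include <stddef.h>
#include <stdint.h>

static void
copy_flexbytes(uint8_t *dst, size_t dst_len,
	       const uint8_t *src, size_t src_len)
{
	size_t i, j;

	/* stop once the destination capacity is reached, so no store
	 * past the end of the fixed-size array is possible */
	for (i = 0, j = 0; i < src_len && j < dst_len; i++, j++)
		dst[j] = src[i];
}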

Fixes: 6ced3dd72f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-01 02:15:25 +01:00
David Marchand
7f49dafe05 net/mlx5: do not close stdin on error
If, for any reason, a socket could not be opened, mlx5_pmd_socket_init()
could close fd 0 (which is valid, and has a fair chance of being stdin),
since server_socket == 0 because the variable resides in .bss.
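
A sketch of the usual guard for this class of bug (not necessarily the
exact fix applied here): initialize the descriptor to -1 and check it
before close().

#include <unistd.h>

static int server_socket = -1; /* never 0/stdin by default */

static void
server_socket_fini(void)
{
	if (server_socket >= 0) {
		close(server_socket);
		server_socket = -1;
	}
}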

Fixes: e6cdc54cc0 ("net/mlx5: add socket server for external tools")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
2021-11-01 08:51:48 +01:00
Alexander Kozyrev
a5a0a43bc6 net/mlx5: allow meta modifications in legacy mode
The MODIFY_FIELD RTE action rejects copy to/from metadata
in case of the legacy mode extensive flow metadata support.
It is not consistent with SET_META action that has no such
restriction imposed. Registers A or B are used for META in
legacy mode. Allow meta modifications in legacy mode as well.

On the other hand, SET_META rejects actions in case register C
is not available, even though it is not needed in legacy mode.
Skip this check for legacy mode and allow setting META.

Fixes: edf325d421 ("net/mlx5: check extended metadata for meta modification")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-31 13:31:13 +01:00
Alexander Kozyrev
f4f8f5aee3 net/mlx5: fix Tx meta width for modify field flow rule
Register C is used for the metadata within the NIC Rx domain,
and its width can vary from 0 to 32 bits depending on
its kernel usage. This is not the case within the NIC Tx domain,
where register A is always 32 bits. Fix metadata width detection
for the modify_field flow API within the NIC Tx domain.

Fixes: 6d5735c1cb ("net/mlx5: fix meta register conversion for extensive mode")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-31 13:31:12 +01:00
Jiawen Wu
d0759b5098 net/ngbe: support Tx done cleanup
Add support for API rte_eth_tx_done_cleanup().
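
For reference, a minimal application-side usage sketch of this API; the
port/queue values and limit are illustrative:

#include <rte_ethdev.h>

static int
reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
{
	/* Free up to 32 already-transmitted mbufs on the given Tx queue;
	 * returns the number freed or a negative error code. */
	return rte_eth_tx_done_cleanup(port_id, queue_id, 32);
}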

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
b7aad633b3 net/ngbe: support Rx and Tx descriptor status
Support getting the number of used Rx descriptors,
and checking the status of Rx and Tx descriptors.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
eec3e73693 net/ngbe: support Rx and Tx queue info
Add Rx and Tx queue information get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
24cd85f7e5 net/ngbe: support timesync
Add support for IEEE 1588/802.1AS timestamping, and IEEE 1588
timestamp offload on Tx.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
71aec12796 net/ngbe: support register dump
Support to dump registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
9459ea29d1 net/ngbe: support EEPROM dump
Support to get and set device EEPROM data.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
4db3db296a net/ngbe: support device LED on/off
Support device LED on and off.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
f40e9f0e22 net/ngbe: support flow control
Support to get and set flow control.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
e2a289a788 net/ngbe: add mailbox process operations
Add checks for VF function-level reset, mailbox messages and
acks from the VF, and wait to process the messages.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
60229dcfc4 net/ngbe: support SR-IOV
Initialize and configure PF module to support SRIOV.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
0779d7f619 net/ngbe: support RSS hash
Support RSS hashing on Rx, and configuration of RSS hash computation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
dee93977a6 net/ngbe: support MAC filters
Add MAC addresses to filter incoming packets, support setting
multicast addresses for filtering, and support setting the unicast table array.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
d4a3fe694d net/ngbe: support loopback mode
Support loopback operation mode.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
506abd4a8b net/ngbe: support FW version query
Add firmware version get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
b83372a030 net/ngbe: support device promiscuous and allmulticast mode
Support to enable/disable promiscuous and allmulticast mode for a port.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
07baabb6a5 net/ngbe: support MTU set
Support updating port MTU.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
8b433d04ad net/ngbe: support device xstats
Add device extended stats, read from hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
fdb1e85197 net/ngbe: support basic statistics
Support reading and clearing basic statistics.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
59b46438fd net/ngbe: support VLAN offload and VLAN filter
Support setting VLAN and QinQ offload, and filtering on a VLAN tag
identifier.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
586e602837 net/ngbe: support jumbo frame
Add support for Rx jumbo frames.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
64b36e4af1 net/ngbe: support CRC offload
Support stripping or keeping the CRC in the Rx path.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
d148a87e69 net/ngbe: support Rx/Tx burst mode info
Support getting Rx/Tx burst mode info.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
9f32061402 net/ngbe: support TSO
Add transmit datapath with offloads, and support TCP segmentation
offload.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
ffc959f5b3 net/ngbe: support Rx checksum offload
Support IP/L4 checksum on Rx, and convert it to mbuf flags.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
79f3128d4d net/ngbe: support scattered Rx
Add scattered Rx function to support receiving segmented mbufs.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
f6aef1dacf net/ngbe: support packet type query
Add packet type macro definition and convert ptype to ptid.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Min Hu (Connor)
599ef84add net/hns3: fix mailbox communication with HW
The mailbox is the communication mechanism between SW and HW. There
are two approaches for SW to recognize a mailbox message from HW: one
uses match_id, the other compares the message code. The two approaches
are independent and used in different scenarios.

For the second approach, "next_to_use" must be updated and written to
the HW register. If this is not done, HW does not know how far SW has
consumed the ring, and the communication between SW and HW fails.

Fixes: dbbbad23e3 ("net/hns3: fix VF handling LSC event in secondary process")
Cc: stable@dpdk.org

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-10-29 17:47:04 +02:00
Volodymyr Fialko
2c3e50237c mempool/cnxk: postpone devargs parsing
Use the roc_npa_lf_init_cb_register() scheme to register a
callback for max_pools argument parsing.
This removes the dependency on the order in which PCI
devices are probed.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 16:09:25 +02:00
Volodymyr Fialko
80e1239e77 common/cnxk: support ROC NPA init callback
Add support for registering callback for ROC NPA init.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 16:09:24 +02:00
Volodymyr Fialko
c52dd15813 mempool/cnxk: fix max pools argument parsing
roc_idev_npa_maxpools_set() expects the original max_pools value,
not the AURA.

Fixes: 0a50a5aad2 ("mempool/cnxk: add device probe/remove")
Cc: stable@dpdk.org

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 16:09:21 +02:00
Nalla Pradeep
18f0606215 net/octeontx_ep: remove octeontx2 dependency
The octeontx_ep driver's dependency on the octeontx2 common code is
removed; going forward, the ep driver will include files from
its own path.

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 16:09:18 +02:00
Kiran Kumar K
699d172916 common/cnxk: update mailbox version to 0xb
Sync mailbox definition with AF kernel driver.

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 16:09:14 +02:00
Kiran Kumar K
1586f56086 common/octeontx2: update mailbox version to 0xb
Sync mailbox definition with AF kernel driver.

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-29 16:08:17 +02:00
Ivan Ilchenko
3c3c54cfa6 net/virtio: fix link update in speed feature
The link update callback reports speed/duplex based on data filled in
at device initialization. This is wrong when VIRTIO_NET_F_SPEED_DUPLEX
is negotiated, since the link could be down at that time. Fix this
function to actually refresh the HW data in this case, keeping in mind
that a speed specified via devarg has the highest priority.

Fixes: 1357b4b362 ("net/virtio: support Virtio link speed feature")
Cc: stable@dpdk.org

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-29 12:32:30 +02:00
Miao Li
327fcd2d38 net/vhost: support power monitor
Following the current semantics of power monitoring, this commit adds a
callback function that decides whether to abort the sleep by checking
the current value against the expected value, and vhost_get_monitor_addr
to provide the address to monitor. When no packets come in, the value at
the address does not change and the running core sleeps. Once packets
arrive, the value at the address changes and the running core wakes up.

Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
2021-10-29 12:32:29 +02:00
Miao Li
64ac7e08f6 net/virtio: support power monitor
Following the current semantics of power monitoring, this commit adds a
callback function that decides whether to abort the sleep by checking
the current value against the expected value, and virtio_get_monitor_addr
to provide the address to monitor. When no packets come in, the value at
the address does not change and the running core sleeps. Once packets
arrive, the value at the address changes and the running core wakes up.

Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
2021-10-29 12:32:29 +02:00
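A conceptual sketch of the monitor callback described in the two
power-monitor commits above (names and signature are illustrative, not
the actual vhost/virtio code):

    #include <stdint.h>

    /* The PMD hands the power library an address to watch plus a callback.
     * The callback compares the value read at wake-up with the value
     * captured before sleeping and reports whether new work arrived,
     * i.e. whether the sleep must be aborted. */
    static int
    monitor_abort_check(uint64_t cur_val, uint64_t expected_val)
    {
        return cur_val != expected_val; /* changed value -> packets arrived */
    }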
Maxime Coquelin
5aeb7fab59 net/mlx5: fix RSS RETA update
This patch fixes RETA updates for entries above 64.
Without it, these entries are never updated because the
calculated mask value is always 0.

Fixes: 634efbc2c8 ("mlx5: support RETA query and update")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-29 11:23:10 +02:00
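A sketch of the ethdev RETA update convention the fix relies on
(illustrative helper, not the mlx5 code): entries arrive in groups of
64 and the per-entry bit must be tested with a 64-bit shift, otherwise
the computed mask is zero for every entry beyond the first group.

    #include <stdint.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    walk_reta_update(const struct rte_eth_rss_reta_entry64 *reta_conf,
                     uint16_t reta_size)
    {
        for (uint16_t i = 0; i < reta_size; i++) {
            uint16_t grp = i / RTE_RETA_GROUP_SIZE;   /* 64 entries/group */
            uint16_t pos = i % RTE_RETA_GROUP_SIZE;

            if (reta_conf[grp].mask & (UINT64_C(1) << pos))
                printf("entry %u -> queue %u\n", i, reta_conf[grp].reta[pos]);
        }
    }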
Maxime Coquelin
0c9d662070 net/virtio: support RSS
Provide the capability to update the hash key, hash types
and RETA table on the fly (without needing to stop/start
the device). However, the key length and the number of RETA
entries are fixed to 40B and 128 entries respectively. This
is done in order to simplify the design, but may be
revisited later as the Virtio spec provides this
flexibility.

Note that only VIRTIO_NET_F_RSS support is implemented,
VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
packet RSS hash calculated by the device into mbuf.rss, is
not yet supported.

Regarding the default RSS configuration, it has been
chosen to use the default Intel ixgbe key as default key,
and default RETA is a simple modulo between the hash and
the number of Rx queues.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-29 11:23:10 +02:00
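A sketch of the default RETA layout described above (the macro name is
made up for illustration): 128 indirection entries, each pointing at Rx
queue i modulo the number of Rx queues.

    #include <stdint.h>

    #define DEFAULT_RETA_SIZE 128   /* fixed size stated in the commit */

    static void
    fill_default_reta(uint16_t *reta, uint16_t nb_rx_queues)
    {
        for (uint16_t i = 0; i < DEFAULT_RETA_SIZE; i++)
            reta[i] = i % nb_rx_queues;   /* simple modulo spreading */
    }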
Ajit Khaparde
ff5d251f7c net/bnxt: remove stale compilation option
Remove a stale compile option from meson build file.
RTE_LIBRTE_BNXT_TF sneaked in incorrectly.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-28 19:58:54 +02:00
Dapeng Yu
63741c99a6 net/ice: remove VSI update on DCF reset by PF
After the DCF is reset by the PF, the VSI update service cannot
complete since the DCF resource is invalid.

This patch removes the call to the service that updates the VSI, since
it is useless and outputs too many error messages.

Fixes: c7e1a1a3bf ("net/ice: refactor DCF VLAN handling")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-29 10:38:21 +02:00
Radu Nicolau
4bcfaf7316 net/iavf: add watchdog for VF FLR
Add a watchdog to the iAVF PMD that monitors the VFLR register. If
the device is not already in reset and a VF reset in progress is
detected, notify the user through a callback and move the device into
the reset state. If the device is already in reset, poll for completion
of the reset.

The watchdog is disabled by default; to enable it, set
IAVF_DEV_WATCHDOG_PERIOD to a non-zero value (in microseconds).

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:22:25 +02:00
Radu Nicolau
ccb49b834c net/iavf: support xstats for inline IPsec crypto
Add per-queue counters for maintaining statistics for the inline IPsec
crypto offload, which can be retrieved through
rte_security_session_stats_get(), with more detailed errors available
through the rte_ethdev xstats.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:22:15 +02:00
Radu Nicolau
6bc987ecb8 net/iavf: support IPsec inline crypto
Add support for inline crypto for IPsec: ESP transport and
tunnel over IPv4 and IPv6, as well as the offload for ESP over UDP,
and in conjunction with TSO for UDP and TCP flows.
Implement support for rte_security packet metadata.

Add definitions for IPsec descriptors and extend the offload support
in the data and context descriptors.

Add support to the virtual channel mailbox for IPsec crypto request
operations. IPsec crypto requests receive an initial acknowledgment
from the physical function driver confirming receipt of the request,
and then an asynchronous response with the success/failure of the
request, including any response data.

Add enhanced descriptor debugging.

Refactor the scalar Tx burst function to support integration of the offload.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:22:04 +02:00
Radu Nicolau
8410842505 net/iavf: support asynchronous virtual channel message
Add support for asynchronous virtual channel messages, specifically for
inline IPsec messages.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:19:57 +02:00
Radu Nicolau
1e728b0112 net/iavf: rework Tx path
Rework the Tx path and Tx descriptor usage in order to
allow for better use of offload flags and to facilitate enabling of
inline crypto offload feature.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:19:28 +02:00
Radu Nicolau
993f0d4d62 common/iavf: support IPsec inline crypto
Add support for inline crypto for IPsec.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-29 04:15:31 +02:00
Tejasree Kondoj
af5c990935 common/cnxk: fix build with -O1
Fix a build failure with EXTRA_CFLAGS='-O1'.

Fixes: d85f9749f9 ("common/cnxk: add hash generation API")

Reported-by: Longfeng Liang <longfengx.liang@intel.com>
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-10-28 14:54:59 +02:00
Kalesh AP
26ba9e7b91 net/bnxt: fix flow RSS failure handling
With commit 239695f754 ("net/bnxt: enhance RSS action support"),
when the bnxt_hwrm_vnic_rss_cfg() call failed, the driver did not set
the flow error using rte_flow_error_set().

Fixes: 239695f754 ("net/bnxt: enhance RSS action support")

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-28 06:23:11 +02:00
Ajit Khaparde
43e7d2a30d net/bnxt: refactor Rx ring cleanup for representors
Rx ring for representors does not use aggregation rings for Rx.
Instead they use simple software buffers for handling Rx packets.
So there is no need to use the same cleanup routine as done by
the non-representor code path.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-27 03:52:02 +02:00
Ajit Khaparde
df07aa22d1 net/bnxt: fix RSS action parser
Minor fixes are needed in the RTE_FLOW RSS action parser.
1. Update the comment in the parser to indicate RSS level 1 implies RSS
   on outer header.
2. RSS action will not be supported if level is > 1.
3. RSS action will not be supported if user or application specifies
   MARK or COUNT action.
4. If RSS types is not specified i.e., is 0, the best effort RSS should
   use IPv4 and IPv6 headers. Currently we are considering only IPv4.

Fixes: 239695f754 ("net/bnxt: enhance RSS action support")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-27 03:50:39 +02:00
Kalesh AP
e046deb244 net/bnxt: fix RSS behavior on Thor
Move the Rx queue state update before bnxt_setup_one_vnic()
is called. For Thor, rxq->rx_started and eth_dev->data->rx_queue_state[]
needs to be set for all queues before bnxt_hwrm_vnic_cfg() or
bnxt_vnic_rss_configure() are called.

Fixes: 0105ea1296 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-27 03:46:43 +02:00
Gregory Etelson
23b0a8b298 net/mlx5: fix integrity item validation and translation
Integrity item validation and translation must verify that integrity
item bits match L3 and L4 items in flow rule pattern.
For cases when integrity item was positioned before L3 header, such
verification must be split into two stages.
The first stage detects integrity flow item and makes initializations
for the second stage.
The second stage is activated after PMD completes processing of all
flow items in rule pattern. PMD accumulates information about flow
items in flow pattern. When all pattern flow items were processed,
PMD can apply that data to complete integrity item validation
and translation.

Fixes: 79f8952783 ("net/mlx5: support integrity flow item")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-28 10:14:39 +02:00
Gregory Etelson
06741117ec net/mlx5: fix integrity match on inner and outer headers
MLX5 PMD can match on integrity bits for inner and outer headers in
a single flow.
That means a single flow rule can reference both inner and outer
integrity bits. That is implemented by adding 2 flow integrity items
to a rule - one item for outer integrity bits and other for
inner integrity bits.
Integrity item `level` parameter specifies what part is being
targeted.

Current PMD treated integrity items for outer and inner headers as
the same.
The patch separates PMD verifications for inner and outer integrity
items.

Fixes: 79f8952783 ("net/mlx5: support integrity flow item")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-28 10:14:38 +02:00
Haifei Luo
a7ac7fae49 net/mlx5: enhance flow dump
Multiple rules could use the same encap_decap/modify_hdr/counter action,
so the flow dump data could be duplicated.

To avoid redundancy, the flow dump value is now keyed on the action's
pointer instead of the rule's pointer.

For counters, the data is stored in cmng of priv->sh.
For encap_decap/modify_hdr, the data is stored in encaps_decaps/modify_cmds.
Traverse these fields to get each action's pointer and information.

Formats for the information in the dump are unchanged, except that "id"
stands for the action's pointer:
    Counter:     rec_type,id,hits,bytes
    Modify_hdr:  rec_type,id,actions_number,actions
    Encap_decap: rec_type,id,buf

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-28 10:14:21 +02:00
Jiawei Wang
3c4338a421 net/mlx5: optimize device spawn time with representors
During the device spawn process, the mlx5 PMD queried the available
flow priorities by calling mlx5_flow_discover_priorities, queried
whether the DR drop action was supported on the root table by calling
the mlx5_flow_discover_dr_action_support routine, and queried the
availability of metadata register C by calling mlx5_flow_discover_mreg_c.

These functions created test flows to get the supported fields and
destroyed the test flows at the end. The test flows in the first two
functions were created on the root table.
If the device was spawned with multiple representors, these test flows
were created and destroyed on each representor as well. The above
operations took a significant amount of init time during the device
spawn.

This patch optimizes the device discover functions: if a device with
multiple representors (VF/SF) is being spawned, the priority, drop
action and metadata register support checks can be done only once and
the results can be shared for all representors.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-27 14:04:39 +02:00
Sean Zhang
53712685d7 common/mlx5: optimize debug log
Remove debug log inside of mlx5_list_init to avoid flooding debug
messages when creating hash list with large actual size.

Fixes: 9c373c524b ("common/mlx5: move list utility from net driver")
Cc: stable@dpdk.org

Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-26 17:16:17 +02:00
Rongwei Liu
7299ab6822 net/mlx5: support socket direct mode bonding
In socket direct mode, it is possible to bind any two (maybe four
in the future) PCIe devices with IDs like xxxx:xx:xx.x and
yyyy:yy:yy.y. Bonding member interfaces no longer need to have
the same PCIe domain/bus/device ID.

The kernel driver uses "system_image_guid" to identify whether devices
can be bound together or not. Sysfs "phys_switch_id" is used to get
the "system_image_guid" of each network interface.

OFED 5.4+ is required to support "phys_switch_id".

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-26 13:24:20 +02:00
Rongwei Liu
4c74ad3e16 common/mlx5: support PCIe device GUID query
sysfs entry "phys_switch_id" holds each PCIe device'
guid.

The devices which reside in the same physical NIC should
have the same guid.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-26 11:26:14 +02:00
Dapeng Yu
627b3c5a39 net/iavf: fix shared data in multi-process
The shared pointer is initialized to a static local array defined in the
primary process and it shall not be accessed in the secondary process.

This patch copies the local data to shared data, to avoid data access
violation.

Fixes: 040b44551f ("net/iavf: unify Rx packet type table")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-28 01:52:05 +02:00
Dapeng Yu
20b631efe7 net/ice: fix function pointer in multi-process
This patch saves the selection of the Receive Flex Descriptor profile
ID as an index and uses that index to call the function, instead of
saving a function pointer.

Otherwise, the secondary process would run with a wrong function
address taken from the primary process.

Fixes: 7a340b0b4e ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-27 05:29:39 +02:00
Dapeng Yu
b4f0d4ab66 net/ice: workaround DCF reset failure
After the DCF is reset by the PF, DCF device un-initialization cannot
function normally; ignoring the failure does not help since the kernel
does not clean up the resources.

This patch works around the issue by triggering an additional DCF
enable/disable cycle when a passive reset is detected.

Fixes: 1a86f4dbdf ("net/ice: support DCF device reset")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-27 05:25:25 +02:00
Ferruh Yigit
411878ba25 net/memif: fix driver init with default MTU
The driver used the Linux-defined 'ETH_FRAME_LEN' value as the max
frame length, which doesn't include the FCS (4-byte CRC). But ethdev by
default uses the frame size with FCS when the application doesn't set
an explicit value.

As a result, device configuration fails because the device is
configured with a frame size bigger than what the device reports as
supported: the device reports a max supported frame size of 1514 while
the configured value is 1518.

Instead, use the DPDK macro 'RTE_ETHER_MAX_LEN', which includes the
FCS, to report the max supported frame size in the driver; this matches
the initial intention.

Fixes: 1bb4a528c4 ("ethdev: fix max Rx packet length")

Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-27 17:48:51 +02:00
Ferruh Yigit
4e8a910719 net/af_packet: fix driver init with default MTU
The driver used the Linux-defined 'ETH_FRAME_LEN' value as the max
frame length, which doesn't include the FCS (4-byte CRC). But ethdev by
default uses the frame size with FCS when the application doesn't set
an explicit value.

As a result, device configuration fails because the device is
configured with a frame size bigger than what the device reports as
supported: the device reports a max supported frame size of 1514 while
the configured value is 1518.

Instead, use the DPDK macro 'RTE_ETHER_MAX_LEN', which includes the
FCS, to report the max supported frame size in the driver; this matches
the initial intention.

Fixes: 1bb4a528c4 ("ethdev: fix max Rx packet length")

Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-27 17:44:49 +02:00
Wojciech Liguzinski
44c730b0e3 sched: add PIE based congestion management
Implement PIE-based congestion management based on RFC 8033.

The Proportional Integral Controller Enhanced (PIE) algorithm works
by proactively dropping packets at random.
PIE is implemented because more advanced queue management is required
to address the bufferbloat problem and provide desirable quality of
service to users.

Tests for the PIE code are added to the test application,
and PIE-related information is added to the documentation.

Signed-off-by: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
2021-11-04 15:41:49 +01:00
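A simplified sketch of the RFC 8033 control law behind PIE (not the
librte_sched implementation; auto-tuning of alpha/beta and the burst
allowance are omitted): the drop probability is adjusted periodically
from the current and previous queueing-delay estimates, and packets are
then dropped with that probability on enqueue.

    #include <stdlib.h>

    struct pie_state {
        double drop_prob;   /* current drop probability, 0..1 */
        double qdelay_old;  /* queueing delay at the previous update */
    };

    static void
    pie_update(struct pie_state *p, double qdelay, double target,
               double alpha, double beta)
    {
        p->drop_prob += alpha * (qdelay - target) +
                        beta * (qdelay - p->qdelay_old);
        if (p->drop_prob < 0.0)
            p->drop_prob = 0.0;
        else if (p->drop_prob > 1.0)
            p->drop_prob = 1.0;
        p->qdelay_old = qdelay;
    }

    static int
    pie_should_drop(const struct pie_state *p)
    {
        return (double)rand() / RAND_MAX < p->drop_prob;
    }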
David Marchand
f2777b53b1 bus/pci: fix use after free on unplug
rte_pci_unmap_device() needs intr_handle objects to unregister
callbacks.

Bugzilla ID: 845
Fixes: d61138d4f0 ("drivers: remove direct access to interrupt handle")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Tested-by: Yan Xia <yanx.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-11-04 15:13:41 +01:00
Ady Agbarih
b5832a0d15 regex/mlx5: prevent double setup of queue pair
When mlx5_regex_qp_setup() is called, make sure
the provided QP is not already setup.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:15:10 +01:00
Francis Kelly
02179f82b9 regex/mlx5: remove RXP CSR file
The mlx5_rxp_csrs.h file has been deprecated as
its contents have now been moved to FW.

Signed-off-by: Francis Kelly <fkelly@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:14:54 +01:00
Ady Agbarih
7281f194fb regex/mlx5: remove engine start/stop commands
Remove the engine start/stop DevX commands,
as they have been deprecated and moved to FW.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
2021-11-03 23:14:51 +01:00
Ady Agbarih
9fa82d287f regex/mlx5: move RXP to CrSpace
Program the regex database through the ROF file using the firmware,
instead of manually through the software.
There is no need to set up the DB anymore; the regex-daemon is always
responsible for that.
In the new flow the regex driver only has to program the ROF rules
using the set-params DevX command, which requires ROF mkey creation.
The rules file has to be read into 4KB-aligned memory.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:14:48 +01:00
Ady Agbarih
ab2e0b0d35 regex/mlx5: remove register read/write
Remove the set/query regexp register commands from DevX.
Remove functions that used these commands.
Remove manual rules programming.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:14:45 +01:00
Ady Agbarih
1663c1405a common/mlx5: update regex DevX commands
This patch modifies the SET_REGEXP_PARAMS DevX command as follows:

Remove the DB setup DevX command. The command is no longer needed
in DPDK; it will always be invoked by the regex-daemon.

Add a new DevX command for programming ROF rules for a specific engine.
The command takes an mkey of the ROF as input.
It also introduces a new field_select bit.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:14:29 +01:00
Ori Kam
fe37533668 regex/mlx5: add cleanup on stop
When stopping the device we should release all
data allocated.

After rte_regexdev_configure(), the QPs are pre-allocated,
and will be configured only in rte_regexdev_queue_pair_setup().
That's why the QP jobs array initialization is checked
before attempting to destroy the QP.

Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
2021-11-03 23:14:24 +01:00
Ady Agbarih
2044860ebd common/mlx5: update PRM definitions for regex
Update PRM hca capabilities definitions as follows:
regexp_version field added - specifies whether BF2 or BF3
regexp field removed
regexp_params field moved
regexp_log_crspace_size field removed
regexp_mmo added - specifies if using regex mmo wqe is supported

Allow regex only if both regexp_params and regexp_mmo are set,
instead of checking regexp_mmo only.

Check version through the new capability field regexp_version instead
of reading crspace register.

Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-11-03 23:14:19 +01:00
David Marchand
f88b0b8922 devtools: forbid indent with tabs in Meson
The rule for indentation in Meson in DPDK is 4 spaces.

Any tab should be flagged as an issue, let's extend the check and fix
existing offenders.

Fixes: 4ad4b20a79 ("drivers: change indentation in build files")
Fixes: 2457705e64 ("crypto/cnxk: add driver skeleton")
Fixes: 634b731044 ("app/testpmd: build on Windows")
Fixes: 3a6bfc37ea ("net/ice: support QoS config VF bandwidth in DCF")
Fixes: 8ef09fdc50 ("build: add optional NUMA and CPU counts detection")
Fixes: e1369718f5 ("common/octeontx: enable build only on 64-bit Linux")
Fixes: 2b504721bf ("app/bbdev: enable la12xx")
Fixes: 6cc51b1293 ("mem: instrument allocator for ASan")
Fixes: c75542ae42 ("crypto/ipsec_mb: introduce IPsec_mb framework")
Fixes: 918fd2f146 ("crypto/ipsec_mb: move aesni_mb PMD")
Fixes: 746825e5c0 ("crypto/ipsec_mb: move aesni_gcm PMD")
Fixes: bc9ef81c42 ("crypto/ipsec_mb: move kasumi PMD")
Fixes: 4f1cfda59a ("crypto/ipsec_mb: move snow3g PMD")
Fixes: cde8df1bda ("crypto/ipsec_mb: move zuc PMD")
Fixes: f166628854 ("crypto/ipsec_mb: add chacha_poly PMD")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
2021-11-02 19:25:30 +01:00
David Marchand
eb89595d45 bus/pci: resize interrupt event list only for MSIX
Resizing the event list only makes sense in the MSI-X case.

Besides, the event list has always been RTE_MAX_RXTX_INTR_VEC_ID
entries large. Restore this assumption for code that might rely on this
property and only enlarge the event list when necessary.

Bugzilla ID: 843, 865
Fixes: 8cb5d08db9 ("interrupts: extend event list")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Harman Kalra <hkalra@marvell.com>
2021-10-29 14:40:46 +02:00
Kevin Laatz
452c1916b0 dma/idxd: fix truncated error code in status check
When checking whether the DMA device is active, the result of the
operation is always zero, since err_code is truncated to 8 bits, which
makes checking the 31st bit impossible.

This is fixed by changing the type of err_code to uint32_t so that it
is not truncated.

Coverity issue: 373657
Fixes: 9449330a84 ("dma/idxd: create dmadev instances on PCI probe")

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-27 17:01:56 +02:00
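A standalone illustration of the truncation the fix removes (not the
driver code): an 8-bit variable cannot hold bit 31 of a 32-bit status
register, so a check against (1u << 31) is always false.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t status_reg = 0x80000001u;     /* bit 31 set by hardware */
        uint8_t  err8  = (uint8_t)status_reg;  /* truncated, bit 31 lost */
        uint32_t err32 = status_reg;           /* full width preserved */

        printf("truncated check: %d, full check: %d\n",
               (err8 & (1u << 31)) != 0, (err32 & (1u << 31)) != 0);
        return 0;
    }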
Harman Kalra
8cb5d08db9 interrupts: extend event list
Dynamically allocate the efds and elist arrays of the intr_handle
structure, based on a size provided by the user, e.g. the number of
MSI-X interrupts supported by a PCI device.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tested-by: Raslan Darawsheh <rasland@nvidia.com>
2021-10-25 21:20:12 +02:00
Harman Kalra
d61138d4f0 drivers: remove direct access to interrupt handle
Remove direct access to the interrupt handle structure fields and use
the respective get/set APIs instead.
Change all the drivers that access the interrupt handle fields.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Tested-by: Raslan Darawsheh <rasland@nvidia.com>
2021-10-25 21:20:12 +02:00
Honnappa Nagarahalli
f6c6c686f1 eal: remove FINISHED lcore state
FINISHED state seems to be used to indicate that the worker's update
of the 'state' is not visible to other threads. There seems to be no
requirement to have such a state.

Since the FINISHED state is removed, the API rte_eal_wait_lcore
is updated to always return the status of the last function that
ran in the worker core.

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-25 18:20:59 +02:00
Olivier Matz
daa02b5cdd mbuf: add namespace to offload flags
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-24 13:37:43 +02:00
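An example of the renaming pattern this commit applies (a sketch
assuming the new RTE_MBUF_F_* names; the old PKT_* names remain as
deprecated aliases):

    #include <rte_mbuf.h>

    /* Old spelling: m->ol_flags |= PKT_TX_IP_CKSUM; */
    static void
    request_ip_cksum_offload(struct rte_mbuf *m)
    {
        m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;  /* namespaced flag */
    }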
Olivier Matz
5b63493241 mbuf: mark old VLAN offload flags as deprecated
The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT are
marked as deprecated since commit 380a7aab1a ("mbuf: rename deprecated
VLAN flags") (2017). But they were not using the RTE_DEPRECATED
macro, because it did not exist at this time. Add it, and replace
usage of these flags.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-24 13:30:40 +02:00
Raja Zidane
2efd265445 compress/mlx5: support partial transformation
Currently, compress, decompress and DMA are allowed
only when all three capabilities are on.
A case where the user wants the decompress offload, and the
decompress capability is on but one of compress or DMA is off,
is not allowed.
Split the compress/decompress/DMA support check to allow
partial transformations.

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-20 16:01:45 +02:00
Anoob Joseph
fd390896f4 crypto/cnxk: allow different cores in pending queue
Rework pending queue to allow producer and consumer cores to be
different.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
2021-10-20 15:56:46 +02:00
Anoob Joseph
a455fd869c common/cnxk: align CPT queue depth to power of 2
Use a CPT LF queue depth that is a power of 2 to enable masked checks
for the pending queue.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-20 15:56:46 +02:00
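The arithmetic motivation, as a generic ring-index sketch (not the cnxk
code): with a power-of-two depth, wrap-around becomes a cheap bit mask
instead of a modulo or compare.

    #include <stdint.h>

    static inline uint32_t
    ring_next(uint32_t idx, uint32_t depth_pow2)
    {
        return (idx + 1) & (depth_pow2 - 1);  /* valid only when depth is 2^n */
    }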
Akhil Goyal
92cb130919 cryptodev: move device-specific structures
The device-specific structures rte_cryptodev
and rte_cryptodev_data are moved to cryptodev_pmd.h
to hide them from applications.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-10-20 15:33:16 +02:00
Akhil Goyal
d54c72ec15 drivers/crypto: invoke probing finish function
Invoke the event_dev_probing_finish() function at the end of probing;
this function sets the function pointers in the fp_ops flat array
in the case of a secondary process.
For the primary process, fp_ops is updated in rte_cryptodev_start().

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-20 15:33:16 +02:00
Akhil Goyal
beb4c305b3 crypto/scheduler: use proper API for device start/stop
The worker PMDs were using direct device start/stop
functions rather than rte_cryptodev_start(),
so rte_crypto_fp_ops never got set. This patch calls
the rte_cryptodev start and stop APIs, which start and
stop devices properly so that fp_ops gets set.

Reported-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-10-20 15:33:16 +02:00
Kai Ji
f166628854 crypto/ipsec_mb: add chacha_poly PMD
Add in new chacha20_poly1305 PMD to the ipsec_mb framework.

Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 15:33:16 +02:00
Piotr Bronowski
cde8df1bda crypto/ipsec_mb: move zuc PMD
This patch removes the crypto/zuc folder and gathers all zuc PMD
implementation specific details into two files,
pmd_zuc.c and pmd_zuc_priv.h in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 15:32:36 +02:00
Piotr Bronowski
5208d68d30 crypto/ipsec_mb: support snow3g digest appended ops
This patch enables out-of-place auth-cipher operations where
digest should be encrypted along with the rest of raw data.
It also adds support for partially encrypted digest when using
auth-cipher operations.

Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
4f1cfda59a crypto/ipsec_mb: move snow3g PMD
This patch removes the crypto/snow3g folder and gathers all snow3g PMD
implementation specific details into a single file,
pmd_snow3g.c in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
bc9ef81c42 crypto/ipsec_mb: move kasumi PMD
This patch removes the crypto/kasumi folder and gathers all kasumi PMD
implementation specific details into a single file,
pmd_kasumi.c in the crypto/ipsec_mb folder.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
746825e5c0 crypto/ipsec_mb: move aesni_gcm PMD
This patch removes the crypto/aesni_gcm folder and gathers all
aesni-gcm PMD implementation specific details into a single file,
pmd_aesni_gcm.c in the crypto/ipsec_mb folder.
A redundant check for iv length is removed.

GCM ops are stored in the queue pair for multi process support, they
are updated during queue pair setup for both primary and secondary
processes.

GCM ops are also set per lcore for the CPU crypto mode.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Pablo de Lara
8c835018de crypto/ipsec_mb: support ZUC-256 for aesni_mb
Add support for ZUC-EEA3-256 and ZUC-EIA3-256.
Only 4-byte tags supported for now.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Piotr Bronowski
918fd2f146 crypto/ipsec_mb: move aesni_mb PMD
This patch removes the crypto/aesni_mb folder and gathers all
aesni-mb PMD implementation specific details into a single file,
pmd_aesni_mb.c in crypto/ipsec_mb.

Now that intel-ipsec-mb v1.0 is the minimum supported version, old
macros can be replaced with the newer macros supported by this version.

Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Ciara Power
72a169278a crypto/ipsec_mb: support multi-process
The ipsec_mb SW PMD now has multiprocess support.
The queue-pair IMB_MGR is stored in a memzone instead of being allocated
externally by the Intel IPSec MB library, when v1.1 is used.
If v1.0 is used, multi process is not supported, and allocation is
done as before.
The secondary process needs to reconfigure the queue-pair to allow for
IMB_MGR function pointers be updated.

Intel IPsec MB library version 1.1 is required for this support.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Fan Zhang
c75542ae42 crypto/ipsec_mb: introduce IPsec_mb framework
This patch introduces the new framework to share common code between
the SW crypto PMDs that depend on the intel-ipsec-mb library.
This change helps to reduce future effort on the code maintenance and
feature updates.

The PMDs that will be added to this framework in subsequent patches are:
  - AESNI MB
  - AESNI GCM
  - CHACHA20_POLY1305
  - KASUMI
  - SNOW3G
  - ZUC

The use of these PMDs will not change, they will still be supported for
x86, and will use the same EAL args as before.

The minimum required version for the intel-ipsec-mb library is now v1.0.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-20 12:06:01 +02:00
Ferruh Yigit
295968d174 ethdev: add namespace
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-22 18:15:38 +02:00
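An example of the renaming pattern (a sketch assuming the RTE_ETH_-prefixed
names; the old macros stay as compatibility aliases until they are removed):

    #include <rte_ethdev.h>

    /* Old spellings: ETH_MQ_RX_RSS, DEV_RX_OFFLOAD_IPV4_CKSUM */
    static void
    enable_rss(struct rte_eth_conf *conf)
    {
        conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
        conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
    }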
Ferruh Yigit
ede6356582 drivers/net: fix removing jumbo offload flag
After the DEV_RX_OFFLOAD_JUMBO_FRAME flag was removed, drivers make
jumbo frame decisions based on MTU value checks, but some of the checks
were wrong by mistake, causing device initialization to fail; fix them.

Fixes: b563c14212 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
2021-10-22 17:44:18 +02:00
Ciara Loftus
985e7673c0 net/af_xdp: fix max Rx packet length
Commit 1bb4a528c4 ("ethdev: fix max Rx packet length") clarified the
expected usage of the max_rx_pktlen and max_mtu values and implemented
some extra checks on these values to ensure they are sane. After this,
the AF_XDP PMD fails to initialise. The value for max_rx_pktlen which
represents the max size of the Ethernet frame was set to ETH_FRAME_LEN
(1514) and the max_mtu which represents the size of the payload was set
to the max size of the Ethernet frame. This did not make sense, as
naturally the maximum frame size should be greater than the payload
size.

Fix this by setting the max_rx_pktlen equal to the max size of the
Ethernet frame as expected, and the max MTU equal to the max_rx_pktlen
less the overhead which is set to the size of an Ethernet header plus
CRC.

Fixes: 1bb4a528c4 ("ethdev: fix max Rx packet length")

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-22 17:12:50 +02:00
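The arithmetic the fix describes, spelled out with the standard
rte_ether constants (illustrative helper, not the PMD code): the
reported MTU is the max frame length minus the L2 overhead.

    #include <stdint.h>
    #include <rte_ether.h>

    static uint32_t
    frame_len_to_mtu(uint32_t max_rx_pktlen)
    {
        /* e.g. 1518 (RTE_ETHER_MAX_LEN) - 14 (header) - 4 (CRC) = 1500 */
        return max_rx_pktlen - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;
    }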
Chengchang Tang
2fc3e696a7 net/hns3: add runtime config for mailbox limit time
Currently, the max waiting time for an MBX response is 500ms, but in
some scenarios it is not enough, since it depends on the response of
the kernel mode driver, and that response time is related to the
scheduling of the system. In one such scenario, most of the cores are
isolated and only a few cores are used for system scheduling. When a
large number of services are started, system scheduling becomes very
busy, the reply to the mbx message times out, and PMD initialization
fails.

This patch adds a runtime config to set the max wait time. For the
above scenario, users can adjust the waiting time to a suitable value
themselves.

Fixes: 463e748964 ("net/hns3: support mailbox")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-10-22 04:11:43 +02:00
Satheesh Paul
00ea15e7a3 net/cnxk: support port ID flow action
This patch adds support for rte flow action type port_id to
enable directing packets from an input port PF to an output
port which is a VF of the input port PF.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-21 18:59:40 +02:00
Satheesh Paul
15f0b8a5b9 common/cnxk: support port ID action
This patch adds ROC API to support flow port ID action type.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-21 18:58:50 +02:00
Xuan Ding
ad6f01945a net/virtio: fix avail descriptor ID
Vhost advances the descriptor's Buffer ID to the next used descriptor
when the VIRTIO_F_IN_ORDER feature is negotiated. When virtio reuses the
descriptor, the Buffer ID should be restored even when the
VIRTQ_DESC_F_INDIRECT feature is negotiated.

Fixes: b473061b0e ("net/virtio: fix indirect descriptors in packed datapaths")
Cc: stable@dpdk.org

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Yong Liu <yong.liu@intel.com>
Signed-off-by: Miao Li <miao.li@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:21 +02:00
Gaoxiang Liu
028f06e8be net/vhost: merge stats loop in datapath
To improve performance in vhost Tx/Rx, merge the vhost stats loops.
eth_vhost_tx has two loops iterating over the number of sent packets;
they can be merged into one.
eth_vhost_rx has the same issue as Tx.

Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:21 +02:00
Xuan Ding
04bcc80204 net/virtio: fix indirect descriptor reconnection
Add initialization for packed ring indirect descriptors
in reconnection path.

Fixes: 381f39ebb7 ("net/virtio: fix packed ring indirect descricptors setup")
Cc: stable@dpdk.org

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:21 +02:00
Ivan Malov
6474b59448 net/virtio: fix Tx checksum for tunnel packets
Tx prepare method calls rte_net_intel_cksum_prepare(), which
handles tunnel packets correctly, but Tx burst path does not
take tunnel presence into account when computing the offsets.

Fixes: 58169a9c81 ("net/virtio: support Tx checksum offload")
Cc: stable@dpdk.org

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
2021-10-21 14:24:21 +02:00
Marvin Liu
99ebada2d6 net/virtio: fix oversized packets in vectorized Rx
If the packed ring size is not a power of two, it is possible that the
remaining descriptor count is less than one batch while the batch
operation still passes. This causes an incorrect remaining-count
calculation and then leads to receiving oversized packets. The patch
fixes the issue by adding a remaining-count check before the batch
operation.

Fixes: 77d66da838 ("net/virtio: add vectorized packed ring Rx")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:21 +02:00
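The generic shape of the guard the fix adds (illustrative, not the
virtio vectorized code): only take the batch path when at least one
full batch of descriptors remains, otherwise fall back to the scalar
path.

    #include <stdint.h>

    #define RX_BATCH 4   /* illustrative batch size */

    static inline int
    can_rx_batch(uint16_t remaining)
    {
        return remaining >= RX_BATCH;   /* otherwise use scalar receive */
    }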
Xueming Li
8011a09add vdpa/mlx5: retry VAR allocation during vDPA restart
VAR is the device memory space for the virtio queue doorbells;
Qemu can mmap it directly to speed up doorbell pushes.

On a busy system, Qemu takes time to release VAR resources during driver
shutdown. If vDPA is restarted quickly, the VAR allocation fails with
error 28 since the VAR is a singleton resource per device.

This patch adds a retry mechanism for VAR allocation.

Fixes: 4cae722c1b ("vdpa/mlx5: move virtual doorbell alloc to probe")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:21 +02:00
Xueming Li
d38a53b175 vdpa/mlx5: workaround FW first completion in start
After a vDPA application restart, Qemu restores the VQ with its used
and available indexes, and a new incoming packet triggers the virtio
driver to handle buffers. Under heavy traffic there may be no available
buffer for the firmware to receive new packets, so no Rx interrupts are
generated and the driver is stuck endlessly waiting for an interrupt.

As a firmware workaround, this patch sends a notification after
VQ setup to ask the driver to handle buffers and fill in new buffers.

Fixes: bff7350110 ("vdpa/mlx5: prepare virtio queues")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:21 +02:00
Zhihong Peng
84cc857b5d net/virtio: fix check scatter on all Rx queues
This patch fixes the wrong way of obtaining the virtqueue.
The end of the virtqueue list cannot be determined
based on whether an array entry is NULL.

Fixes: 4e8169eb0d ("net/virtio: fix Rx scatter offload")
Cc: stable@dpdk.org

Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-10-21 14:24:13 +02:00
Ting Xu
f5ec6a3a19 net/ice: fix TM hierarchy commit flag reset
After the DCF commits the TM hierarchy configuration, a commit flag is
set to avoid a duplicated commit. But the flag is not reset after device
stop, which prevents updating the hierarchy configuration unless the
device is closed, which is not reasonable. This patch resets the commit
flag after device stop, so that users can delete and add nodes to
commit a new TM hierarchy configuration.

Fixes: 3a6bfc37ea ("net/ice: support QoS config VF bandwidth in DCF")
Cc: stable@dpdk.org

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-21 13:32:26 +02:00
William Tu
d1c7029a52 net/e1000: build on Windows
This patch enables building the e1000 driver for Windows.
I tested using two Windows VMs on top of VMware Fusion,
creating two e1000 devices with device ID 0x10D3 (82574L),
and verified that Rx/Tx works correctly using dpdk-testpmd.exe
in rxonly and txonly modes.

Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
Tested-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tested-by: Pallavi Kadam <pallavi.kadam@intel.com>
2021-10-21 04:58:40 +02:00
Tudor Cornea
2108930be1 net/ixgbe: fix port initialization if MTU config fails
On a VMware ESXi 6.0 setup with an Intel 82599 NIC the ports don't
seem to initialize anymore, while running testpmd.

Configuring Port 0 (socket 0)
ixgbevf_dev_rx_init(): Set max packet length to 1518 failed.
ixgbevf_dev_start(): Unable to initialize RX hardware (-22)
Fail to start port 0: Invalid argument
Configuring Port 1 (socket 0)
ixgbevf_dev_rx_init(): Set max packet length to 1518 failed.
ixgbevf_dev_start(): Unable to initialize RX hardware (-22)
Fail to start port 1: Invalid argument
Please stop the ports first

If the call to ixgbevf_rlpml_set_vf fails and we return prematurely,
we will not be able to initialize the ports correctly.

The behavior seems to have changed since the following commit:

Fixes: c77866a169 ("net/ixgbe: detect failed VF MTU set")
Cc: stable@dpdk.org

We can make this particular use case work correctly if we don't
return an error, which seems to be consistent with the overall
kernel ixgbevf implementation.

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c?h=v5.14#n2015

Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-21 04:56:06 +02:00
Rongwei Liu
a89f6433aa net/mlx5: set Tx queue affinity in round-robin
Previously, we set the txq affinity to 0 and let the firmware
perform round-robin when bonding. The firmware uses a global
counter to assign txq affinity to the different physical ports
according to the remainder after division.

There are three disadvantages:
1. The global counter is shared between the kernel and DPDK.
2. After restarting the PMD or port, the previous counter value
is reused, so the new affinity is unpredictable.
3. There is no way to query which affinity is set by the firmware.

In this update, we create several TISs, up to the number of
bonding ports, and bind each TIS to one PF port.

Each port starts picking up TISs using its port index, so an
upper-layer application can quickly calculate each txq's affinity
without querying.

At the DPDK layer, when creating txqs with 2 bonding ports, the
affinity is set like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2

Note: this is only applicable to the DevX API, and the affinity is
subject to the HW hash.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-21 12:37:00 +02:00
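A sketch of the assignment rule described above (illustrative helper,
not the PMD code); under the stated assumption that each port starts
picking TISs from its own port index, it reproduces the example table
for two bonding ports.

    static inline unsigned int
    txq_affinity(unsigned int port_id, unsigned int txq_idx,
                 unsigned int nb_bond_ports)
    {
        /* cycle 1..nb_bond_ports, starting from the port index */
        return (port_id + txq_idx) % nb_bond_ports + 1;
    }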
Rongwei Liu
cf5ac38d51 common/mlx5: add LAG context query
Add a new function, mlx5_devx_cmd_query_lag(), to query LAG
properties from the firmware, including state/affinity/mode etc.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-21 12:36:57 +02:00
Dmitry Kozlyuk
ea823b2c51 net/mlx5: close tools socket with last device
MLX5 PMD exposes a socket for external tools to dump port state.
Socket events are listened using an interrupt source of EXT type.
The socket was closed and the interrupt callback was unregistered
at program exit, which is incorrect because DPDK could be already
shut down at this point. Move actions performed at program exit
to the moment the last MLX5 port is closed. The socket will be opened
again if later a new MLX5 device is plugged in and probed.
Also fix comments that misleadingly talked
about secondary processes instead of external tools.

Fixes: e6cdc54cc0 ("net/mlx5: add socket server for external tools")
Cc: stable@dpdk.org

Reported-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-10-21 10:31:53 +02:00
Dmitry Kozlyuk
9ec1ceab76 net/mlx5: fix Rx queue resource cleanup
mlx5_rxq_start() allocates rxq_ctrl->obj and frees it on failure,
but did not set it to NULL. Later mlx5_rxq_release() could not recognize
this object is already freed and attempted to release its resources,
resulting in a crash:

    Configuring Port 0 (socket 0)
    mlx5_common: Failed to create RQ using DevX
    mlx5_common: Can't create DevX RQ object.
    mlx5_net: Port 0 Rx queue 0 RQ creation failure.
    Segmentation fault

Set rxq_ctrl->obj to NULL after it is freed to skip resource release.

Fixes: 1260a87b28 ("net/mlx5: share Rx control code")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-21 09:31:17 +02:00
Bing Zhao
273b09376c net/mlx5: fix meter yellow policy with RSS action
The RSS configuration in a policy action container was a pointer
inside a union, and the pointer area could be used by another fate
action. In the current implementation, the RSS of the green color
took precedence over that of the yellow color. There was a high
probability that the pointer would be treated as the RSS and result
in an erroneous flow expansion when only the yellow color had the
RSS action.

The fate action type should also be checked to get rid of the
misjudgment.
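
Conceptually, the fix boils down to consulting the fate-action type
before interpreting the union member as an RSS pointer; a minimal
sketch with hypothetical types:

    /* Hypothetical sketch: only read the union member as an RSS pointer
     * when the recorded fate-action type says it holds one. */
    #include <stddef.h>

    enum fate_action_type { FATE_NONE, FATE_RSS, FATE_QUEUE, FATE_PORT };

    struct policy_action_example {
            enum fate_action_type fate_type;
            union {
                    const void *rss_conf;  /* valid only when fate_type == FATE_RSS */
                    unsigned int queue_id; /* valid only when fate_type == FATE_QUEUE */
            };
    };

    static const void *
    policy_get_rss(const struct policy_action_example *act)
    {
            /* Checking the type avoids treating another fate action as RSS. */
            return act->fate_type == FATE_RSS ? act->rss_conf : NULL;
    }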

Fixes: b38a12272b ("net/mlx5: split meter color policy handling")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-21 09:31:15 +02:00
Xueming Li
614966c2fa net/mlx5: check DevX to support more Verbs ports
The Verbs API doesn't support device port numbers larger than 255 by design.

To support more VF or SubFunction port representors, force the DevX API
check when the maximum number of Verbs device link ports is larger than 255.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:14 +02:00
Xueming Li
686d05b60d net/mlx5: enable DevX Tx queue creation
The Verbs API does not support InfiniBand device port numbers larger
than 255 by design. To support more representors on a single InfiniBand
device, the DevX API should be engaged.

While creating a Send Queue (SQ) object with the Verbs API, the PMD
assigned the IB device port attribute and the kernel created the default
miss flows in the FDB domain, to redirect egress traffic from the queue
being created to the representor's appropriate peer (wire, HPF, VF or SF).

With the DevX API there is no IB device port attribute (it is purely a
kernel concept; DevX operates in PRM terms), and the PMD must create the
default miss flows in the FDB explicitly. The PMD did not provide this,
so using the DevX API for E-Switch configurations was disabled.

The default miss FDB flow matches the E-Switch manager vport (to make
sure the source is some representor) and SQn (Send Queue number, a
device-internal queue index). The root flow table is managed by
kernel/firmware and does not support the vport redirect action, so the
default miss flow has to be split into two (a conceptual sketch follows
the list below):

- a flow with the lowest priority in the root table that matches the
E-Switch manager vport ID and jumps to group 1.
- a flow in group 1 that matches the E-Switch manager vport ID and SQn
and forwards the packet to the peer vport.
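
For illustration only, a rough sketch of the two-level split expressed
with the public rte_flow API; the PMD actually programs these miss
flows internally through DevX, and the SQn match uses a PMD-internal
item with no public equivalent, so it is omitted here:

    /* Conceptual sketch: a root-table (group 0) flow jumps to group 1,
     * where a second flow redirects to the peer port. Priorities and the
     * port-ID-based match are placeholders for the real DevX objects. */
    #include <rte_flow.h>

    static int
    create_default_miss_flows(uint16_t port_id, uint32_t esw_mgr_port,
                              uint32_t peer_port)
    {
            struct rte_flow_error err;
            const struct rte_flow_item_port_id esw_mgr = { .id = esw_mgr_port };
            const struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_PORT_ID, .spec = &esw_mgr },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            /* Flow 1: lowest usable priority in the root table, jump to group 1. */
            const struct rte_flow_attr root_attr = {
                    .group = 0, .priority = 15, .transfer = 1,
            };
            const struct rte_flow_action_jump jump = { .group = 1 };
            const struct rte_flow_action jump_actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            /* Flow 2: in group 1, forward to the representor's peer port. */
            const struct rte_flow_attr grp1_attr = {
                    .group = 1, .priority = 0, .transfer = 1,
            };
            const struct rte_flow_action_port_id to_peer = { .id = peer_port };
            const struct rte_flow_action fwd_actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &to_peer },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            if (rte_flow_create(port_id, &root_attr, pattern, jump_actions, &err) == NULL)
                    return -1;
            if (rte_flow_create(port_id, &grp1_attr, pattern, fwd_actions, &err) == NULL)
                    return -1;
            return 0;
    }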

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:13 +02:00
Xueming Li
ebe9afedc7 net/mlx5: fix internal root table flow priority
When creating an internal transfer flow on the root table with the
lowest priority, the flow was created with the maximum priority
UINT32_MAX. This is wrong since the flow is created in the kernel and
the maximum priority supported there is 16.

This patch fixes this by adding an internal flow check.
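
The essence of the added check, as a hedged sketch (the constant and
names are illustrative):

    /* Illustrative only: flows installed in the kernel-managed root table
     * must not exceed the kernel's maximum priority, while non-root tables
     * may use the enlarged range. */
    #include <stdbool.h>
    #include <stdint.h>

    #define ROOT_TABLE_MAX_PRIORITY 16u /* assumed kernel limit */

    static uint32_t
    adjust_flow_priority(bool is_root_table, uint32_t requested)
    {
            if (is_root_table && requested > ROOT_TABLE_MAX_PRIORITY)
                    return ROOT_TABLE_MAX_PRIORITY;
            return requested;
    }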

Fixes: 5f8ae44dd4 ("net/mlx5: enlarge maximal flow priority")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:12 +02:00
Xueming Li
d9020f2577 net/mlx5: support flow item of normal Tx queue
Extend the txq flow pattern to support both hairpin and regular txq.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:11 +02:00
Xueming Li
a564038699 net/mlx5: support E-Switch manager egress traffic match
For an egress packet on a representor, the vport ID in the transport
domain is the E-Switch manager vport ID, since the representor shares
the resources of the E-Switch manager. The E-Switch manager vport ID
and the Tx queue internal device index are used to match representor
egress packets.

This patch adds a flow item port ID match on the E-Switch manager.

The E-Switch manager vport ID is 0xfffe on BlueField, 0 otherwise.
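
For reference, the vport ID selection described above as a tiny hedged
sketch (the helper is illustrative, not the PMD API):

    /* Illustrative helper: E-Switch manager vport ID used for the egress
     * match, per the commit message. */
    #include <stdbool.h>
    #include <stdint.h>

    static inline uint16_t
    esw_manager_vport_id(bool is_bluefield)
    {
            return is_bluefield ? 0xfffe : 0;
    }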

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:10 +02:00
Xueming Li
1d47e9335e net/mlx5: improve Verbs flow priority discovery
To detect the number of Verbs flow priorities, the PMD tries to create
Verbs flows at different priorities, but Verbs is not designed to
support port numbers larger than 255.

When DevX is supported by the kernel driver, 16 Verbs priorities must
be supported, so there is no need to create the probe Verbs flows.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:09 +02:00
Xueming Li
3fd2961efa net/mlx5: use Netlink when IB port greater than 255
The IB spec doesn't allow more than 255 ports on a single HCA; a port
number of 256 was cast to the u8 value 0, which is invalid for
ibv_query_port().

This patch invokes the Netlink API to query the port state when the
port number is greater than 255.
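
The truncation is easy to reproduce with a small standalone program:

    /* Demonstrates the overflow: IB port 256 cast to an 8-bit value
     * becomes 0, which is not a valid port for ibv_query_port(). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t port = 256;
            uint8_t port_u8 = (uint8_t)port;

            printf("port %u truncates to %u\n", port, port_u8); /* prints 0 */
            return 0;
    }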

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:08 +02:00
Xueming Li
227813f28a common/mlx5: get RDMA port state via Netlink
Introduce netlink API to get RDMA port state.

Port state is retrieved based on RDMA device name and port index.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-21 09:31:03 +02:00
Jie Wang
f30157d988 net/iavf: support PPPoL2TPv2oUDP RSS Hash
Add RSS hash support for the PPP over L2TPv2 over UDP protocol,
based on the inner IP src/dst address and TCP/UDP src/dst port.

Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-21 14:15:59 +02:00
Tomasz Duszynski
77140af0b8 common/cnxk: add new PCI IDs to supported devices
CNF10KA does not differ in terms of RVU resources from the
CN10KA platform, hence add it to the list of devices the respective
drivers support.

Otherwise, devices on CNF10KA are not probed even though
compatible drivers exist.
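
As a hedged illustration of the kind of change involved (the device ID
values below are hypothetical placeholders, not the real cnxk IDs):

    /* Illustrative only: extend an rte_pci_id table with another device ID
     * so the existing driver also probes it. */
    #include <rte_bus_pci.h>

    #define PCI_VENDOR_ID_CAVIUM      0x177d
    #define PCI_DEVID_CN10KA_EXAMPLE  0xa0f8 /* hypothetical existing ID */
    #define PCI_DEVID_CNF10KA_EXAMPLE 0xa0f9 /* hypothetical new ID */

    static const struct rte_pci_id example_pci_map[] = {
            { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10KA_EXAMPLE) },
            { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNF10KA_EXAMPLE) },
            { .vendor_id = 0 }, /* sentinel */
    };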

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-21 13:43:56 +02:00
David Marchand
6353ff43a7 dma/idxd: fix build on Windows
Windows compilation gives us a splat:
In file included from ../drivers/dma/idxd/idxd_pci.c:10:
In file included from ..\drivers\dma\idxd/idxd_internal.h:11:
..\drivers\dma\idxd/idxd_hw_defs.h:46:21: error: expected member name or
 ';' after declaration specifiers
        uint16_t __reserved[13];
        ~~~~~~~~           ^
1 error generated.

Ironically, __reserved is probably a reserved token.
Some drivers that build fine on Windows have structs with a "reserved"
field, let's go with this.
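
The fix boils down to renaming the field; a minimal hedged sketch (the
struct is illustrative, not the real idxd layout):

    /* Identifiers starting with a double underscore are reserved for the
     * implementation in C, and clang on Windows rejects this one, so the
     * padding field is renamed. */
    #include <stdint.h>

    struct example_hw_regs {
            uint64_t cap;
            uint16_t reserved[13]; /* was "__reserved" */
    };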

Fixes: 82147042d0 ("dma/idxd: add datapath structures")

Signed-off-by: David Marchand <david.marchand@redhat.com>
2021-10-23 08:52:25 +02:00