Commit Graph

13566 Commits

Bing Zhao
0ad28e873c net/mlx5: fix RSS consistency check of meter policy
After yellow color actions in the metering policy were supported,
RSS can be used for both the green and yellow colors, and only the
queues attribute may differ.

When specifying the attributes of an RSS action, some fields can be
left unset and default values are used in the PMD. For example, there
is a default RSS key in the PMD, and it is used to create the TIR if
nothing is provided by the application.

The default value cases were missed in the current implementation,
which could cause false positives or crashes.

The comparison function should be adjusted to take all cases into
consideration when RSS is used for both green and yellow colors.
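
A minimal sketch of such a comparison, assuming the PMD falls back to
a built-in default key when the application passes none (the helper
name and the def_key/def_key_len parameters are illustrative, not the
actual mlx5 symbols):

  #include <stdbool.h>
  #include <string.h>
  #include <rte_flow.h>

  /* Illustrative only: compare green/yellow RSS configurations,
   * substituting the PMD default key/length when the application left
   * them unset. The queues attribute is intentionally not compared. */
  static bool
  rss_attrs_equal(const struct rte_flow_action_rss *a,
                  const struct rte_flow_action_rss *b,
                  const uint8_t *def_key, uint32_t def_key_len)
  {
          const uint8_t *ka = a->key ? a->key : def_key;
          const uint8_t *kb = b->key ? b->key : def_key;
          uint32_t la = a->key_len ? a->key_len : def_key_len;
          uint32_t lb = b->key_len ? b->key_len : def_key_len;

          return a->func == b->func && a->level == b->level &&
                 a->types == b->types && la == lb &&
                 memcmp(ka, kb, la) == 0;
  }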

Fixes: 4b7bf3ffb4 ("net/mlx5: support yellow in meter policy validation")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-10 15:44:39 +01:00
Michael Baum
8451e165b8 net/mlx5: workaround MR creation for flow counter
Due to kernel driver / FW issues in direct MKEY creation using the DevX
API, this patch replaces the counter MR creation with the wrapped MKEY
API.

Fixes: 5382d28c21 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
2021-11-10 15:50:44 +01:00
David Marchand
d6024c0a67 build: cleanup libpcap dependent components
The RTE_PORT_PCAP variable is used to signal libpcap availability,
though its name seems to refer to pcap support in the port library.
Prefer a generic name and add explicit link dependencies where needed.

Fixes: 7a944656b3 ("test/pcapng: test pcapng library")
Fixes: 2eccf6afbe ("bpf: add function to convert classic BPF to DPDK BPF")
Fixes: cbb44143be ("app/dumpcap: add new packet capture application")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
2021-11-10 11:42:34 +01:00
Ivan Malov
60e53c078d net/sfc: support decrement IP TTL actions in transfer flows
These actions map to the MAE action DECR_IP_TTL. It affects
the outermost header in the current processing state of
the packet, which might have been decapsulated by a prior
DECAP action. The IPv4 checksum is updated accordingly.
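
A hedged example of how an application might request this through the
generic rte_flow API (port and destination IDs are placeholders, and
error handling is trimmed):

  #include <rte_flow.h>

  /* Illustrative only: transfer rule that decrements the IP TTL and
   * forwards matching IPv4 traffic to another port. */
  static struct rte_flow *
  add_dec_ttl_rule(uint16_t port_id, uint32_t dst_port_id,
                   struct rte_flow_error *err)
  {
          struct rte_flow_attr attr = { .transfer = 1 };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_port_id dst = { .id = dst_port_id };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_DEC_TTL },
                  { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, err);
  }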

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2021-11-08 16:25:51 +01:00
Chengwen Feng
247f0ce2a4 net/hns3: remove PF/VF duplicate code
This patch removes duplicate PF/VF code for:
1. getting the firmware version;
2. getting device info;
3. Rx interrupt related functions.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
f2d91cfbb5 net/hns3: mark unchecked return of snprintf
Handle the unchecked return value of snprintf() to clear a static
analysis warning.
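
For illustration, the general pattern (not the hns3 code itself) is to
check the result so errors and truncation are handled and the analyzer
is satisfied:

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative only: check the snprintf() return value instead of
   * silently ignoring it. */
  static void
  format_mac(char *buf, size_t len, const uint8_t mac[6])
  {
          int ret = snprintf(buf, len, "%02X:%02X:%02X:%02X:%02X:%02X",
                             mac[0], mac[1], mac[2], mac[3], mac[4],
                             mac[5]);

          if (len != 0 && (ret < 0 || (size_t)ret >= len))
                  buf[0] = '\0'; /* error or truncated output */
  }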

Fixes: 1181500b2f ("net/hns3: adjust MAC address logging")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
e6a27020e6 net/hns3: remove magic numbers
Replace magic numbers with macros.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Min Hu (Connor)
3600ffc9de net/hns3: move declarations in flow header file
This patch adds a hns3_flow.h to make the code easier to maintain.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
a4c7152d05 net/hns3: extract common code to its own file
This patch extracts a common file to store the code shared by the PF
and VF drivers.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
72ec1486e9 net/hns3: use unsigned integer for bitwise operations
Bitwise operations should be used only with unsigned integers. This
patch modifies some code that does not meet this rule.
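
For illustration, the enforced pattern looks like this (generic
example, not taken from the driver):

  #include <stdint.h>

  /* Use unsigned types and literals so shifting into the top bit is
   * well defined. */
  static inline uint32_t
  set_bit32(uint32_t reg, unsigned int bit)
  {
          return reg | (UINT32_C(1) << bit); /* not: 1 << bit (signed) */
  }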

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
cf31e4a7bb net/hns3: modify an indent alignment
This patch fixes some code alignment issues to make the code style
more consistent.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
dc153889d5 net/hns3: remove redundant function declaration
This patch removes a redundant function declaration for
hns3_rx_check_vec_support().

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Huisong Li
f658f41581 net/hns3: simplify queue DMA address arithmetic
The patch obtains the upper 32 bits of the Rx/Tx queue DMA address in one
step instead of two steps.
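
The resulting pattern, shown as a generic sketch rather than the hns3
code:

  #include <stdint.h>

  /* A single shift yields the high 32 bits; no intermediate
   * mask-and-shift sequence is needed. */
  static inline uint32_t
  dma_addr_hi32(uint64_t dma_addr)
  {
          return (uint32_t)(dma_addr >> 32);
  }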

Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-11-08 15:59:14 +01:00
Dmitry Kozlyuk
077be91dd7 net/mlx5: fix split buffer Rx
The routine looking up the LKey on Rx assumed that the mbuf address
always belongs to a single mempool: the one associated with the RxQ
or the MPRQ mempool. This assumption is false in the buffer split
case, so a wrong LKey could be looked up, resulting in completion
errors. Modify the lookup routines to look up the LKey in mbuf->pool
for non-MPRQ cases, both on the Rx datapath and on queue
initialization.
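
A conceptual sketch of the pool selection (illustrative names, not the
mlx5 routines):

  #include <stdbool.h>
  #include <rte_mbuf.h>

  /* For MPRQ the dedicated MPRQ mempool is used; otherwise the LKey
   * lookup must use the pool the mbuf actually belongs to, which can
   * differ per segment when buffer split is configured. */
  static inline struct rte_mempool *
  rx_lkey_pool(const struct rte_mbuf *mb, struct rte_mempool *mprq_mp,
               bool mprq_enabled)
  {
          return mprq_enabled ? mprq_mp : mb->pool;
  }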

Fixes: fec28ca0e3 ("net/mlx5: support mempool registration")

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-08 13:56:29 +01:00
Harman Kalra
49fdb0ae0d net/mlx4: fix crash on allocation failure
This patch fixes a Coverity issue by adding a NULL check.

Coverity issue: 373687
Fixes: d61138d4f0 ("drivers: remove direct access to interrupt handle")

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2021-11-08 17:32:42 +01:00
Harman Kalra
aedd054c5c drivers: check interrupt file descriptor validity
This patch fixes Coverity issues by adding a check for a negative
value, to avoid a bad bit-shift operation and other invalid uses of
file descriptors.
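
An illustrative shape of the check, assuming the rte_intr_fd_get()
accessor introduced by the interrupt handle rework (the surrounding
function is hypothetical):

  #include <errno.h>
  #include <sys/select.h>
  #include <rte_interrupts.h>

  /* Validate the fd before using it in FD_SET()/bit-shift style code. */
  static int
  watch_intr_fd(const struct rte_intr_handle *handle, fd_set *set)
  {
          int fd = rte_intr_fd_get(handle);

          if (fd < 0)
                  return -EINVAL; /* invalid fd: do not shift/index by it */

          FD_SET(fd, set);
          return 0;
  }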

Coverity issue: 373717, 373697, 373685
Coverity issue: 373723, 373720, 373719, 373718, 373715, 373714, 373713
Coverity issue: 373710, 373707, 373706, 373705, 373704, 373701, 373700
Coverity issue: 373698, 373695, 373692, 373690, 373689
Coverity issue: 373722, 373721, 373709, 373702, 373696
Fixes: d61138d4f0 ("drivers: remove direct access to interrupt handle")

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2021-11-08 17:32:42 +01:00
Michael Baum
5dfa003db5 common/mlx5: fix post doorbell barrier
The rdma-core library can map doorbell register in two ways, depending
on the environment variable "MLX5_SHUT_UP_BF":

  - as regular cached memory, when the variable is either missing or
    set to zero. This type of mapping may cause significant doorbell
    register write latency and requires an explicit memory write
    barrier to mitigate this issue and prevent write combining.

  - as non-cached memory, when the variable is present and set to a
    non-zero value. This type of mapping may cause a performance
    impact under heavy load, but the explicit write memory barrier is
    not required, which may improve core performance.

The UAR creation function maps a doorbell in one of the above ways
according to the system. At run time, it always adds an explicit
memory barrier after the write. In cases where the doorbell is mapped
as non-cached memory, the explicit memory barrier is unnecessary and
may impair performance.

Commit [1] solved this problem for the Tx queue. At run time, it
checks the mapping type and issues the memory barrier after writing
to a Tx doorbell register only if it is needed. The mapping type is
extracted directly from the uar_mmap_offset field in the queue
properties.

This patch shares this code between the drivers and extends the above
solution for each of them.

[1] commit 8409a28573
    ("net/mlx5: control transmit doorbell register mapping")

Fixes: f8c97babc9 ("compress/mlx5: add data-path functions")
Fixes: 8e196c08ab ("crypto/mlx5: support enqueue/dequeue operations")
Fixes: 4d4e245ad6 ("regex/mlx5: support enqueue")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-07 16:21:03 +01:00
Michael Baum
b6e9c33c82 net/mlx5: remove duplicated reference of Tx doorbell
The Tx doorbell has different virtual addresses per process.
The secondary process takes the UAR physical page ID of the primary
and mmaps it into its own virtual address space.
The primary doorbell references were saved in two shared memory
locations: the TxQ structure and a dedicated doorbell array.

Remove the doorbell reference from the TxQ structure and move the
primary processes to take the UAR information from the primary doorbell
array.

Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-07 16:21:03 +01:00
Michael Baum
204891763c common/mlx5: make multi-process MR management port-agnostic
In the multi-process mechanism, there are actions that the secondary
process does not perform itself but asks the primary process to
perform on its behalf.
There is a special API for communication between the processes; it
receives the parameters necessary for the requested action, as well
as a special structure called mp_id that contains the port number,
through which the primary process finds the relevant ETH device for
the request.

One of the operations performed through this mechanism is the creation
of a memory region, where the secondary process sends the virtual
address as a parameter along with the mp_id structure carrying the
port number.
However, now that the memory area management is shared between the
drivers, neither the port number nor the ETH device is relevant to
them, so it is unnecessary to keep communicating between the
processes through the mp_id variable.

This patch removes the use of the above structure for all MR
management and adds, as a parameter to the specific operations, a
pointer to the common device that contains everything needed to
create/register an MR.

Fixes: 9f1d636f3e ("common/mlx5: share MR management")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-07 14:12:08 +01:00
Michael Baum
334ed198ab common/mlx5: remove redundant parameter in MR search
Memory region management has recently been shared between drivers,
including the cache lookups in the data plane.
The initial lookup in the queue's local linear cache usually yields a
result, so the next-level caches do not need to be searched.

The function that searches the local cache takes a pointer to the
device as a parameter, which is not necessary for its own operation
but only for the subsequent searches (which, as mentioned, usually do
not happen).
Passing the device to the function and maintaining it takes some time
and has some impact on performance.

Add the device pointer as a field of the mr_ctrl structure. The field
is updated in the control path and used only when needed in the
search.

Fixes: fc59a1ec55 ("common/mlx5: share MR mempool registration")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-11-07 14:11:16 +01:00
Bing Zhao
e848218741 net/mlx5: check delay drop settings in kernel driver
Delay drop is a common feature managed on a per-device basis, and
the kernel driver is the one responsible for its initialization and
rearming.

By default, the timeout value is set to activate the delay drop when
the driver is loaded.

A private flag "dropless_rq" is used to control the rearming. Only
when it is on, the rearming will be handled once received a timeout
event. Or else, the delay drop will be deactivated after the first
timeout occurs and all the Rx queues won't have this feature.

The PMD queries this flag and warns the application when some queues
are created with delay drop while the flag is off.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-05 17:04:53 +01:00
Bing Zhao
febcac7b46 net/mlx5: support Rx queue delay drop
For Ethernet RQs, if all receive descriptors are exhausted, the
packets being received will be dropped. This behavior prevents slow
or malicious software entities on the host from affecting the
network. For hairpin cases, however, even though no software is
involved in the packet forwarding from the Rx to the Tx side, some
hiccup in the hardware or back pressure from the Tx side may still
cause the descriptors to be exhausted. In certain scenarios it may be
preferred to configure the device to avoid such packet drops,
assuming the posting of descriptors will resume shortly.

To support this, a new devarg "delay_drop" is introduced. By default,
the delay drop is enabled for hairpin Rx queues and disabled for
standard Rx queues. The value is used as a bit mask:
  - bit 0: enablement for standard Rx queues
  - bit 1: enablement for hairpin Rx queues
This attribute is applied to all Rx queues of a device; a sketch of
how the mask could be interpreted follows.
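
Macro and helper names in this sketch are illustrative, not the PMD
symbols:

  #include <stdbool.h>
  #include <stdint.h>

  #define DELAY_DROP_STANDARD UINT32_C(0x1) /* bit 0: standard Rx queues */
  #define DELAY_DROP_HAIRPIN  UINT32_C(0x2) /* bit 1: hairpin Rx queues */

  static inline bool
  rxq_delay_drop_requested(uint32_t delay_drop, bool is_hairpin)
  {
          return (delay_drop & (is_hairpin ? DELAY_DROP_HAIRPIN :
                                             DELAY_DROP_STANDARD)) != 0;
  }

For example, a device argument value of delay_drop=3 would request the
feature for both queue types, subject to the capability check
described below.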

The "rq_delay_drop" capability in the HCA_CAP is checked before
creating any queue. If the hardware capabilities do not support
this delay drop, all the Rx queues will still be created without
this attribute, and the devarg setting will be ignored even if it
is specified explicitly. A warning log is used to notify the
application when this occurs.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-05 17:04:53 +01:00
Jiawen Wu
b4ce1520c9 net/txgbe: fix link process in KR mode
Set the 'present' parameter to 0 by default, so it is configured by
hardware; users can set it to 1 for manual configuration.

Fixes: f611dada1a ("net/txgbe: update link setup process of backplane NICs")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-11-05 15:10:21 +01:00
Jie Wang
8cc79a1636 net/i40e: fix forward outer IPv6 VXLAN
Testpmd forwards packets in checksum mode, where it needs to
calculate the checksum of each layer's protocol and then fill the
offload flags and header lengths into the mbuf.

In process_outer_cksums, HW calculates the outer checksum if
tx_offloads contains outer UDP checksum; otherwise SW calculates the
outer checksum.

When tx_offloads contains outer UDP checksum or outer IPv4 checksum,
the mbuf is filled with the correct header length.

This patch adds outer UDP checksum to tx_offload_capa and
I40E_TX_OFFLOAD_MASK, so that when outer UDP checksum is set to HW,
the engine can forward outer IPv6 VXLAN packets.
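
For illustration, an application could verify the capability before
relying on the offload (generic ethdev API, not the driver code):

  #include <stdbool.h>
  #include <rte_ethdev.h>

  /* Check whether the port reports outer UDP checksum Tx offload. */
  static bool
  outer_udp_cksum_supported(uint16_t port_id)
  {
          struct rte_eth_dev_info info;

          if (rte_eth_dev_info_get(port_id, &info) != 0)
                  return false;

          return (info.tx_offload_capa &
                  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) != 0;
  }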

Fixes: 7497d3e2f7 ("net/i40e: convert to new Tx offloads API")
Cc: stable@dpdk.org

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2021-11-05 05:31:22 +01:00
Viacheslav Ovsiienko
25ed2ebff1 net/mlx5: support shared Rx queue port data path
When receiving a packet, the mlx5 PMD saves the mbuf port number from
the RxQ data.

To support shared RxQs, save the port number into the RQ context as
the user index. A received packet resolves its port number from the
CQE user index, which is derived from the RQ context.

The legacy Verbs API doesn't support setting the RQ user index, so
the port number is still read from the RxQ.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:51 +01:00
Xueming Li
09c2555303 net/mlx5: support shared Rx queue
This patch introduces shared RxQs. All shared Rx queues with the same
group and queue ID share the same rxq_ctrl. The rxq_ctrl and rxq_data
are shared, so all queues from different member ports share the same
WQ and CQ, essentially one Rx WQ, and mbufs are filled into this
singleton WQ.

The shared rxq_data is set into the device Rx queues of all member
ports as the RxQ object used for receiving packets. Polling the queue
of any member port may return packets of any member; mbuf->port is
used to identify the source port.
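
An illustrative polling loop for an application using shared Rx queues
(generic ethdev API; the per-packet processing is a placeholder):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* A burst read on one member port may return packets that arrived
   * on any member; mbuf->port identifies the actual source port. */
  static void
  poll_shared_rxq(uint16_t any_member_port, uint16_t queue_id)
  {
          struct rte_mbuf *pkts[32];
          uint16_t i, n;

          n = rte_eth_rx_burst(any_member_port, queue_id, pkts, 32);
          for (i = 0; i < n; i++) {
                  uint16_t src_port = pkts[i]->port;

                  (void)src_port; /* dispatch per source port here */
                  rte_pktmbuf_free(pkts[i]);
          }
  }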

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:50 +01:00
Xueming Li
5cf0707fc7 net/mlx5: remove Rx queue data list from device
The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
list (priv->rxq_privs); remove it and replace accesses with a
universal wrapper API.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:49 +01:00
Xueming Li
5ceb3a02b0 net/mlx5: move Rx queue DevX resource
To support shared Rx queues, move the DevX RQ, which is a per-queue
resource, to the Rx queue private data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:48 +01:00
Xueming Li
5db77fef78 net/mlx5: remove port info from shareable Rx queue
To prepare for shared Rx queues, remove the port info from the
shareable Rx queue control structure.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:47 +01:00
Xueming Li
44126bd9d0 net/mlx5: move Rx queue hairpin info to private data
The hairpin info of an Rx queue can't be shared, so move it to the
private queue data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:47 +01:00
Xueming Li
0cedf34da7 net/mlx5: move Rx queue reference count
The Rx queue reference count is the counter of references to the RQ
object. To prepare for shared Rx queues, this patch moves it from
rxq_ctrl to the Rx queue private data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:46 +01:00
Xueming Li
4cda06c3c3 net/mlx5: split Rx queue into shareable and private
To prepare for shared Rx queues, split the RxQ data into shareable
and private parts. Struct mlx5_rxq_priv holds the per-queue data.
Struct mlx5_rxq_ctrl holds the shared queue resources and data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:45 +01:00
Xueming Li
53232e3b05 net/mlx5: clean Rx queue code
This patch removes unused Rx queue code.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:45 +01:00
Xueming Li
fdb67b84a5 net/mlx5: fix Rx queue memory allocation return value
If an error happened during Rx queue mbuf allocation, a boolean value
was returned. According to the description, the return value should
be an error number.

This patch returns a negative error number instead.
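
The pattern, as a generic sketch rather than the mlx5 function:

  #include <errno.h>
  #include <rte_mbuf.h>

  /* Report allocation failure as a negative errno value rather than a
   * boolean, so callers can propagate the real cause. */
  static int
  alloc_rx_mbuf(struct rte_mempool *mp, struct rte_mbuf **mb)
  {
          *mb = rte_pktmbuf_alloc(mp);
          if (*mb == NULL)
                  return -ENOMEM; /* not "false" */
          return 0;
  }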

Fixes: 0f20acbf5e ("net/mlx5: implement vectorized MPRQ burst")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:44 +01:00
Xueming Li
056c87d07d common/mlx5: support receive memory pool
The hardware Receive Memory Pool (RMP) object holds the destination for
incoming packets/messages that are routed to the RMP through RQs. RMP
enables sharing of memory across multiple Receive Queues. Multiple
Receive Queues can be attached to the same RMP and consume memory
from that shared pool. When using RMPs, completions are reported to
the CQ pointed to by the RQ, and the user index set at RQ creation
time is carried in the completion entry.

This patch enables the RMP-based RQ; the RMP is created when
mlx5_devx_rq.rmp is set.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:43 +01:00
Xueming Li
68fa62924d net/mlx5: fix Altivec Rx
This patch fixes a stale field reference.

Fixes: a18ac61133 ("net/mlx5: add metadata support to Rx datapath")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:41 +01:00
Gregory Etelson
a23e9b6e3e net/mlx5: handle flex item in flows
Provide flex item recognition, validation and translation
in flow patterns. Track the flex item referencing.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:41 +01:00
Viacheslav Ovsiienko
6dac7d7ff2 net/mlx5: translate flex item pattern into matcher
The matcher is a steering engine entity that represents
the flow pattern for the hardware to match. In order to
provide a match on the flex item pattern, the appropriate
matcher fields should be configured with values and masks
accordingly.

The flex item related matcher fields are an array of eight
32-bit fields to match with data captured by sample registers
of the configured flex parser. One packet field presented in
the item pattern can be split between several sample registers,
and multiple fields can be combined together into a single
sample register to optimize hardware resource usage (the number
of sample registers is limited), depending on field modes,
widths and offsets. The actual mapping is complicated and
controlled by special translation data, built by the PMD on
flex item creation.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:40 +01:00
Viacheslav Ovsiienko
b293e8e49d net/mlx5: translate flex item configuration
RTE Flow flex item configuration should be translated
into actual hardware settings:

  - translate header length and next protocol field samplings
  - translate data field sampling; similar fields with the same
    mode and matching related parameters are relocated and grouped
    to be covered with a minimal number of hardware sampling
    registers (each register can cover an arbitrary neighbouring
    32 bits, aligned to a byte boundary, in the packet, and fields
    with smaller lengths or segments of bigger fields can be
    combined)
  - translate input and output links
  - prepare data for parsing the flex item pattern on flow creation

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:39 +01:00
Gregory Etelson
9086ac093a net/mlx5: add flex parser DevX object management
The DevX flex parsers can be shared between representors
within the same IB context. We should put the flex parser
objects into the shared list and engage the standard
mlx5_list_xxx API to manage them.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:38 +01:00
Viacheslav Ovsiienko
db25cadc08 net/mlx5: add flex item operations
This patch is a preparation step for implementing the
flex item feature in the driver, and it provides:

  - external entry point routines for flex item
    creation/deletion

  - management of flex item objects over the ports.

The flex item object keeps information about
the item created over the port: a reference counter
to track whether the item is in use by some active
flows, and a pointer to the underlying shared DevX
object, providing all the data needed to translate
the flow flex pattern into matcher fields according
to the hardware configuration.

Not many flex items are expected to be created on
the port; the design is optimized for flow insertion
rate rather than for memory savings.
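
A hedged sketch of driving these entry points through the generic
rte_flow flex item API (the configuration contents are omitted as
they are protocol specific; this is not the mlx5-internal code):

  #include <rte_flow.h>

  static int
  flex_item_demo(uint16_t port_id)
  {
          /* Fill per protocol: samples, links, next header, etc. */
          struct rte_flow_item_flex_conf conf = { 0 };
          struct rte_flow_item_flex_handle *handle;
          struct rte_flow_error err;

          handle = rte_flow_flex_item_create(port_id, &conf, &err);
          if (handle == NULL)
                  return -1;

          /* ... reference the handle from RTE_FLOW_ITEM_TYPE_FLEX
           * pattern items in flow rules ... */

          return rte_flow_flex_item_release(port_id, handle, &err);
  }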

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:38 +01:00
Viacheslav Ovsiienko
575740d10e net/mlx5: update eCPRI flex parser structures
To handle the eCPRI protocol in flows, the mlx5 PMD engages
the flex parser hardware feature. While implementing eCPRI
support we anticipated extending the flex parser usage, and
all related variables were named accordingly, containing the
"flex" syllable. Now we are preparing to introduce the more
generic flex item approach; in order to avoid naming conflicts
and improve code readability, the eCPRI infrastructure related
variables are renamed as a preparation step.

Later, once we have the new flex item implemented, we could
consider refactoring the eCPRI protocol support to move it
onto the common flex item basis.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-04 22:55:37 +01:00
Somnath Kotur
7e8d5583f7 net/bnxt: fix scalar Rx datapath on Thor
The patch introduced by
commit 657c2a7f1d ("net/bnxt: create aggregation rings when needed")
ended up shortening the return path from the function, so the line of
code at the end of the function that reset the next consumer index to
0 was no longer executed.
This would result in an application crash when error recovery or
other port stop/start scenarios were invoked on Thor, which is what
commit 61cd4384fa ("net/bnxt: fix crash after port stop/start")
was addressing.
Fix it by moving the resetting line of code before the return path
taken when aggregation rings are not used (the default case).

Fixes: 657c2a7f1d ("net/bnxt: create aggregation rings when needed")

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-05 02:00:49 +01:00
Jay Ding
f2c730d423 net/bnxt: use enum for bank ID
Instead of an integer, use enum tf_sram_bank_id for the bank
id in tf_set_sram_policy_parms.

Add an index check against the allocation of the meter
instance for the meter drop count, because there is no
reason to access it if the corresponding meter entry is
not allocated.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Steve Rempe <steve.rempe@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:15:10 +01:00
Kishore Padmanabha
62d8961f10 net/bnxt: check mismatch of control and physical port
During the parsing of the ingress port ignore for a flow, add a
check that the control port matches the physical port configured to
be ignored. If they do not match, the configuration to set up the
svif ignore fails.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:14:49 +01:00
Mike Baucom
eeebee0c49 net/bnxt: remove 2 slice wildcard entries
Remove 2-slice wildcard entries for scale.
The type-5 wildcard IPv6 flows are removed in order to increase
the scale for app-id=3.
The app no longer supports 2-slice wildcard entries.

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:14:28 +01:00
Jay Ding
891f8260dd net/bnxt: add Tx TruFlow table config for P4 device
Add the Tx direction TruFlow table type config to be
compatible with other devices. For the P4 device, the Tx
config is duplicated from Rx.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:14:11 +01:00
Jay Ding
d1bd289752 net/bnxt: support TruFlow and AFM SRAM partitioning
Implement set/get_sram_policy, which supports mapping TruFlow
types to a specific SRAM bank for both the Rx and Tx directions.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:13:53 +01:00
Jay Ding
4d05ce4e68 net/bnxt: add TruFlow API to get SRAM resources
Implement tf_get_sram_resources to return SRAM
partition information, including bank count and
SRAM profile number.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:13:34 +01:00
Kishore Padmanabha
8f142af983 net/bnxt: modify VF representor allocation sequence
When the VF representor interface is created, the VF pair relationship
is established between the VF and its representor. If the pair
already exists, it needs to be deleted before the allocation.
This could happen if the application is abruptly killed and restarted.
If the deletion of an existing VF representor is not done, the
hardware pipeline is not cleaned up and a new allocation leaves the
hardware in an inconsistent state.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-11-04 22:13:13 +01:00