12972 Commits

Author SHA1 Message Date
Robin Zhang
41515403f1 net/iavf: disable promiscuous mode on close
In the scenario where the kernel driver runs on the PF and the PMD runs
on the VF, PMD exit does not disable promiscuous mode. This causes the
VLAN filter set by the kernel driver to not take effect.

This patch fixes it by adding a promiscuous mode disable at device close.

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-09 13:17:43 +02:00
Robin Zhang
1e4d55a7fe net/iavf: optimize promiscuous device operations
This patch improves the efficiency of the promiscuous ops and
eliminates redundant code.

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-09 13:17:43 +02:00
Robin Zhang
eb9444341f net/iavf: re-program promiscuous mode on VF interface
During a kernel PF reset, this event is propagated to the VF.
The DPDK VF PMD will execute the reset task before the PF is done
with its own. This results in the admin queue message not being
responded to, leaving the port in "promiscuous" mode.

This patch makes sure the promiscuous mode is configured independently
of the current admin state.

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-09 13:17:43 +02:00
Robin Zhang
1541a9d6b0 net/iavf: set min and max MTU for VF
This commit sets the min and max supported MTU values for iavf VF
devices via the iavf_dev_info_get() function. Min MTU supported
is set to RTE_ETHER_MIN_MTU and max MTU is calculated as the max
packet length supported minus the transport overhead.
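
As a rough illustration (not the iavf source; the frame-size parameter and
overhead macro below are assumed names), an info_get callback reports the
range like this:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Assumed overhead: Ethernet header + CRC + two VLAN tags. */
    #define ETH_OVERHEAD \
        (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 2 * sizeof(struct rte_vlan_hdr))

    static void
    report_mtu_range(struct rte_eth_dev_info *dev_info, uint32_t max_pkt_len)
    {
        dev_info->max_rx_pktlen = max_pkt_len;
        dev_info->min_mtu = RTE_ETHER_MIN_MTU;
        dev_info->max_mtu = max_pkt_len - ETH_OVERHEAD;
    }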

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-09 13:17:43 +02:00
Robin Zhang
56addb5a23 net/iavf: use link status helper functions
Use the new rte_eth_linkstatus_get/set helper functions to handle link
status updates.
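
A hedged sketch of how a .link_update callback typically uses the helper
(header name per the DPDK 20.11-era layout; the speed and status values
are placeholders):

    #include <rte_ethdev_driver.h>  /* rte_eth_linkstatus_get/set */

    static int
    example_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
    {
        struct rte_eth_link link = {
            .link_speed = ETH_SPEED_NUM_25G,
            .link_duplex = ETH_LINK_FULL_DUPLEX,
            .link_autoneg = ETH_LINK_AUTONEG,
            .link_status = ETH_LINK_UP,  /* would come from the device/adminq */
        };

        /* Atomically publish the new state; the helper reports whether
         * the status actually changed. */
        return rte_eth_linkstatus_set(dev, &link);
    }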

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-09 13:17:43 +02:00
Jeff Guo
c02ea7410e net/iavf: fix flow flush after PF reset
When the VF begins resetting after a PF reset, the VF is first
uninitialized and then reinitialized. During that time, any invalid
command such as flow flush should not be sent to the PF until the
uninitialization is finished.

Fixes: 1eab95fe2e36 ("net/iavf: fix command after PF reset")
Cc: stable@dpdk.org

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-09 13:17:43 +02:00
Yunjian Wang
9e4f075bc5 net/fm10k: fix memory leak when Tx thresh check fails
In fm10k_tx_queue_setup(), we allocate memory for the queue
structure but do not release it when the Tx thresh check fails.
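
A minimal sketch of the error-handling pattern the fix implies (names are
simplified, not the exact fm10k code):

    #include <errno.h>
    #include <rte_common.h>
    #include <rte_malloc.h>

    static int
    txq_setup_sketch(int socket_id, int thresh_ok)
    {
        void *q = rte_zmalloc_socket("txq", 256, RTE_CACHE_LINE_SIZE,
                                     socket_id);

        if (q == NULL)
            return -ENOMEM;

        if (!thresh_ok) {
            rte_free(q);  /* the early-exit path that previously leaked */
            return -EINVAL;
        }

        /* ... continue normal queue setup ... */
        return 0;
    }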

Fixes: 98068e0e044e ("fm10k: add Tx queue setup/release")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
2020-10-09 13:17:43 +02:00
Viacheslav Ovsiienko
482a1d34b6 common/mlx5: fix PCI address lookup
mlx5 PMDs use the mlx5_dev_to_pci_addr() routine to convert an
Infiniband device name to the Bus-Device-Function location on the
PCI bus. The routine returned success even when the identification
string was not found. On the caller side this likely caused a wrong
match with the BDF of the previous device, resulting in wrong
representor and master recognition.
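
A generic sketch of the intended behaviour (not the mlx5 source, and the
sysfs key name is an assumption): the lookup must fail when no
identification line is found instead of returning success with a stale
address.

    #include <errno.h>
    #include <stdio.h>
    #include <rte_pci.h>

    /* Returns 0 on success, -ENOENT if the identification string is
     * missing, a negative errno on other failures. */
    static int
    uevent_to_pci_addr(const char *path, struct rte_pci_addr *addr)
    {
        char line[128];
        unsigned int dom, bus, dev, func;
        FILE *f = fopen(path, "rb");

        if (f == NULL)
            return -errno;
        while (fgets(line, sizeof(line), f) != NULL) {
            if (sscanf(line, "PCI_SLOT_NAME=%x:%x:%x.%x",
                       &dom, &bus, &dev, &func) == 4) {
                addr->domain = dom;
                addr->bus = bus;
                addr->devid = dev;
                addr->function = func;
                fclose(f);
                return 0;
            }
        }
        fclose(f);
        return -ENOENT;  /* the case that used to be reported as success */
    }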

Fixes: 771fa900b73a ("mlx5: introduce new driver for Mellanox ConnectX-4 adapters")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-09 13:17:43 +02:00
Xueming Li
657df3ca0a net/mlx5: disable dump of Verbs flows
There was a segmentation fault when dumping flows with the device
argument dv_flow_en=0. In that case, the Verbs flow engine was enabled
and FDB resources were not initialized. It is suggested to use
mlx_fs_dump for Verbs flow dump.

This patch adds a Verbs engine check, prints a warning message and
returns gracefully.

Fixes: f6d7202402c9 ("net/mlx5: support flow dump API")
Cc: stable@dpdk.org

Reported-by: Jørgen Østergaard Sloth <jorgen.sloth@xci.dk>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-09 13:17:42 +02:00
Lance Richardson
369f6077c5 net/bnxt: support fast mbuf free
Add support for DEV_TX_OFFLOAD_MBUF_FAST_FREE to bnxt
vector mode transmit. This offload may be enabled
only when multi-segment transmit is not needed, all
transmitted mbufs for a given queue will be allocated
from the same pool, and all transmitted mbufs will
have a reference count of 1.
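
From the application side, a hedged sketch of opting into the offload
when those constraints hold:

    #include <rte_ethdev.h>

    /* Enable fast mbuf free only when the PMD advertises it and the
     * application guarantees single-segment, single-pool, refcnt==1
     * mbufs on that queue. */
    static void
    enable_fast_free(struct rte_eth_conf *conf,
                     const struct rte_eth_dev_info *info)
    {
        if (info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
            conf->txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
    }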

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-09 13:17:42 +02:00
Kalesh AP
af57c49ca1 net/bnxt: fix link update
1. When the port is stopped, we can forcibly set the link status for
   the device to down.
2. VFs and MH PFs do not have the privilege to bring the link down.
   As a result, the driver prints "Link Up" when the port is stopped.
3. When the driver receives a link status/speed/config async event from
   the firmware, it invokes bnxt_link_update() with exp_link_status as
   ETH_LINK_UP. This is not logically correct, as the async event could
   be for link up, link down or a speed change.

Fixes: 074cacb9907a ("net/bnxt: fix link during port toggle")
Cc: stable@dpdk.org

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-09 13:17:42 +02:00
Michael Baum
e96242efa4 net/mlx5: remove Rx queue object type field
Once the separation between Verbs and DevX is done using function
pointers, the type field of the Rx queue object structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
4c6d80f1c5 net/mlx5: separate Rx queue state modification
Separate Rx state modification to the Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
354cc08a2d net/mlx5: remove Tx queue object type field
Once the separation between Verbs and DevX is done using function
pointers, the type field of the Tx queue object structure becomes
redundant and is no longer used by any code.
Remove the unnecessary field from the structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
a9c7930662 net/mlx5: share Tx queue object modification
Use new modify_qp functions for Tx object creation in DevX and Verbs
modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
5d9f3c3f48 net/mlx5: separate Tx queue object modification
Separate Tx object modification to the Verbs and DevX modules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
e8390b3de0 net/mlx5: rearrange QP creation in Verbs module
1. Rename function to mention the internal resources.
2. Reduce the number of function arguments.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
88f2e3f18c net/mlx5: rearrange SQ and CQ creation in DevX module
1. Rename functions to mention the internal resources.
2. Reduce the number of function arguments.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
f49f44839d net/mlx5: share Tx control code
Move similar Tx object resource allocations and debug logs from the
DevX and Verbs modules to a shared location.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
86d259cec8 net/mlx5: separate Tx queue object creations
In preparation for Windows OS support, the Verbs operations should be
separated into another file.
This way, the build can easily cut the unsupported Verbs APIs from
the compilation process.

Define operation structure and DevX module in addition to the existing
Linux Verbs module.
Separate Tx object creation into the Verbs/DevX modules and update the
operation structure according to the OS support and the user
configuration.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
e7055bbfbe net/mlx5: reposition event queue number field
The eqn field has been moved to be a field of sh directly, since it is
also relevant for Tx and Rx.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
31d92b5892 net/mlx5: reorder Tx queue Verbs object creation
Move the creation of the completion queue from the mlx5_txq_obj_new
function into an auxiliary function.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
3d3cfe67e6 net/mlx5: reorder Tx queue DevX object creation
Move the creation of the send queue and the completion queue resources
from the mlx5_txq_obj_devx_new function into auxiliary functions.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
17a57183c0 net/mlx5: mitigate Tx queue reference counters
The Tx queue structures manage two different reference counters per
queue: the txq_ctrl reference counter and the txq_obj reference counter.

There is no real need to use two different counters; it just complicates
the release functions.
Remove the txq_obj counter and use only the txq_ctrl counter.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
fa2dd3d4d6 net/mlx5: remove unused variable in Tx queue creation
When a CQ is not created by DevX, it is allocated by either the DV
function or the regular Verbs function.

The CQ DV attributes variable was wrongly defined and initialized in Tx
queue creation while the CQ is created by the regular Verbs function,
which left the attributes variable unused.

Remove the unused variable.

Fixes: faf2667fe8d5 ("net/mlx5: separate DPDK from verbs Tx queue objects")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Michael Baum
95b0e40b10 net/mlx5: fix send queue doorbell
As part of SQ creation for Tx queue objects, HW doorbell memory should
be allocated and mapped to the HW.

The SQ doorbell handler was wrongly saved in the CQ fields, which caused
a wrong doorbell release in the Tx queue object destroy flow.

Save the SQ doorbell handler in the SQ fields.

Fixes: 3a87b964edd3 ("net/mlx5: create Tx queues with DevX")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-09 13:17:42 +02:00
Dekel Peled
c7870bfe09 ethdev: move RSS expansion code to mlx5 driver
Patch [1] added support for RSS flow expansion.
It was added in ethdev for public use, but until now it has been used
only by the MLX5 PMD.
To allow local changes in this code, this patch removes it from ethdev
and moves it into the MLX5 PMD.

[1] commit 4ed05fcd441b ("ethdev: add flow API to expand RSS flows")

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-08 19:58:11 +02:00
Rasesh Mody
effb1d0b95 net/qede: fix getting link details
This patch fixes getting the current link details; without this change
the link details can be inaccurate if the proper lock is not acquired.

Fixes: 739a5b2f2b49 ("net/qede/base: use passed ptt handler")
Cc: stable@dpdk.org

Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
2020-10-08 19:58:11 +02:00
Alexander Kozyrev
d2d5760552 net/mlx5: fix Rx queue count calculation
There are a few discrepancies in the Rx queue count calculation.

The wrong index is used to calculate the number of used descriptors
in an Rx queue in case of the compressed CQE processing. The global
CQ index is used while we really need an internal index in a single
compressed session to get the right number of elements processed.

The total number of CQs should be used instead of the number of mbufs
to determine the maximum number of Rx descriptors. These numbers
are not equal for the Multi-Packet Rx queue.

Allow the Rx queue count calculation for all possible Rx bursts since
CQ handling is the same for regular, vectorized, and multi-packet Rx
queues.
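
For context, a small usage sketch of the ethdev API whose PMD callback is
being fixed here:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Report how many descriptors are currently in use on Rx queue 0.
     * A negative value is an error (e.g. -ENOTSUP without the PMD op). */
    static void
    print_rxq_fill_level(uint16_t port_id)
    {
        int used = rte_eth_rx_queue_count(port_id, 0);

        if (used < 0)
            printf("rx_queue_count failed: %d\n", used);
        else
            printf("port %u rxq 0: %d descriptors in use\n", port_id, used);
    }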

Fixes: 26f04883441a ("net/mlx5: support Rx queue count API")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-08 19:58:11 +02:00
Suanming Mou
3e8f3e51fd net/mlx5: fix meter table definitions
As the metering and metadata features were developed at the same time,
the metering and metadata table definitions conflict.

This causes the meter suffix flow to jump to the same metadata table and
creates an endless flow loop.

Adjust the metering table definitions to fix that issue.

Fixes: 46a5e6bc6a85 ("net/mlx5: prepare meter flow tables")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-08 19:58:11 +02:00
Dekel Peled
38f9369d24 net/mlx5: fix DevX CQ attributes values
A previous patch wrongly used rdma-core defined values when preparing
attributes for creating the DevX CQ object.
This patch adds the correct value definitions and uses them instead.

Fixes: 08d1838f645a ("net/mlx5: implement CQ for Rx using DevX API")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-08 19:58:11 +02:00
Ajit Khaparde
8b96a65ce5 net/bnxt: update HWRM structures
Update the HWRM API to a newer 1.10.1.70 version.

A few fields have been renamed because of this:
rx_err_pkt -> rx_discard_pkts
rx_drop_pkts -> rx_error_pkts

tx_err_pkts -> tx_discard_pkts
tx_drop_pkts -> tx_error_pkts

link_signal_mode -> active_fec_signal_mode

tx_bd_long_hi.mss -> tx_bd_long_hi.kid_or_ts_high_mss
tx_bd_long_hi.hdr_size -> tx_bd_long_hi.kid_or_ts_low_hdr_size

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-08 19:58:11 +02:00
Ajit Khaparde
7ed45b1a7c net/bnxt: support RSS hash selection
Add support to select RSS hash based on innermost or outermost
headers. If an application is started without any specific settings,
the default mode configured by FW or HW shall be used.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-08 19:58:11 +02:00
Hongbo Zheng
e855bffa30 net/hns3: remove redundant return value assignment
When an error occurs in the reset process, -EIO is returned.
The assignment of ret here is redundant, so delete it.

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Hongbo Zheng
243651cb6c net/hns3: check PCI config space reads
This patch adds a return value check when calling the
rte_pci_read_config function.
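
A generic sketch of the pattern being added (not the exact hns3 code):

    #include <errno.h>
    #include <rte_bus_pci.h>

    /* rte_pci_read_config() returns the number of bytes read, or a
     * negative value on failure; do not use the buffer otherwise. */
    static int
    read_pci_dword(const struct rte_pci_device *pci_dev, off_t offset,
                   uint32_t *val)
    {
        int ret = rte_pci_read_config(pci_dev, val, sizeof(*val), offset);

        if (ret < 0 || (size_t)ret != sizeof(*val))
            return -EIO;
        return 0;
    }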

Fixes: cea37e513329 ("net/hns3: fix FLR reset")
Cc: stable@dpdk.org

Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Chengchang Tang
fa29fe45a7 net/hns3: support queue start and stop
The new generation hns3 network engine supports independent enabling and
disabling of a single Tx/Rx queue, so it can support the queue start
and stop feature. In addition, when different numbers of Tx and Rx
queues need to be enabled in some applications, the hns3 PMD does not
need to create fake queues to support these scenarios.

This patch adds the queue start and stop feature for the new generation
hns3 network engine, cancels the creation of fake queues on it, and
renames the previously improperly named queue-related functions to
improve readability.
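
From the application's point of view the feature maps onto the per-queue
ethdev calls; a brief usage sketch:

    #include <rte_ethdev.h>

    /* Stop a single Rx queue, e.g. to adjust it, then start it again.
     * Requires a PMD that supports independent queue start/stop, as the
     * new generation hns3 engine now does. */
    static int
    restart_rx_queue(uint16_t port_id, uint16_t queue_id)
    {
        int ret = rte_eth_dev_rx_queue_stop(port_id, queue_id);

        if (ret != 0)
            return ret;
        /* ... reconfigure the queue here if needed ... */
        return rte_eth_dev_rx_queue_start(port_id, queue_id);
    }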

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Huisong Li
040bb0f725 net/hns3: set max scheduling rate based on actual board
Currently, the max scheduling rate configuration of pg, pri and port is
set to 100000Mbps, which is the maximum bandwidth of the hns3 network
engine with revision_id equal to 0x21. However, the max scheduling rate
configuration should be set to hardware based on the actual hardware
board environment.

The max_tm_rate field in struct hns3_hw, meaning this maximum rate, is
obtained from firmware. So we should use that variable to configure the
max scheduling rate.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Huisong Li
5d78d42b31 net/hns3: offload calculating shapping to firmware
In order to allow a more flexible selection of the shaping algorithm for
different versions of the hns3 network engine, move the calculation of
the shaping parameters to firmware. If bit HNS3_TM_RATE_VLD_B of the
flag field of struct hns3_pri_shapping_cmd, hns3_pg_shapping_cmd or
hns3_port_shapping_cmd is set to 1, the firmware of network engines
whose device revision_id is greater than or equal to 0x30 will
recalculate the shaping parameters according to the xxx_rate field of
struct hns3_xxx_shapping_cmd and the opcode of the scheduling level,
and configure them to hardware.

But the driver still needs to calculate the shaping parameters and
configure the firmware, so as to be compatible with the network engine
with revision_id equal to 0x21, on which the rate and the flag are
simply ignored.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Wei Hu (Xavier)
f257760920 net/hns3: fix flow error type
The rte_flow_error_set API is used to pass detailed error information
to the caller; this patch sets a suitable type when calling the
rte_flow_error_set API.
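
For reference, a hedged sketch of how a driver reports a typed error
through this API (the item and message below are only illustrative):

    #include <errno.h>
    #include <rte_flow.h>

    /* Point the caller at the offending object with a precise error
     * type instead of a generic "unspecified" one. */
    static int
    reject_item(const struct rte_flow_item *item,
                struct rte_flow_error *error)
    {
        return rte_flow_error_set(error, ENOTSUP,
                                  RTE_FLOW_ERROR_TYPE_ITEM, item,
                                  "pattern item not supported");
    }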

Fixes: fcba820d9b9e ("net/hns3: support flow director")
Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Wei Hu (Xavier)
f8f8df765f net/hns3: fix error type when validating RSS flow action
Because the macro named RTE_FLOW_ERROR_TYPE_ACTION_CONF indicates an
action configuration and the macro named RTE_FLOW_ERROR_TYPE_ACTION
indicates a specific action, the driver needs to return the
RTE_FLOW_ERROR_TYPE_ACTION_CONF type to notify the user when an RSS
configuration in the actions list is invalid, in the internal function
hns3_parse_rss_filter called by the '.validate' ops implementation
function hns3_flow_validate.

Besides, this patch removes some unnecessary judgment lines in
hns3_parse_rss_filter.

Fixes: c37ca66f2b27 ("net/hns3: support RSS")
Cc: stable@dpdk.org

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Wei Hu (Xavier)
76d794566d net/hns3: maximize queue number
The maximum number of queues for the hns3 PF and VF drivers is 64 on
the hns3 network engine with revision_id equal to 0x21. On the hns3
network engine with revision_id equal to 0x30, the hns3 PF PMD driver
can support up to 1280 queues, and the hns3 VF PMD driver can support
up to 128 queues.

The following points need to be modified to support maximizing the queue
number and maintain better compatibility:
1) Maximize the number of queues for the hns3 PF and VF PMD drivers.
   In the current version, VF is not supported when the PF is driven by
   the hns3 PMD driver. If the maximum queue number allocated to the PF
   PMD driver is less than the total tqps_num allocated to this port,
   all remaining queues are mapped to the VF function, which is
   unreasonable. So we fix it so that all remaining queues are mapped
   to the PF function.

   Use RTE_LIBRTE_HNS3_MAX_TQP_NUM_PER_PF, which comes from the
   configuration file, to limit the queue number allocated to the PF
   device on hns3 network engines with revision_id greater than 0x30.
   PF devices still keep the maximum of 64 queues on the hns3 network
   engine with revision_id equal to 0x21.

   Remove the restriction of the macro HNS3_MAX_TQP_NUM_PER_FUNC on the
   maximum number of queues in the hns3 VF PMD driver and use the value
   allocated by the hns3 PF kernel netdev driver.

2) According to the queue number allocated to the PF device, variable
   arrays for the Rx and Tx queues are dynamically allocated to record
   the statistics of the Rx and Tx queues during the .dev_init ops
   implementation function.
3) Add an extended field in hns3_pf_res_cmd to support the case where
   the number of queues is greater than 1024.
4) Use the new base address of an Rx or Tx queue if its QUEUE_ID is
   greater than 1024.
5) Remove the queue id mask and use all bits of the actual queue_id as
   the queue_id to configure hardware.
6) Currently, bits 0~9 of qset_id in hns3_nq_to_qs_link_cmd record the
   actual qset id and bit 10 is the VLD bit configured to hardware. So
   we also need to use bits 11~15 when the actual qset_id is greater
   than 1024.
7) The number of queue sets differs between network engines. We use it
   to calculate the group number and configure it to hardware in the
   backpressure configuration.
8) Add checks on the number of Rx and Tx queues configured by the user
   when mapping queues to TCs: the Rx queue number under a single TC
   must be less than the rss_size_max supported by a single TC. Rx and
   Tx queue numbers are allocated to every TC evenly, so the Rx and Tx
   queue numbers must be an integer multiple of 2, or the redundant
   queues are not available.
9) We can specify which packets enter a queue with a specific queue
   number when creating flow table rules via the rte_flow API.
   Currently, the driver uses bits 0~9 to record the queue_id, so it is
   necessary to extend the bit field by one bit to record the queue_id
   and configure it to hardware if the queue_id is greater than 1024.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Huisong Li
9a7d3af22c net/hns3: expand number of queues for one TC up to 512
The maximum number of queues for one TC supported by the hns3 PF PMD
driver is 64 on the hns3 network engine with revision_id equal to 0x21,
while it is expanded up to 512 on the hns3 network engine with
revision_id equal to 0x30.

So the following points need to be modified to maintain better
compatibility.
1) Use an extended rss_size_max field as the maximum queue number of
   one TC supported by the PF driver.
2) The data type of the RSS redirection table needs to be changed from
   uint8_t to uint16_t.
3) rss_tc_mode modification
   The bit width of tc_offset, meaning the Rx queue index, has to expand
   from 10 bits to 11 bits. The tc_size, meaning the base-2 exponent of
   the number of queues supported on a TC, needs to expand from 3 bits
   to 4 bits.
4) RSS indirection table modification
   Currently, a 7-bit-wide field is used to record the queue index for
   the RSS indirection table. This means that the PF needs to expand the
   queue index field to 9 bits. As the RSS indirection table config
   command reserves 4 bytes to configure the RSS queue index, an extra
   field can be added. So an entry of the RSS indirection table queue
   index has two fields to set: rss_result_l and rss_result_h, where
   rss_result_l records the lower 8 bits and rss_result_h records the
   higher 1 bit.

In addition, modifications 2~4 are also compatible with the hns3 VF PMD
driver.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
John Daley
bb66d562ae net/enic: share flow actions with same signature
Flow actions are a limited resource on the Cisco VIC, but they
can be shared between flows if they are exactly the same.
Use a hash table and a reference count in the PMD to enable sharing
actions with the same signature between flows.
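
A hedged sketch of the general pattern (an rte_hash keyed by the action
signature plus a per-entry reference count); the types and the 'sig'
parameter are illustrative, not the enic implementation:

    #include <rte_hash.h>

    struct shared_action {
        uint32_t refcnt;
        /* ... hardware action handle ... */
    };

    /* Reuse an existing action whose signature matches, otherwise
     * install a new table entry; either way bump the reference count. */
    static struct shared_action *
    get_shared_action(struct rte_hash *tbl, const void *sig,
                      struct shared_action *actions)
    {
        int pos = rte_hash_lookup(tbl, sig);

        if (pos < 0) {
            pos = rte_hash_add_key(tbl, sig);
            if (pos < 0)
                return NULL;
            actions[pos].refcnt = 0;  /* fresh entry */
        }
        actions[pos].refcnt++;
        return &actions[pos];
    }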

Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
2020-10-08 19:58:10 +02:00
Kevin Laatz
2ae23f5647 raw/ioat: add fill operation
Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' rather than a source
address to transfer to the destination address. This patch also includes an
additional test case for the new operation type.
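
A brief usage sketch, assuming the fill enqueue mirrors the copy enqueue
signature with a 64-bit pattern in place of the source address:

    #include <rte_ioat_rawdev.h>

    /* Fill 'len' bytes at IOVA 'dst' with a repeating 64-bit pattern.
     * Per this series, the call should return 1 when the descriptor is
     * enqueued and 0 when the ring is full (treat as an assumption). */
    static int
    enqueue_zero_fill(int dev_id, phys_addr_t dst, unsigned int len,
                      uintptr_t dst_hdl)
    {
        return rte_ioat_enqueue_fill(dev_id, 0 /* pattern */, dst, len,
                                     dst_hdl);
    }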

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
3a377b10c2 raw/ioat: clean up use of common test function
Now that all devices can pass the same set of unit tests, eliminate the
temporary idxd_rawdev_test function and move the prototype for
ioat_rawdev_test to the proper internal header file, to be used by all
device instances.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
60927cc650 raw/ioat: add xstats tracking for idxd device
Add update of the relevant stats for the data path functions and point the
overall device struct xstats function pointers to the existing ioat
functions.

At this point, all necessary hooks for supporting the existing unit tests
are in place so call them for each device.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
a32e194474 raw/ioat: move xstats functions to common file
The xstats functions can be used by all ioat devices so move them from the
ioat_rawdev.c file to ioat_common.c, and add the function prototypes to the
internal header file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
8636b9a18e raw/ioat: create separate statistics structure
Rather than having the xstats as fields inside the main driver structure,
create a separate structure type for them.

As part of the change, when updating the stats functions that referred to
the stats by the old path, we can simplify them to use the id to directly index
into the stats structure, making the code shorter and simpler.
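
An illustration of the id-indexed layout described (field names are only
examples):

    #include <stdint.h>

    /* Keeping the counters in one dedicated struct lets an xstat id be
     * used as a direct index into it, viewed as an array of uint64_t. */
    struct xstats_sketch {
        uint64_t enqueue_failed;
        uint64_t enqueued;
        uint64_t started;
        uint64_t completed;
    };

    static uint64_t
    xstat_get(const struct xstats_sketch *xs, unsigned int id)
    {
        const uint64_t *values = (const uint64_t *)xs;

        if (id >= sizeof(*xs) / sizeof(uint64_t))
            return 0;
        return values[id];
    }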

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
2e35907532 raw/ioat: add info query for idxd device
Add the info get function for DSA devices, returning just the ring size
info about the device, same as is returned for existing IOAT/CBDMA devices.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
78ecbc66ec raw/ioat: add data path for idxd device
Add support for doing copies using DSA hardware. This is implemented by
just switching on the device type field at the start of the inline
functions. Since there is no hardware which will have both device types
present, this branch will always be predictable after the first call,
meaning it has little to no performance penalty.
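
A hedged sketch of the dispatch pattern described (the enum and struct
are illustrative, not the driver's types):

    #include <stdint.h>

    enum raw_dev_type { DEV_IOAT, DEV_IDXD };

    struct raw_dev {
        enum raw_dev_type type;  /* fixed per device */
    };

    /* The type never changes for a given device, so after the first call
     * this branch is perfectly predicted and costs almost nothing. */
    static inline int
    enqueue_copy_sketch(const struct raw_dev *dev)
    {
        if (dev->type == DEV_IDXD)
            return 1;  /* would call the DSA-specific enqueue */
        return 0;      /* would call the IOAT/CBDMA-specific enqueue */
    }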

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00