Commit Graph

32589 Commits

Chengwen Feng
157e8326e9 dma/hisilicon: support vchan status query
This patch adds support for vchan-status ops.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
2022-06-07 12:41:06 +02:00
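
A minimal sketch of how an application might query this through the generic dmadev API (rte_dma_vchan_status); the helper name and the device/vchan IDs are placeholders:

#include <rte_dmadev.h>

/* Illustrative only: returns 1 if the vchan is idle, 0 if busy, -1 if the
 * driver does not implement the vchan_status op. */
static int
dma_vchan_is_idle(int16_t dev_id, uint16_t vchan)
{
    enum rte_dma_vchan_status st;

    if (rte_dma_vchan_status(dev_id, vchan, &st) < 0)
        return -1;
    return st == RTE_DMA_VCHAN_IDLE;
}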
Chengwen Feng
e03c601acb dma/hisilicon: enhance CQ scan robustness
The CQ (completion queue) descriptors are updated by hardware and then
scanned by the driver to retrieve the hardware completion status.

This patch enhances robustness as follows:
1. Replace while (true) with a finite loop to avoid a potential endless loop.
2. Check the csq_head field in the CQ descriptor to avoid overflowing the
status array.

Fixes: 2db4f0b823 ("dma/hisilicon: add data path")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
2022-06-07 12:40:25 +02:00
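
A generic, self-contained sketch of the bounded-scan pattern described above; this is not the driver's code, and the descriptor layout and field names are hypothetical simplifications:

#include <stdint.h>

/* Hypothetical, simplified CQ entry with only the fields the fix mentions. */
struct cq_entry {
    uint16_t csq_head; /* SQ index reported by hardware */
    uint16_t valid;    /* validity bit written by hardware */
    uint16_t status;   /* completion status written by hardware */
};

/* Scan at most cq_depth entries (finite loop instead of while (true)) and
 * bounds-check csq_head before indexing the status array. */
static uint32_t
scan_cq(struct cq_entry *cq, uint32_t cq_depth, uint32_t *cq_head,
        uint16_t *sq_status, uint32_t sq_depth)
{
    uint32_t count = 0;

    while (count < cq_depth) {
        struct cq_entry *e = &cq[*cq_head];

        if (!e->valid)
            break;
        if (e->csq_head >= sq_depth)
            break; /* avoid overflowing the status array */
        sq_status[e->csq_head] = e->status;
        e->valid = 0;
        *cq_head = (*cq_head + 1) % cq_depth;
        count++;
    }
    return count;
}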
Chengwen Feng
f25265f004 test/dma: check index when no DMA completed
If no DMA request has completed, the ring_idx of the last completed
operation should still be returned via the last_idx parameter. This patch
adds a test case for it.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Tested-by: Kevin Laatz <kevin.laatz@intel.com>
2022-06-07 12:38:13 +02:00
Chengwen Feng
2301dee970 dma/hisilicon: fix index returned when no DMA completed
If no DMA request has completed, the ring_idx of the last completed
operation should still be returned via the last_idx parameter. This patch fixes it.

Fixes: 2db4f0b823 ("dma/hisilicon: add data path")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
2022-06-07 12:35:38 +02:00
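
A minimal sketch of the caller-side behavior these two commits exercise: even when zero operations have completed, last_idx should still carry the ring index of the most recent completion. The device and vchan IDs are placeholders:

#include <stdbool.h>
#include <rte_dmadev.h>

/* Illustrative only: poll for completions on vchan 0 of dmadev 0. */
static void
poll_completions(void)
{
    uint16_t last_idx = 0;
    bool has_error = false;
    uint16_t n;

    n = rte_dma_completed(0, 0, 32, &last_idx, &has_error);
    /* Even when n == 0, last_idx is expected to hold the ring_idx of
     * the last operation that completed before this call. */
    (void)n;
}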
Chengwen Feng
bebbf07219 examples/dma: add force minimal copy size parameter
This patch adds a force minimal copy size parameter
(-m/--force-min-copy-size): when copying by CPU or DMA, the real copy
size is the maximum of the mbuf's data_len and this parameter.

This parameter is designed to compare the performance between CPU copy
and DMA copy. The user can send small packets at a high rate to drive
the performance test.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
2022-06-06 23:32:32 +02:00
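
A sketch of the copy-size rule the parameter introduces; force_min_copy_size stands in for the new -m option, and the mbuf is assumed to have enough room for the forced length:

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Illustrative only: the real copy length is the larger of the packet's
 * data_len and the forced minimum copy size. */
static void
cpu_copy(struct rte_mbuf *dst, struct rte_mbuf *src,
         uint32_t force_min_copy_size)
{
    uint32_t copy_len = RTE_MAX((uint32_t)rte_pktmbuf_data_len(src),
                                force_min_copy_size);

    rte_memcpy(rte_pktmbuf_mtod(dst, void *),
               rte_pktmbuf_mtod(src, void *), copy_len);
}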
Chengwen Feng
7d3cb76fba examples/dma: fix Tx drop statistics
The Tx drop statistic was designed to be collected by the
rte_eth_dev_tx_buffer mechanism, but the application uses
rte_eth_tx_burst to send packets, so the Tx drop statistic was never
collected.

This patch removes the rte_eth_dev_tx_buffer mechanism to fix the problem.

Fixes: 632bcd9b5d ("examples/ioat: print statistics")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
2022-06-06 23:31:33 +02:00
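
A sketch of counting Tx drops directly from the rte_eth_tx_burst return value, which is the approach left once the tx_buffer mechanism is removed; the tx_dropped counter is a placeholder for the application's statistics:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative only: whatever the burst call did not accept is a drop. */
static void
send_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **pkts,
           uint16_t nb_pkts, uint64_t *tx_dropped)
{
    uint16_t nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

    if (nb_tx < nb_pkts) {
        *tx_dropped += nb_pkts - nb_tx;
        rte_pktmbuf_free_bulk(&pkts[nb_tx], nb_pkts - nb_tx);
    }
}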
Huisong Li
e0e95de2be examples/dma: fix MTU configuration
The MTU in the dma app can be configured by the 'max_frame_size' parameter,
which has a default value of 1518. It is not reasonable to use it directly
as the MTU. This patch fixes it.

Fixes: 1bb4a528c4 ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
2022-06-06 23:31:31 +02:00
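
A sketch of the conversion at issue: a max frame size of 1518 includes the Ethernet header and CRC, so it must be reduced before being used as an MTU. The constants are the standard rte_ether definitions; the helper name is hypothetical:

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Illustrative only: 1518 - 14 (header) - 4 (CRC) = 1500 byte MTU. */
static int
set_mtu_from_frame_size(uint16_t port_id, uint32_t max_frame_size)
{
    uint16_t mtu = max_frame_size - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;

    return rte_eth_dev_set_mtu(port_id, mtu);
}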
Sean Morrissey
39b5ab60df dmadev: add telemetry
Telemetry commands are now registered through the dmadev library
for the gathering of DSA stats. The corresponding callback
functions for listing dmadevs and providing info and stats for a
specific dmadev are implemented in the dmadev library.

An example usage can be seen below:

Connecting to /var/run/dpdk/rte/dpdk_telemetry.v2
{"version": "DPDK 22.03.0-rc2", "pid": 2956551, "max_output_len": 16384}
Connected to application: "dpdk-dma"
--> /
{"/": ["/", "/dmadev/info", "/dmadev/list", "/dmadev/stats", ...]}
--> /dmadev/list
{"/dmadev/list": [0, 1]}
--> /dmadev/info,0
{"/dmadev/info": {"name": "0000:00:01.0", "nb_vchans": 1, "numa_node": 0,
"max_vchans": 1, "max_desc": 4096, "min_desc": 32, "max_sges": 0,
"capabilities": {"mem2mem": 1, "mem2dev": 0, "dev2mem": 0, ...}}}
--> /dmadev/stats,0,0
{"/dmadev/stats": {"submitted": 0, "completed": 0, "errors": 0}}

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
Tested-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
2022-06-06 23:31:29 +02:00
Bruce Richardson
e345594f3c dmadev: clarify visibility of completed jobs
Clarify that once an operation has completed, the output of that
operation is visible to all cores.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
2022-06-06 23:31:23 +02:00
Thomas Monjalon
327ef50659 kni: fix build
A previous fix had #else instead of #endif.
The error message is:
	kernel/linux/kni/kni_net.c: In function ‘kni_net_rx_normal’:
	kernel/linux/kni/kni_net.c:448:2: error: #else after #else

Bugzilla ID: 1025
Fixes: c98600d4be ("kni: fix build with Linux 5.18")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2022-06-06 12:49:51 +02:00
Raja Zidane
fb96caa56a net/mlx5: support ESP item on Windows
The ESP item is not supported on Windows, yet it is expanded from the
expansion graph when trying to create a default flow to RSS all packets.

Support ESP item matching (without the ability to match on the SPI field
on Windows). Split ESP validation per OS.

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-05 17:04:48 +02:00
Michael Baum
2192599c75 net/mlx5: fix entry size in construct data ipool
The mlx5_action_construct_data structure memory is managed by an ipool
named acts_ipool.

The size of one entry in this ipool was mistakenly defined as the size of
the rte_flow_hw structure. This size is used to reset the allocated entry,
so when the size is incorrect, memory that does not belong to the entry
gets reset.

This patch defines the correct size.

Fixes: f13fab2392 ("net/mlx5: add flow jump action")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-05 17:04:46 +02:00
Suanming Mou
dfa2f53387 common/mlx5: remove unused lcore check
Since non-lcore list operations are supported, a non-lcore index is
converted to MLX5_LIST_NLCORE. In that case, there is no longer any need
to check whether the lcore index is -1.

This commit removes the unused lcore check from the list.

Fixes: 7e1cf89271 ("common/mlx5: support list non-lcore operations")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-05 17:04:43 +02:00
Qi Zhang
c4716123a1 net/iavf: remove dead code
Remove an unimplemented function call that was wrapped by
RTE_LIBRTE_IAVF_DEBUG_TX_DESC_RING.

Fixes: 1e728b0112 ("net/iavf: rework Tx path")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2022-05-30 15:29:51 +02:00
Qiming Yang
be7226980c net/iavf: increase reset complete wait count
The kernel iavf driver has a patch increasing the reset completion
wait time to reduce the "Reset never finished" case.
Follow this change in the DPDK iavf driver.
Kernel reference commit:
8e3e4b9da7e6 ("iavf: increase reset complete wait time")

Fixes: 22b123a36d ("net/avf: initialize PMD")
Cc: stable@dpdk.org

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-30 13:39:04 +02:00
Wenjing Qiao
23e35ac5f9 net/ice: fix outer L4 checksum in scalar Rx
In the scalar data path, ol_flags wrongly reports
RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN; this patch fixes the bug.

Fixes: 94005e4640 ("net/ice: fix build with 16-byte Rx descriptor")
Cc: stable@dpdk.org

Signed-off-by: Wenjing Qiao <wenjing.qiao@intel.com>
Reported-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-30 00:39:01 +02:00
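
A sketch of how an application checks the outer L4 checksum status this fix corrects, using the standard mbuf flags named in the message; the helper name is a placeholder:

#include <rte_mbuf.h>

/* Illustrative only: 1 = good, 0 = bad, -1 = unknown/not verified by HW. */
static int
outer_l4_csum_state(const struct rte_mbuf *m)
{
    uint64_t f = m->ol_flags & RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK;

    if (f == RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD)
        return 1;
    if (f == RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD)
        return 0;
    return -1;
}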
Wenjun Wu
1c735a52b3 net/iavf: fix initialization with quanta configuration
When the kernel driver does not support quanta size configuration,
it returns an error. We do not expect this to happen during the default
initialization process.

Fixes: b14e8a57b9 ("net/iavf: support quanta size configuration")

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-30 00:33:24 +02:00
Qiming Yang
d9934a8a3d net/igc: support I226 devices
Added the I226 series device IDs to the igc driver and updated the igc
guide document for the new devices.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-25 10:52:46 +02:00
Radu Nicolau
5933c656b9 net/iavf: fix device stop
Move security context destroy from device stop to device close function.
Deleting the context on device stop can prevent the application from
properly cleaning and releasing resources.

Fixes: 6bc987ecb8 ("net/iavf: support IPsec inline crypto")
Cc: stable@dpdk.org

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-25 10:50:55 +02:00
Radu Nicolau
3940cd9b8c net/iavf: fix device initialization without inline crypto
When the inline crypto VF capability flag is set, also check whether the
feature is enabled; otherwise initialization fails even when inline crypto
is not required.

Fixes: 6bc987ecb8 ("net/iavf: support IPsec inline crypto")
Cc: stable@dpdk.org

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Wenjun Wu
149280731b net/iavf: fix race condition with Rx timestamp offload
In multi-core cases with Rx timestamp offload, if packets arrive too
fast, the AQ command used to get the PHC time can be left pending.

This patch adds a spinlock to fix the issue. To avoid the PHC time being
frequently overwritten, the related variables are moved into the
iavf_rx_queue structure, so each queue handles timestamp calculation by itself.

Fixes: b5cd735132 ("net/iavf: enable Rx timestamp on flex descriptor")
Fixes: 33db16136e ("net/iavf: improve performance of Rx timestamp offload")

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
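
A generic sketch of the locking pattern described (not the driver code): a spinlock serializes the shared time read so concurrent queues cannot interleave it. All names are placeholders:

#include <stdint.h>
#include <rte_spinlock.h>

struct phc_ctx {
    rte_spinlock_t lock;
    uint64_t phc_time;
};

/* Illustrative only: hw_get_time stands in for the AQ command that reads
 * the PHC time in the real driver. */
static uint64_t
read_phc_time(struct phc_ctx *ctx, uint64_t (*hw_get_time)(void))
{
    uint64_t t;

    rte_spinlock_lock(&ctx->lock);
    t = hw_get_time();
    ctx->phc_time = t;
    rte_spinlock_unlock(&ctx->lock);
    return t;
}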
Ting Xu
bc0e85586e net/iavf: support VF RSS flow rule with raw pattern
Enable Protocol Agnostic Flow Offloading for RSS hash in the VF. It supports
raw pattern flow rule creation in the VF based on the Parser Library feature.
The VF parses the spec and mask input of the raw pattern, and passes them to
the kernel driver to create the flow rule. The current rte_flow raw API is utilized.

command example:
RSS hash for ipv4-src-dst:
flow create 0 ingress pattern raw pattern spec
00000000000000000000000008004500001400004000401000000000000000000000
pattern mask
0000000000000000000000000000000000000000000000000000ffffffffffffffff /
end actions rss queues end / end

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Junfeng Guo
444a7d096e net/iavf: enable flow rule with raw pattern
This patch enables Protocol Agnostic Flow (raw flow) Offloading for Flow
Director (FDIR) in AVF, based on the Parser Library feature and the
existing rte_flow `raw` API.

The input spec and mask of the raw pattern are first parsed via the
Parser Library, and then passed to the kernel driver to create the
flow rule.

Similar to the ice PMD's implementation, each raw flow requires:
1. A byte string of raw target packet bits.
2. A byte string containing the mask of the target packet.

Here is an example:
FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the ipv4
proto) is optional in our cases. To avoid redundancy, we just omit the
mask of 0x0800 (with 0xFFFF) in the mask byte string example. The '0x'
prefix for the spec and mask byte (hex) strings is also omitted here.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Junfeng Guo
bdd7558f5b common/iavf: support raw packet in protocol header
The patch extends the existing virtchnl_proto_hdrs structure to allow the
VF to pass a pair of buffers as packet data and mask that describe the
match pattern of a filter rule. The kernel PF driver is then requested
to parse the pair of buffers and figure out the low-level hardware metadata
(ptype, profile, field vector, ...) to program the expected FDIR or RSS
rules.

Also update the proto_hdrs template init to align with the virtchnl changes.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Qiming Yang
c43bfb7d59 doc: update matching versions in i40e guide
Add the recommended matching list for the i40e PMD in DPDK 21.05,
21.08, 21.11 and 22.03, and add a known issue when the firmware is
upgraded to version 8.4 or higher.

Cc: stable@dpdk.org

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Ke Zhang
a08f9cb698 net/iavf: fix Rx queue interrupt setting
For the Rx queue interrupt setting, when the VF Rx interrupt is
disabled (INTENA=0), there are two ways to write back
descriptors to host memory:

1) Set the WB_ON_ITR bit to 0 in the Interrupt Dynamic Control Register:
completed descriptors are posted to host memory according to
the internal descriptor cache policy (in other words, when a
full cache line is available for write-back).

An internal descriptor is 16 or 32 bytes and a cache line is
64 or 128 bytes, per the datasheet:
PCIe Global Config 2 - GLPCI_CNF2 (0x000BE004; RO)
so a full cache line can hold 4 descriptors, meaning the NIC
sends 4 packets to the host only when a full cache line is
available.

2) Set the WB_ON_ITR bit to 1 in the Interrupt Dynamic Control Register:
completed descriptors also trigger the ITR. Following ITR
expiration, all leftover completed descriptors are posted to
host memory.

The NIC sends a packet to the host even if only one
descriptor is completed.

Change 1) to 2) to make sure the VF sends the packet to the host even
if only one Rx packet is ready in hardware.

Fixes: d6bde6b5ea ("net/avf: enable Rx interrupt")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Ke Zhang
fced83c122 net/iavf: fix mbuf release in multi-process
In a multi-process environment, the secondary process operates on the
shared memory and changes the function pointers of the primary process,
so the function address cannot be found when the primary process releases
mbufs, resulting in a crash.

Fixes: 319c421f38 ("net/avf: enable SSE Rx Tx")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Qiming Yang
1fa739c3f2 net/iavf: fix queue start exception handling
If any queue fails to start during dev_start, all previously started
queues should be stopped.

Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Cc: stable@dpdk.org

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-24 04:53:37 +02:00
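
A generic sketch of the rollback behavior described: if queue i fails to start, the queues started before it are stopped again before returning. The callbacks stand in for the PMD's queue start/stop handlers:

#include <stdint.h>

/* Illustrative only: start queues 0..nb_queues-1, undo on first failure. */
static int
start_all_queues(uint16_t nb_queues,
                 int (*queue_start)(uint16_t), void (*queue_stop)(uint16_t))
{
    uint16_t i;

    for (i = 0; i < nb_queues; i++) {
        if (queue_start(i) != 0) {
            while (i-- > 0)
                queue_stop(i);
            return -1;
        }
    }
    return 0;
}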
Wenxuan Wu
2184f7cdee net/i40e: fix max frame size config at port level
Previously, the max frame size could only be set when the link was up, and
the wait time was 1 second. A 10G_BASET startup time longer than 1 second
would result in failure.

Actually, the max frame size of media type I40E_MEDIA_TYPE_BASET can be set
regardless of link status.

This patch omits the link status check for 10G_MEDIA_TYPE_BASET.

Fixes: a4ba773679 ("net/i40e: enable maximum frame size at port level")
Cc: stable@dpdk.org

Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Acked-by: Yuying Zhang <yuying.zhang@intel.com>
2022-05-24 04:53:37 +02:00
Yiding Zhou
676d986b4b net/iavf: fix crash after VF reset failure
Some pointers are set to NULL when iavf_dev_reset() fails, for example
vf->vf_res, vf->vsi_res, vf->rss_key and so on. APIs that access these
NULL pointers will trigger a segfault.

This patch adds a 'closed' flag to indicate that the VF is closed,
and rejects API calls in this state to avoid a coredump.

Fixes: e74e1bb628 ("net/iavf: enable port reset")
Cc: stable@dpdk.org

Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-19 11:20:36 +02:00
Kevin Liu
24e6e0363e net/ice: fix MTU info for DCF
The DCF module is missing the maximum and minimum MTU value settings.

This patch adds the maximum and minimum MTU settings so that the MTU
value is calculated correctly.

Fixes: bf89db4409 ("net/ice: complete device info get in DCF")
Cc: stable@dpdk.org

Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-19 11:20:36 +02:00
Yuying Zhang
8b95092b7f net/ice/base: fix direction of flow that matches any
Both Tx and Rx packets were dropped when creating a drop-any rule for
the ingress direction only; the root cause is that the recipe did not
contain direction flag matching.

This patch adds the packet flag that represents the direction of the
source interface to solve the issue.

Fixes: 92317961a7 ("net/ice: support drop any and steer all to queue")
Cc: stable@dpdk.org

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-19 11:20:36 +02:00
Wenjun Wu
b375bfd2cb net/ice: add warning for unsupported TM configuration
Priority configuration is enabled in level 3 and level 4.
Weight configuration is enabled in level 4.
This patch adds a warning log for unsupported priority
and weight configurations.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:23:11 +02:00
Wenjun Wu
2660b8b329 net/ice: support queue weight configuration
This patch adds queue weight configuration support.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:23:11 +02:00
Wenjun Wu
eca9d161bd net/ice: support queue and queue group priority config
This patch adds queue and queue group priority configuration
support. The highest priority is 0, and the lowest priority
is 7.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:23:11 +02:00
Ting Xu
8c481c3bb6 net/ice: support queue and queue group bandwidth limit
Enable the basic TM API for PF only. Support adding profiles and queue
nodes. Only max bandwidth is supported in profiles. Profiles can be
assigned to target queues and queue groups. To set up the exact queue
group, the topology needs to be reconfigured by deleting and then
recreating the queue nodes. Only TC0 is valid.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:23:10 +02:00
Wenjun Wu
6baa15684c net/ice/base: support priority configuration of exact node
This patch adds priority configuration support for the exact
node in the scheduler tree.
This function does not require additional scheduler lock calls.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:22:43 +02:00
Wenjun Wu
803e5de0ce net/ice/base: support queue BW allocation configuration
This patch adds BW allocation support for the queue scheduling node
to support WFQ at the queue level.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:22:43 +02:00
Wenjun Wu
8f7a83e193 net/ice/base: fix getting sched node from ID type
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also takes the scheduler lock,
which causes a deadlock.

This patch replaces ice_sched_get_node with
ice_sched_find_node_by_teid to solve the problem.

Fixes: 93e84b1bfc ("net/ice/base: add basic Tx scheduler")
Cc: stable@dpdk.org

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-18 06:22:43 +02:00
Jeff Daly
0f9fb100f6 net/ixgbe: add option for link up check on pin SDP3
1ca05831b9 added a check that SDP3 (used as a TX_DISABLE output to the
SFP cage on these cards) is not asserted, to avoid incorrectly reporting
link up when the SFP's laser is turned off.

ff8162cb95 limited this workaround to fiber ports.

This patch:
* Adds the devarg 'fiber_sdp3_no_tx_disable', since not all fiber ixgbe
  devices use SDP3 as TX_DISABLE

Fixes: 1ca05831b9 ("net/ixgbe: fix link status")
Fixes: ff8162cb95 ("net/ixgbe: fix link status")
Cc: stable@dpdk.org

Signed-off-by: Jeff Daly <jeffd@silicom-usa.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-17 02:47:34 +02:00
Alvin Zhang
9a7980cfdf net/ice: complete VLAN offload capability for DCF
The new VLAN virtchnl opcodes introduce new capabilities such as VLAN
filtering, stripping and insertion.

The DCF needs to query the VLAN capabilities based on the current device
configuration first.

The DCF is able to configure the inner VLAN filter when port VLAN is
enabled, based on negotiation; and the DCF is able to configure the outer
VLAN (0x8100) if port VLAN is disabled, to be compatible with legacy mode.

When the port VLAN is updated by the DCF, the DCF needs to reset to query
the new VLAN capabilities.

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-17 02:46:19 +02:00
Jie Wang
1a09fa53d8 net/ice/base: enable flow director for IPv6 next protocol
To support the new DDP and stay compatible with the old version of the DDP
file, the API function 'check_ddp_support_proto_id' is added to detect
whether the required protocol ID is supported by the current DDP file.

Add support for the new protocol ID IPV6_NEXT_PROTO in PF FDIR when the
current DDP is the new one, and keep the existing behavior with the old
version DDP.

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-17 02:44:21 +02:00
Yiding Zhou
865df516f9 net/iavf: fix data path selection
If the PF driver does not support a flex Rx descriptor required by the VF,
the legacy descriptor format will be negotiated to configure the hardware
queue.

The patch fixes the issue where an Rx data path that handles flexible
descriptors (e.g.
iavf_recv_scattered_pkts_vec_avx512_flex_rxd) is selected while the
actual hardware queues are configured as legacy due to the above scenario,
which causes a coredump.

Fixes: 12b435bf8f ("net/iavf: support flex desc metadata extraction")
Cc: stable@dpdk.org

Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-17 02:43:08 +02:00
Wenjun Wu
f3548646db net/iavf: fix memory leak
Setting an invalid quanta size from devargs causes a memory leak, as
reported by Coverity.

The patch fixes the issue by correcting the error handling.

Coverity issue: 378017
Fixes: b14e8a57b9 ("net/iavf: support quanta size configuration")

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-05-17 02:40:09 +02:00
Jiri Slaby
c98600d4be kni: fix build with Linux 5.18
Since commit 2655926aea9b ("net: Remove netif_rx_any_context() and
netif_rx_ni().") in 5.18, netif_rx_ni() no longer exists as netif_rx()
can be called from any context. So define HAVE_NETIF_RX_NI for older
releases and call the appropriate function in kni_net.

netif_rx_ni() must be used on older kernels since netif_rx() there
might lead to deadlocks or other problems.

Cc: stable@dpdk.org

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-06-05 10:04:53 +02:00
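
A sketch of the compatibility pattern the commit describes, on the kernel side; HAVE_NETIF_RX_NI is assumed to be defined by the KNI build for pre-5.18 kernels, and the wrapper name is hypothetical:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Illustrative only: pick the receive entry point by kernel version. */
static void
kni_rx_compat(struct sk_buff *skb)
{
#ifdef HAVE_NETIF_RX_NI
    netif_rx_ni(skb);  /* pre-5.18: netif_rx() is not safe from any context */
#else
    netif_rx(skb);     /* 5.18+: callable from any context */
#endif
}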
Geoffrey Le Gourriérec
eadc35df59 net/mlx5: fix statistics read on Linux
This patch encompasses a few fixes carried by a previous patch
that aimed to support bonding device stats counting.

- If mlx5_os_read_dev_stat fails, it returns 1 instead of a
  negative value, causing mlx5_xstats_get to return an invalid
  number of counters. Since this error is not blocking, do not
  mix the ret value with the mlx5_os_read_dev_stat return value.

  This allows avoiding the very annoying log:
  "n_xstats != n_xstats_names => skipping"

- Invert the check for mlx5_os_read_dev_stat(), currently leading
  us to store the result if the function failed, and use a
  backup value if it succeeded, which is the opposite of what we
  actually want. Revert to the original (correct) test.

- Add missing test on _mlx5_os_read_dev_counters() to prevent
  using trash stats values.

Fixes: 7ed15acdcd ("net/mlx5: improve xstats of bonding port")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Signed-off-by: Geoffrey Le Gourriérec <geoffrey.le_gourrierec@6wind.com>
Tested-by: Bassam Zaid AlKilani <bzalkilani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-02 17:01:11 +02:00
Rongwei Liu
2bd03a4361 net/mlx5: add Rx drop counters to xstats
Add two kinds of Rx drop counters to DPDK xstats, both with
physical port scope.

1. rx_prio[0-7]_buf_discard
   The number of unicast packets dropped due to lack of shared
   buffer resources.
2. rx_prio[0-7]_cong_discard
   The number of packets dropped by the Weighted Random
   Early Detection (WRED) function.

Prio[0-7] is determined by the VLAN PCP value, which is 0 by default.
Both counters are retrieved from the kernel ethtool API, which ultimately
issues a PRM command.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-06-01 09:49:44 +02:00
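
A sketch of how these counters could be read once exposed, through the standard ethdev xstats-by-name API; the counter name string follows the pattern given above, and the helper name is a placeholder:

#include <stdint.h>
#include <rte_ethdev.h>

/* Illustrative only: read one xstat (e.g. "rx_prio0_buf_discard") by name. */
static int
get_xstat_by_name(uint16_t port_id, const char *name, uint64_t *value)
{
    uint64_t id;

    if (rte_eth_xstats_get_id_by_name(port_id, name, &id) != 0)
        return -1;
    return rte_eth_xstats_get_by_id(port_id, &id, value, 1) == 1 ? 0 : -1;
}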
Raja Zidane
1485d961e2 net/mlx5: fix Tx recovery
When an error occurs in Tx and the queue is moved to the ERROR state, it
is not recoverable: during recovery its state cannot be modified
to INIT. To modify the state from RESET to INIT, the port must be
passed in the modify attributes, and in the ERROR to READY
modification path it was not provided.

Provide the port number when changing the state from RESET to INIT.

Fixes: 3a87b964ed ("net/mlx5: create Tx queues with DevX")
Cc: stable@dpdk.org

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
2022-06-01 09:49:42 +02:00
Shun Hao
96ca87da4f net/mlx5: validate yellow meter action
Yellow meter action support is added to meter hierarchy validation.
If one color uses a meter action, the other can only use the NULL action
or the same meter action. Only shared meters are supported.

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-01 09:49:41 +02:00
Shun Hao
3dc7afa2fa net/mlx5: support yellow meter action for hierarchy tag rule
When a hierarchy meter is shared by other ports, all meter policies in the
hierarchy need to be iterated to create tag rules that set the next meter
ID in the packet, which will be used by the related meter drop count.
This patch adds the tag rule for yellow support in the hierarchy, so both
green and yellow policy flows can set the correct meter ID.

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-01 09:49:38 +02:00