A local test found that repeatedly starting and stopping a port while
the SSE vector path is receiving a chained mbuf list eventually
exhausts the mbuf pool. The root cause is that, when the port is
stopped, the mbuf referenced by the pkt_first_seg pointer is not
released, so the resource leaks.
The fix is to check the pointer when the port is stopped and free the
corresponding mbuf if it is not NULL.
Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
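
A minimal sketch of the pattern described above (illustrative only, not
the exact i40e code); rte_pktmbuf_free() releases the whole segment
chain hanging off pkt_first_seg:

    #include <rte_mbuf.h>

    /* On port stop, a partially received chained packet held by the Rx
     * queue must be freed, otherwise its mbufs leak.
     */
    static void
    rx_queue_drop_pending_chain(struct rte_mbuf **pkt_first_seg,
                                struct rte_mbuf **pkt_last_seg)
    {
        if (*pkt_first_seg != NULL) {
            rte_pktmbuf_free(*pkt_first_seg); /* frees every segment */
            *pkt_first_seg = NULL;
        }
        *pkt_last_seg = NULL;
    }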
The VIRTCHNL_OP_QUERY_FDIR_FILTER opcode is not used, so remove it.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add TCP/UDP/SCTP header checksum field selectors; they can be used to
create FDIR or RSS rules based on the TCP/UDP/SCTP header checksum.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The QFI is the 6-bit "QoS Flow Identifier" within the GTPU extension
header. Add the virtchnl QFI fields of GTPU UL/DL to support AVF FDIR.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Update the MTU value based on the PTP enable status and reserve eight
bytes in the Tx path to accommodate VLAN tags.
If PTP is enabled, the maximum allowed MTU is 9200; otherwise it is 9208.
Fixes: b5dc3140448e ("net/octeontx2: support base PTP")
Cc: stable@dpdk.org
Signed-off-by: Hanumanth Reddy Pothula <hpothula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add a new callback for reading the link status. The PF can read its
link status and forward it to a VF once the VF comes up.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Currently, a link event is only sent to the PF by the AF as soon as it
comes up, or in case of any physical change in the link. The PF
broadcasts these link events to all its VFs as soon as it receives
them. But no event is sent when a new VF comes up, so it will not have
the link status.
Add support for sending the link status to a VF once it comes up
successfully.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add ROC API to configure dual VLAN tag addition and removal.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Since the addition of support for runtime queue setup,
receive queues that are started by default no longer
have the correct state. Fix this by setting the state
when a port is started.
Fixes: 0105ea1296c9 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
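
A minimal sketch of the idea, assuming the driver-side ethdev header
and ignoring deferred-start queues; the real patch sets the state in
the bnxt start path:

    #include <ethdev_driver.h>

    /* Mark every Rx queue as started when the port starts, so the queue
     * state matches reality after runtime queue setup was introduced.
     */
    static void
    mark_rx_queues_started(struct rte_eth_dev *eth_dev)
    {
        uint16_t i;

        for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
            eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
    }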
The RSS expansion is based on a DFS algorithm that traverses the
possible expansion paths.
The current implementation breaks out if it reaches the terminator of
the "next nodes" array, instead of going backwards to try the next path.
For example:
testpmd> flow create 0 ingress pattern eth / ipv6 / udp / vxlan / end
actions rss level 2 types tcp end / end
The paths found are:
ETH IPV6 UDP VXLAN END
ETH IPV6 UDP VXLAN ETH IPV4 TCP END
ETH IPV6 UDP VXLAN ETH IPV6 TCP END
The traversal stopped after reaching the terminator of the "next nodes"
array of the ETH node, missing the rest of the nodes in the "next
nodes" array of the VXLAN node.
The fix is to go backwards when reaching the terminator of the current
level and find if there is a "next node" to start traversing a new path.
Using the above example, the flows will be:
ETH IPV6 UDP VXLAN END
ETH IPV6 UDP VXLAN ETH IPV4 TCP END
ETH IPV6 UDP VXLAN ETH IPV6 TCP END
ETH IPV6 UDP VXLAN IPV4 TCP END
ETH IPV6 UDP VXLAN IPV6 TCP END
The traversal will find additional paths because it traverses through
the whole "next nodes" array of the VXLAN node.
Fixes: 4ed05fcd441b ("ethdev: add flow API to expand RSS flows")
Cc: stable@dpdk.org
Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
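
The fix is essentially classic backtracking: on reaching the terminator
of the current level, pop back to the parent and continue with its next
child instead of stopping. A stand-alone sketch of the pattern
(illustrative only, not the mlx5 code; node id 0 is reserved as the
terminator and depth bound checks are omitted):

    #include <stdio.h>

    #define MAX_DEPTH 16

    struct node {
        const char *name;
        const int *next; /* zero-terminated array of child node ids */
    };

    static void
    walk_paths(const struct node *graph, int start)
    {
        int stack[MAX_DEPTH]; /* node ids on the current path */
        int pos[MAX_DEPTH];   /* per-level index into the "next nodes" array */
        int depth = 0;

        stack[0] = start;
        pos[0] = 0;
        while (depth >= 0) {
            int cur = stack[depth];
            int nxt = graph[cur].next[pos[depth]];

            if (nxt == 0) {
                /* Terminator of this level: emit the path ending at the
                 * current node, then go backwards instead of stopping.
                 */
                for (int i = 0; i <= depth; i++)
                    printf("%s ", graph[stack[i]].name);
                printf("END\n");
                depth--;              /* back to the parent ...     */
                if (depth >= 0)
                    pos[depth]++;     /* ... and try its next child */
                continue;
            }
            pos[depth + 1] = 0;       /* descend into the child     */
            stack[++depth] = nxt;
        }
    }

    int
    main(void)
    {
        static const int n_vxlan[] = { 2, 3, 0 }; /* VXLAN -> ETH, IPV4 */
        static const int n_eth[] = { 3, 0 };      /* ETH -> IPV4        */
        static const int n_ipv4[] = { 0 };        /* leaf               */
        static const struct node graph[] = {
            { "", NULL }, { "VXLAN", n_vxlan },
            { "ETH", n_eth }, { "IPV4", n_ipv4 },
        };

        walk_paths(graph, 1); /* prints every path, including the ones
                               * reached only by backtracking */
        return 0;
    }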
The RSS expansion algorithm uses a graph to find the possible
expansion paths. A graph node with the 'explicit' flag is skipped
if it is not found in the flow pattern.
The current implementation misses the case where the node with the
explicit flag is in the middle of the expanded path.
For example:
testpmd> flow create 0 ingress pattern eth / ipv6 / udp / vxlan / end
actions rss level 2 types tcp end / end
The VLAN node has the explicit flag, but it is currently included in the
expanded flows:
ETH IPV6 UDP VXLAN END
ETH IPV6 UDP VXLAN ETH VLAN IPV4 TCP END
ETH IPV6 UDP VXLAN ETH VLAN IPV6 TCP END
The fix is to skip the nodes with the explicit flag while iterating over
the possible expansion paths. Using the above example, the flows will be:
ETH IPV6 UDP VXLAN END
ETH IPV6 UDP VXLAN ETH IPV4 TCP END
ETH IPV6 UDP VXLAN ETH IPV6 TCP END
Fixes: 3f02c7ff6815 ("net/mlx5: fix RSS expansion for inner tunnel VLAN")
Cc: stable@dpdk.org
Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Use max-pkt-len only if jumbo frames offload is requested
since otherwise this field isn't valid.
Fixes: 8b90e4358112 ("net/virtio: set offload flag for jumbo frames")
Fixes: 4e8169eb0d2d ("net/virtio: fix Rx scatter offload")
Cc: stable@dpdk.org
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
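
A minimal sketch of the guarded use, assuming the pre-21.11
rte_eth_rxmode fields (max_rx_pkt_len and DEV_RX_OFFLOAD_JUMBO_FRAME);
the fallback value is only illustrative:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Consult max_rx_pkt_len only when the jumbo frame offload was
     * requested; the field is not valid otherwise.
     */
    static uint32_t
    effective_frame_size(const struct rte_eth_rxmode *rxmode)
    {
        if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
            return rxmode->max_rx_pkt_len;
        return RTE_ETHER_MAX_LEN; /* standard frame size as fallback */
    }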
When virtio_init_queue() returns an error, the memory of the vq is
freed, but hw->vqs[queue_idx] is not reset to NULL.
As a result, the already freed vq memory is freed again later in
virtio_free_queues().
Fixes: 69c80d4ef89b ("net/virtio: allocate queue at init stage")
Cc: stable@dpdk.org
Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
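
A minimal sketch of the pattern with the surrounding driver types
abstracted away; the essence of the fix is clearing the stale slot on
the error path so a later bulk cleanup cannot free the same memory
twice:

    #include <stdint.h>
    #include <rte_malloc.h>

    /* On a failed queue init, free the vq (allocated with rte_zmalloc())
     * and clear its slot so a virtio_free_queues()-style cleanup does
     * not free it again.
     */
    static int
    init_queue_fail_cleanup(void **vqs, uint16_t queue_idx, int err)
    {
        rte_free(vqs[queue_idx]);
        vqs[queue_idx] = NULL; /* prevent a second free later */
        return err;
    }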
The callfds[] array stores eventfds sequentially for Rx and Tx vq.
Fixes: 3d4fb6fd2505 ("net/virtio-user: support Rx interrupt")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
There is no reason to re-register an interrupt handler for LSC if this
feature was not requested in the first place.
A simple use case is asking for Rx interrupts without the LSC interrupt.
Fixes: 26b683b4f7d0 ("net/virtio: setup Rx queue interrupts")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Report max/min/align descriptor limits in the device info get callback.
Before calling the callback, rte_eth_dev_info_get() provides default
values of zero for nb_min and UINT16_MAX for nb_max, which are not
correct for the driver, so one cannot rely on them.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
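
A sketch of what reporting the limits looks like; the numbers are
placeholders, not the values the virtio driver actually reports:

    #include <rte_ethdev.h>

    /* Fill the descriptor limits a driver would report from its
     * info-get callback instead of relying on the generic defaults.
     */
    static void
    report_desc_limits(struct rte_eth_dev_info *dev_info)
    {
        dev_info->rx_desc_lim.nb_max = 4096;
        dev_info->rx_desc_lim.nb_min = 64;
        dev_info->rx_desc_lim.nb_align = 64;
        dev_info->tx_desc_lim = dev_info->rx_desc_lim; /* same limits here */
    }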
The descriptor number may be set smaller than the queue size for the
split-queue vectorized Rx path. Pointers to mbufs for received packets
are obtained from the SW ring, which is initially filled at the end of
queue setup in virtio_dev_rx_queue_setup_finish(). Only the beginning
of the SW ring, up to the descriptor number, is filled. At the queue
size offset from the beginning of the SW ring, pointers to a fake mbuf
are also set for wrapping purposes. So the ring may contain a hole of
invalid pointers between the descriptor number offset and the queue
size offset, and the split vectorized Rx routines could write to these
invalid addresses, since they use the ring up to the queue size. Fix
this by setting the descriptor number to the queue size on Rx queue
setup.
Fixes: fc3d66212fed ("virtio: add vector Rx")
Cc: stable@dpdk.org
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The Rx queue setup callback allows using the whole ring when the
descriptor number argument equals zero. There is no point in handling
zero in any way, since the generic rte_eth_rx_queue_setup() never
passes zero; it substitutes fallback values instead.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The Rx queue setup finish function may report a wrong number of
allocated mbufs when the in-order feature is used. Fix the function to
not ignore allocation errors and to count only the successfully
allocated buffers.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
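
A minimal sketch of the counting fix with a hypothetical refill loop;
the point is to stop at the first allocation failure and report only
the buffers actually obtained:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Refill up to 'num' buffers and return how many were really
     * allocated, instead of assuming every allocation succeeded.
     */
    static int
    refill_count_allocated(struct rte_mempool *mp, struct rte_mbuf **ring,
                           int num)
    {
        int i;

        for (i = 0; i < num; i++) {
            struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

            if (m == NULL)
                break; /* do not ignore the allocation error */
            ring[i] = m;
        }
        return i; /* only successfully allocated buffers are counted */
    }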
This API was introduced in 18.08; remove the experimental tag to
promote it to the stable state.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Error occurs when configuring meson with --buildtype=minsize
with GCC 11.1.0:
drivers/vdpa/mlx5/mlx5_vdpa_mem.c: In function ‘mlx5_vdpa_mem_register’:
drivers/vdpa/mlx5/mlx5_vdpa_mem.c:183:24: error:
initialization of ‘uint64_t’ {aka ‘long unsigned int’} from ‘void *’
makes integer from pointer without a cast [-Werror=int-conversion]
| uint64_t gcd = NULL;
| ^~~~
drivers/vdpa/mlx5/mlx5_vdpa_mem.c:244:75: error:
‘mode’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
| klm_size = mode == MLX5_MKC_ACCESS_MODE_KLM ?
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| KLM_SIZE_MAX_ALIGN(empty_region_sz) : gcd;
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Error occurs when configuring meson with --buildtype=minsize
with GCC 11.1.0:
drivers/regex/mlx5/mlx5_regex_fastpath.c:398:17: error:
‘len’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
| complete_umr_wqe(qp, sq, &qp->jobs[mkey_job_id], sq->pi,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| klm_num, len);
| ~~~~~~~~~~~~~
drivers/regex/mlx5/mlx5_regex_fastpath.c:315:31: note: ‘len’ was declared here
| uint32_t klm_num = 0, len;
| ^~~
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Error occurs when configuring meson with --buildtype=minsize
with GCC 11.1.0:
In function ‘__internal_ram_wr_relaxed’,
inlined from ‘internal_ram_wr’ at ecore_int_api.h:166:2,
inlined from ‘qede_update_rx_prod.constprop’ at qede_rxtx.c:736:2:
drivers/net/qede/base/bcm_osal.h:136:9: error:
‘rx_prods’ is used uninitialized [-Werror=uninitialized]
| rte_write32_relaxed((_val), (_reg_addr))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ecore_int_api.h:151:17: note: in expansion of macro ‘DIRECT_REG_WR_RELAXED’
| DIRECT_REG_WR_RELAXED(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i],
| ^~~~~~~~~~~~~~~~~~~~~
drivers/net/qede/qede_rxtx.c: In function ‘qede_update_rx_prod.constprop’:
drivers/net/qede/qede_rxtx.c:724:33: note: ‘rx_prods’ declared here
| struct eth_rx_prod_data rx_prods = { 0 };
| ^~~~~~~~
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
Acked-by: Rasesh Mody <rmody@marvell.com>
The PCI and vdev bus drivers cannot be disabled for DPDK builds and
special logic is put in place to not skip them when they are specified
in the disable list. This logic is broken though, as the
driver-specific meson.build file is only included in the "else" leg of
the condition check. This means that when they are specified as
disabled the PCI and vdev buses are not disabled, but neither are their
source files compiled.
Fix this by moving the "subdir()" call into the next "if build" block,
ensuring that if not disabled the sources are always included. To take
account of the fact that the subdir call could itself disable the
driver, we add a break call into the following loop to ensure we quickly
fall through to the following block which stops processing appropriately
if the driver is disabled.
Fixes: 2e33309ebe03 ("config: enable/disable drivers in Arm builds")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
Apply to DCF the same fix that was applied to iavf in
commit ead06572bd8f ("net/iavf: fix performance with writeback policy").
Fixes: 4b0d391f0eab ("net/ice: add queue config in DCF")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Lijuan Tu <lijuan.tu@intel.com>
A local test found that repeatedly starting and stopping a port while
the SSE vector path is receiving a chained mbuf list eventually
exhausts the mbuf pool. The root cause is that, when the port is
stopped, the mbuf referenced by the pkt_first_seg pointer is not
released, so the resource leaks.
The fix is to check the pointer when the port is stopped and free the
corresponding mbuf if it is not NULL.
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
In the eth_ixgbevf_dev_init() and eth_ixgbe_dev_init() functions,
memory is allocated for the MAC addresses and stored in the
eth_dev->data->mac_addrs member. If a subsequent step fails, the MAC
address memory must be released with rte_free() to avoid a leak.
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
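
A minimal sketch of the error-path cleanup; the helper name is
hypothetical, and only the rte_free() of the allocated MAC address
array mirrors the fix:

    #include <rte_malloc.h>
    #include <ethdev_driver.h>

    /* On a failed device init, release the MAC address memory that was
     * allocated earlier and stored in dev->data->mac_addrs.
     */
    static int
    dev_init_fail_cleanup(struct rte_eth_dev *eth_dev, int err)
    {
        rte_free(eth_dev->data->mac_addrs);
        eth_dev->data->mac_addrs = NULL;
        return err;
    }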
In the ixgbevf_dev_start() function, after the Rx/Tx queues have been
initialized, if a subsequent step fails, the Rx/Tx queues need to be
released. The patch fixes this queue resource leak.
Fixes: 0eb609239efd ("ixgbe: enable Rx queue interrupts for PF and VF")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
In the ixgbe_fdir_filter_init() and ixgbe_l2_tn_filter_init()
functions, the hash handle created at the beginning is not released in
the subsequent error branches.
Fixes: 080e3c0ee989 ("net/ixgbe: store flow director filter")
Fixes: d0c0c416ef1f ("net/ixgbe: store L2 tunnel filter")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The DCF PMD needs to support rte_eth_dev_reset(). The reason is that
when a DCF instance is killed, all the flow rules still exist in
hardware; when the DCF reconnects, it has already lost the flow
context, and if the application wants to create new rules, it may fail
because the firmware reports that the rules already exist.
The rte_eth_dev_reset() API provides a more elegant way for the
application to reset the DCF when a reconnect happens.
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
According to flow action MARK definition, PMDs must set both
PKT_RX_FDIR and PKT_RX_FDIR_ID if the packet contains a mark.
Fixes: 1aacc3d388d3 ("net/sfc: support user mark and flag Rx for EF100")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
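
A minimal sketch of the requirement from the mbuf point of view, using
the pre-21.11 flag names and a hypothetical mark value taken from the
Rx descriptor:

    #include <rte_mbuf.h>

    /* When a packet carries a flow MARK, both flags must be set and the
     * mark value goes into the FDIR ID field of the mbuf.
     */
    static void
    set_flow_mark(struct rte_mbuf *m, uint32_t mark)
    {
        m->hash.fdir.hi = mark;
        m->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
    }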
This patch fixes Tx push capability to be compatible with Kunpeng 920,
as Tx push is only supported on Kunpeng 930.
Fixes: 23e317dd1fbf ("net/hns3: support Tx push quick doorbell for performance")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The new task queue pair reset command is used incorrectly, so the new
command does not take effect.
This patch fixes the incorrect use.
Fixes: 6911e7c22c61 ("net/hns3: fix long task queue pairs reset time")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The used_rx_queues field only takes effect after the device is
started, and its value is incorrect before then. Therefore, it is not
suitable for verifying the queue index of a flow action before the
device is started.
For example, enabling the dedicated queue of a bonding device
configures a queue flow action before its slave devices are started.
The above problem makes this reasonable flow action configuration fail.
This patch uses nb_rx_queues from the configuration phase for the
verification instead.
Fixes: a951c1ed3ab5 ("net/hns3: support different numbers of Rx and Tx queues")
Fixes: f8e7fcbfd0b8 ("net/hns3: support flow action of queue region")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
On the Rx path, failures to allocate an mbuf or to process a jumbo
frame were not counted in err_pkts, which makes such problems difficult
to locate. Since the failure comes from mbuf allocation, the rx_nombuf
field is counted as well.
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
When the port is probed, if the eth_from_pcaps() function fails, the
previously opened pcap resources are not released, causing a resource
leak.
The patch fixes the leak caused by exiting through an error branch
during the port probe process.
Fixes: 4c173302c307 ("pcap: add new driver")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The lock pdata->i2c_mutex is not released if the function returns in
the two patched branches, which may lead to a deadlock if the lock is
acquired again.
Bugzilla ID: 777
Fixes: 4ac7516b8b39 ("net/axgbe: add phy init and related APIs")
Cc: stable@dpdk.org
Signed-off-by: Chengfeng Ye <cyeaa@connect.ust.hk>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add macros to simplify printing of a MAC address.
The six bytes of a MAC address are extracted by a macro here, to
improve code readability.
Signed-off-by: Aman Deep Singh <aman.deep.singh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a macro to print the six bytes of a MAC address.
The MAC addresses will be printed in upper-case hexadecimal format.
If there is a specific check for lower-case MAC addresses, the user may
need to adjust such test cases after this patch.
Signed-off-by: Aman Deep Singh <aman.deep.singh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
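
A usage sketch, assuming the macro names introduced by this series
(RTE_ETHER_ADDR_PRT_FMT as the format string and RTE_ETHER_ADDR_BYTES
as the byte-extraction helper):

    #include <stdio.h>
    #include <rte_ether.h>

    /* Print a MAC address without spelling out the six bytes by hand. */
    static void
    print_mac(const struct rte_ether_addr *addr)
    {
        printf("MAC address: " RTE_ETHER_ADDR_PRT_FMT "\n",
               RTE_ETHER_ADDR_BYTES(addr));
    }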
Call xsk_ring_prod__submit() before kick_tx() so that the kernel
consumer sees the updated state of the Tx ring. Otherwise, Tx packets
are stuck in the ring until the next call to af_xdp_tx_zc().
Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
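
A minimal sketch of the ordering, assuming libbpf's <bpf/xsk.h> helper
xsk_ring_prod__submit() and the sendto()-based wakeup that kick_tx()
performs:

    #include <bpf/xsk.h>
    #include <sys/socket.h>
    #include <stdint.h>

    /* Publish the filled Tx descriptors before kicking the kernel,
     * otherwise the kernel consumer may see a stale (empty) ring.
     */
    static void
    submit_then_kick(struct xsk_ring_prod *tx, uint32_t count, int xsk_fd)
    {
        xsk_ring_prod__submit(tx, count); /* make descriptors visible first */
        /* Wake the kernel Tx path; transient errors are ignored here. */
        (void)sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
    }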
After commit "d68249f88266", driver allocates ring groups in
bnxt_alloc_hwrm_rx_ring(). But during port start, driver invokes
bnxt_alloc_hwrm_rx_ring() followed by bnxt_alloc_all_hwrm_ring_grps().
This will cause the FW command failure in bnxt_alloc_all_hwrm_ring_grps()
To fix this, just don't create the ring group if it is already created.
Fixes: 9b63c6fd70e3 ("net/bnxt: support Rx/Tx queue start/stop")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Add a check for an invalid fw_grp_id inside bnxt_hwrm_ring_grp_free().
This prevents an invalid fw_grp_id from being passed to the FW, which
can result in an error.
This fixes the following failure in the "port stop" -> "port start"
sequence:
bnxt_hwrm_ring_grp_free(): error 2:0:00000000:0204
bnxt_hwrm_ring_grp_free(): error 2:0:00000000:0204
Fixes: 9b63c6fd70e3 ("net/bnxt: support Rx/Tx queue start/stop")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
VLAN offload capability may be disabled in the FW. The driver
should not attempt to override or utilize this feature in such
scenarios since it will not work as expected.
Fixes: 0a6d2a720078 ("net/bnxt: get device infos")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In the scalar Rx path, for VLAN packets, the TCI is not saved in
mbuf->vlan_tci; however, the STRIPPED offload flag is set along with
the PKT_RX_VLAN flag.
Fixes: c1b33d40315f ("net/bnxt: use table based mbuf flags handling")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In the VF driver, the ixgbevf_update_stats() implementation obtains
the multicast counter hw_stats->vfmprc, but this counter is not cleared
in the corresponding ixgbevf_dev_stats_reset() interface.
Fixes: abf7275bbaa2 ("ixgbe: move to drivers/net/")
Cc: stable@dpdk.org
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
The len variable, used in the computation of max_pkt_len, could
overflow when storing the result of the following computation:
rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS
Since the mbuf size may be defined with a large value (e.g. 13312) and
IAVF_MAX_CHAINED_RX_BUFFERS is defined as 5, the computation mentioned
above could result in a value bigger than UINT16_MAX.
The result is that jumbo frames do not work properly.
Fixes: 69dd4c3d0898 ("net/avf: enable queue and device")
Cc: stable@dpdk.org
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
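
A minimal sketch of the overflow and its fix; IAVF_MAX_CHAINED_RX_BUFFERS
is redefined here only to keep the example stand-alone:

    #include <stdint.h>
    #include <rte_common.h>

    #define IAVF_MAX_CHAINED_RX_BUFFERS 5 /* value cited above */

    /* Computing the chained-buffer limit in a 16-bit variable can
     * overflow (e.g. 13312 * 5 = 66560 > UINT16_MAX), so use a 32-bit
     * type before clamping against the configured maximum packet length.
     */
    static uint32_t
    compute_max_pkt_len(uint16_t rx_buf_len, uint32_t max_rx_pkt_len)
    {
        uint32_t len = (uint32_t)rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;

        return RTE_MIN(len, max_rx_pkt_len);
    }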
When the DCF configures rx_queues, the rx_queues pointer may go out of
bounds.
This patch extends the range of the check condition to fix the issue.
Fixes: 4b0d391f0eab ("net/ice: add queue config in DCF")
Cc: stable@dpdk.org
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The default case of the switch statement causes a deadlock because it
returns without unlocking the 'flow_ops_lock' lock. Fix it.
Fixes: 0d6ef740e411 ("net/ice: support flow ops thread safe")
Cc: stable@dpdk.org
Signed-off-by: Yu Wenjun <yuwenjun0x@163.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
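
A minimal sketch of the fix pattern, assuming a spinlock-style
'flow_ops_lock' as named above; the key is releasing the lock on every
return path, including the default case:

    #include <errno.h>
    #include <rte_spinlock.h>

    /* Never return from inside the critical section without releasing
     * the lock, including in the switch default case.
     */
    static int
    dispatch_flow_op(rte_spinlock_t *flow_ops_lock, int op)
    {
        int ret;

        rte_spinlock_lock(flow_ops_lock);
        switch (op) {
        case 0:
            ret = 0;        /* a supported operation would be handled here */
            break;
        default:
            ret = -ENOTSUP; /* unsupported: fall through to the unlock */
            break;
        }
        rte_spinlock_unlock(flow_ops_lock);
        return ret;
    }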