When a flow is inserted with a META match item, it requires certain
register support.
As part of the validation of such flows, the validation function is
missing a check that the mlx5 driver is not in legacy mode in terms of
extended metadata support (the MLX5_XMETA_MODE_LEGACY flag).
If the driver is in legacy mode, the downstream function that allocates
the register needed for metadata will fail.
The fix explicitly checks the conditions for metadata support in FDB
mode. If the conditions are not met, an error message is issued.
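A minimal sketch of the kind of validation check described above; the
exact condition, field names and error path are assumptions, not the
driver's actual code:

    if (attr->transfer &&
        priv->config.dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
            return rte_flow_error_set(error, ENOTSUP,
                                      RTE_FLOW_ERROR_TYPE_ITEM, item,
                                      "META is not supported in FDB legacy mode");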
Fixes: 9bf26e1318 ("ethdev: move egress metadata to dynamic field")
Cc: stable@dpdk.org
Signed-off-by: Shy Shyman <shys@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch adds a check on the total number of input set bytes, since
the hardware limits the total input set to 32 bytes.
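For illustration, the check reduces to something like the following; the
names are assumptions, not the ice driver's actual code:

    #define MAX_INPUT_SET_BYTES 32 /* hardware limit from the description */

    if (total_input_set_bytes > MAX_INPUT_SET_BYTES)
            return -EINVAL; /* rule exceeds the hardware input set capacity */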
Fixes: 47d460d632 ("net/ice: rework switch filter")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds more specific tunnel types for ipv4/ipv6 packets.
It enables the tcp/udp layer of ipv4/ipv6 as L4 payload, but without
the L4 dst/src port numbers as input set, for switch filter rules.
Fixes: 47d460d632 ("net/ice: rework switch filter")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds a check for the protocol type of IPv4 packets;
the tunnel type needs to be updated when NVGRE is in the payload.
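A minimal sketch of the idea; ICE_SW_TUN_NVGRE and the surrounding
variable names are assumptions, not the driver's exact code:

    /* If the IPv4 next-protocol is GRE, the rule actually targets an
     * NVGRE tunnel, so the tunnel type must be updated. */
    if (ipv4_spec->hdr.next_proto_id == IPPROTO_GRE)
            tun_type = ICE_SW_TUN_NVGRE;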
Fixes: 6bc7628c5e ("net/ice: change default tunnel type")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds more support to the switch parser for PPPoE packets.
It enables parsing of the tcp/udp L4 layer and the ipv4/ipv6 L3 layer
of the PPPoE payload, so the L4 dst/src port and L3 ip address can be
used as input set for PPPoE-related switch filter rules.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Fix the build error in DCF when the CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC
compilation option is enabled. The legacy 16-byte Rx descriptor is not
supported in DCF; if it is enabled, DCF configuration stops.
Fixes: 929eceefab ("net/ice: add queue start and stop for DCF")
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the filter rule needs to use the TCAM method, the driver enables
the TCAM filter switch, otherwise it disables it, which can improve
microcode performance in FDIR scenarios that do not use the TCAM
method.
Fixes: 1fe89aa37f ("net/hinic: add flow director filter")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
When setting promiscuous or allmulticast mode, add multi-thread
resource protection: the patch
"net/bonding: prefer allmulti to promiscuous for LACP"
tries to use allmulti when adding a slave, and the EVS bond driver
also sets promisc from another thread, which may lead to thread
reentry and cause setting promiscuous mode to fail.
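A minimal sketch of the serialization idea; the mutex and the
set_rx_mode() helper are illustrative assumptions, not the hinic
driver's exact symbols:

    static pthread_mutex_t rx_mode_mutex = PTHREAD_MUTEX_INITIALIZER;

    static int
    promiscuous_enable_locked(struct nic_dev *nic, uint32_t rx_mode)
    {
            int err;

            /* Block concurrent promisc/allmulti updates from other threads. */
            pthread_mutex_lock(&rx_mode_mutex);
            err = set_rx_mode(nic, rx_mode);
            pthread_mutex_unlock(&rx_mode_mutex);
            return err;
    }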
Fixes: cb7b6606eb ("net/hinic: add RSS stats and promiscuous ops")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
Add output buffer and output size info for some commands that use the
management sync channel, which can improve debuggability (DFX) when
sending a message fails.
Fixes: 7fcd6b05b9 ("net/hinic/base: support cmdq mechanism")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
Add device arguments to lock Rx/Tx contexts.
Application can either choose to lock Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively per each port.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Because of hardware constraints, the hns3 network engine doesn't support
sending packets with more than eight fragments, and the hns3 pmd driver
tries to reassemble such packets to meet the hardware requirement.
Currently, there are two problems:
1) When the input buffer_len * 8 < pkt_len, the packet cannot be
   reassembled into 8 buffer descriptors. In this case, the packet is
   passed to hardware anyway, which eventually causes a hardware reset.
2) The metadata in the original packet which is required to fill the
   descriptor hasn't been copied into the reassembled packet.
This patch adds a check for 1) to ensure such packets are dropped by the
driver, and copies the useful metadata from the original packet to the
reassembled packet.
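A minimal sketch of check 1); the queue/packet field names are
illustrative, not the driver's exact code:

    #define MAX_TX_BD_PER_PKT 8 /* hardware limit from the description */

    /* A packet that cannot fit into 8 descriptors even after
     * reassembly must be dropped instead of being posted to hardware. */
    if (tx_pkt->pkt_len > (uint32_t)txq->buf_len * MAX_TX_BD_PER_PKT)
            return -EINVAL; /* would otherwise trigger a hardware reset */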
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Currently, the rx_buf_size of the hns3 PMD driver is fixed, and its
value depends on the firmware, which decreases the flexibility of the
PMD. The receive side mbufs are allocated from the mempool given by the
upper application when calling the rte_eth_rx_queue_setup API function,
so the memory chunk used for net device DMA depends on the data room
size of the objects in this mempool. The hns3 PMD driver should set
rx_buf_len no larger than the data room size of the mempool, and the
hardware only supports the following four specifications: 512, 1024,
2148 and 4096.
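A minimal sketch of the implied selection logic: pick the largest
hardware-supported buffer size that fits in the mempool's data room
(variable names are illustrative):

    static const uint16_t hw_buf_sizes[] = {4096, 2148, 1024, 512};
    uint16_t data_room = rte_pktmbuf_data_room_size(mp) -
                         RTE_PKTMBUF_HEADROOM;
    uint16_t rx_buf_len = 0;
    unsigned int i;

    for (i = 0; i < RTE_DIM(hw_buf_sizes); i++) {
            if (data_room >= hw_buf_sizes[i]) {
                    rx_buf_len = hw_buf_sizes[i];
                    break;
            }
    }
    if (rx_buf_len == 0)
            return -EINVAL; /* mempool data room too small */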
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
According to the user manual of the Kunpeng920 SoC, the max allowed
number of segments per whole packet is 63, and the max number of
segments per packet in the datapath is 8.
This patch reports these two segment limitations of the Tx descriptor
to the DPDK framework.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
When the last driver exits abnormally, for example when it is killed by
'kill -9', it may be too late to clear the configuration, causing the
configuration to remain. Therefore, to ensure that the hardware
environment is clean during initialization, the PF driver actively
clears the hardware environment during initialization, including the
PF's and the corresponding VFs' vlan, mac and flow table
configurations, etc.
Fixes: d51867db65 ("net/hns3: add initialization")
Cc: stable@dpdk.org
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
This patch optimizes the code that gets the device capability in the
primary process, and moves the code that gets the PCI revision id, in
order to avoid evaluating the private hw->revision field of the shared
PMD-specific private data in a slave process.
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
This patch adds support for setting the VF PVID via the hns3 PF kernel
ethdev driver on the host, using the
"ip link set <eth num> vf <vf id> vlan <vlan tag>" command.
Because of hardware constraints, the stripped VLAN tag will always be
present in the Rx descriptors, where it should have been dropped when
PVID is enabled, and the PVID will overwrite the outer VLAN tag in the
Tx descriptor. So the hns3 PMD driver needs to change the processing of
VLAN tags in the Tx and Rx paths according to whether PVID is enabled.
1) If the hns3 PF kernel ethdev driver sets the PVID for a VF device
   before the initialization of that VF device, the hns3 VF PMD driver
   should get the PVID state from the PF driver through the mailbox and
   update the related state in the txq and rxq maintained by the hns3
   VF driver to change the Tx and Rx processing.
2) If the hns3 PF kernel ethdev driver sets the PVID for a VF device
   after initialization of that VF device, the PF driver will notify
   the VF driver to update the PVID state. The VF driver will update
   the PVID configuration state immediately to ensure that the VLAN
   processing in Tx and Rx is correct. But in the window period of this
   state transition, packet loss or packets with a wrong VLAN may
   occur.
3) Due to hardware limitations, only two-layer VLAN hardware offload is
   supported in the Tx direction on the hns3 network engine, so QinQ
   insert is no longer supported when PVID is enabled. Furthermore,
   when PVID is enabled, in the following two cases:
   i) packets with more than two VLAN tags;
   ii) packets with one VLAN tag while hardware VLAN insert is enabled;
   the packets will be regarded as abnormal packets and discarded by
   hardware in the Tx direction. For debugging purposes, a validation
   check for these types of packets is added to the '.tx_pkt_prepare'
   ops implementation function, hns3_prep_pkts, to inform users that
   these packets will be discarded (see the sketch below).
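A minimal sketch of the kind of check described in 3); the PVID flag and
the VLAN-insert state are illustrative assumptions, not the driver's
exact code:

    /* In .tx_pkt_prepare: reject packets that hardware would silently
     * discard when PVID is enabled. */
    if (pvid_enabled &&
        ((m->ol_flags & PKT_TX_QINQ_PKT) ||
         ((m->ol_flags & PKT_TX_VLAN_PKT) && hw_vlan_insert_en))) {
            rte_errno = EINVAL; /* tell the user instead of a silent HW drop */
            return i; /* number of packets successfully prepared so far */
    }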
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Currently, the hns3 PMD driver needs to know the PVID configuration
state and do different processing in the 'rx_pkt_burst' ops
implementation function.
This patch adds a member to the driver's struct
hns3_rx_queue/hns3_tx_queue to indicate the PVID configuration status,
so there is no need to access other data structures in the
'rx_pkt_burst' ops implementation, avoiding the performance loss caused
by extra cache misses.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
This patch adds support for LRO offload in the hns3 PMD driver.
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
This patch adds support for the symmetric RSS algorithm.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
This fix addresses memory leak issues resulting from triggering a
sudden driver reset, which does not allow us to follow our normal
removal flows for SW XLT entries for advanced features.
- Add a call to destroy flow profile locks when clearing SW XLT tables.
- Extraction sequence entries were not correctly cleared previously,
  which could cause ownership conflicts for repeated reset-replay calls.
Fixes: 969890d505 ("net/ice/base: enable clearing of HW tables")
Cc: stable@dpdk.org
Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The ice_parse_caps function is used to convert the capability block data
coming from firmware into a structured format used by other parts of the
code.
The current implementation directly updates the hw->func_caps and
hw->dev_caps structures. It is directly called from within
ice_aq_discover_caps. This causes the discover_caps function to have the
side effect of modifying the hw capability structures, which is not
intuitive.
Split this function into ice_parse_dev_caps and ice_parse_func_caps.
These functions will take a pointer to the dev_caps and func_caps
respectively. Also create an ice_parse_common_caps for sharing the
capability logic that is common to device and function.
Doing so enables a future refactor to allow reading and parsing
capabilities into a local caps structure instead of modifying the
members of the hw structure directly.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The current implementation for reading device and function capabilities
from firmware, ice_aq_discover_caps, has potentially undesirable
side effects.
ice_aq_discover_caps calls ice_parse_caps, resulting in overwriting the
capabilities stored in the hw structure. This is ok during
initialization, but means that code which wants to read the capabilities
after initialization cannot use ice_aq_discover_caps without being
careful of the side effects.
Factor out the AQ command logic into a new ice_aq_list_caps function.
This will be used by the ice_aq_discover_caps function.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Remove an unused macro and function.
Declare functions with no external references as static.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
To implement a FW workaround for LFC, a set_local_mib must be
performed after every link up event. For systems that do not
have DCB configured, we need to move the function
ice_aq_set_lldp_mib() from the DCB specific ice_dcb.c to
ice_common.c so that the driver always has access to this AQ
command.
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Cleanup code style issue reported by kernel checkpatch.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
An IP header combined with a GTP-U header should be regarded as an
inner layer for RSS, otherwise it messes up the field vector between an
IPv4 rule and an IPv6 rule, e.g.:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu / \
gtpu_psc / ipv4 / udp / end actions rss types ipv4-udp end key_len \
0 queues end / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu / \
gtpu_psc / ipv6 / udp / end actions rss types ipv6-udp end key_len \
0 queues end / end
Fixes: b7d34ccc47 ("net/ice/base: packet encapsulation for RSS")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Support redirecting a switch rule when its action is to a VSI list.
Fixes: 397b4b3c50 ("net/ice: enable flow redirect on switch")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch introduces the OS-specific functions for flow action
create and destroy operations.
In existing implementation, the functions to create flow actions
return a pointer to the created action object.
The new OS specific functions to create flow actions return 0 on
success, and (-1) on failure.
On success, a pointer to the created action object is returned
using an additional parameter.
On failure errno is set.
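A sketch of the new return convention with an illustrative function
name; the actual mlx5_flow_os function names and bodies may differ:

    int
    mlx5_flow_os_create_flow_action_xxx(void *ctx, void **action)
    {
            *action = old_style_create_action_xxx(ctx);
            if (*action == NULL)
                    return -1; /* errno is set by the failed creator */
            return 0;
    }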
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch introduces the OS-specific functions for flow create and
flow destroy operations.
In existing implementation, the functions to create objects
(flow/table/matcher) return a pointer to the created object.
The functions to destroy objects return 0 on success and errno on
failure.
The new OS specific functions to create objects return 0 on success,
and (-1) on failure.
On success, a pointer to the created object is returned using an
additional parameter.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
In the current implementation, the flow type (DV/Verbs) is selected
using the dedicated function flow_get_drv_type().
This patch adds an OS-specific function, mlx5_flow_os_get_type(), to
allow OS-specific flow type selection.
The new function is called by flow_get_drv_type(), and if it returns a
valid value (DV/Verbs) no more logic is required.
Otherwise the existing logic is executed.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch introduces the first OS-specific utility functions, for use
by the flow engine in different OS implementations.
The first utility functions are:
bool mlx5_flow_os_item_supported(item)
bool mlx5_flow_os_action_supported(action)
They are implemented to check OS specific support for different
item types and action types.
New header file is added:
drivers/net/mlx5/linux/mlx5_flow_os.h
This file contains the utility functions mentioned above for Linux OS.
At this stage they are implemented as static inline, for efficiency,
and always return true.
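Per the description, a minimal sketch of the Linux variants; the
parameter types are assumptions:

    static inline bool
    mlx5_flow_os_item_supported(int item)
    {
            RTE_SET_USED(item);
            return true; /* all item types supported on Linux */
    }

    static inline bool
    mlx5_flow_os_action_supported(uint64_t action)
    {
            RTE_SET_USED(action);
            return true; /* all action types supported on Linux */
    }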
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
As part of the effort to support DPDK on Windows and other OS,
rename 'verbs_action' to the generic name 'action'.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
As part of the effort to support DPDK on Windows and other OS,
rename from IB related name to generic name.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch moves the RSS initialization from dev start to dev
configure, to fix the issue that the RSS redirection table cannot be
kept after restarting the port.
Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Cc: stable@dpdk.org
Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The question around getting rid of the assignments has lived long
enough; if they have not been needed until now, we can drop them.
Fixes: 39bca0ed99 ("ixgbe: DCB in base driver")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This is observed with experimental gcc 11; although older gcc versions
don't complain about it, the issue seems to be a valid one.
gcc version 11.0.0 20200621 (experimental) (GCC)
Build error:
.../drivers/net/iavf/iavf_ethdev.c: In function ‘iavf_dev_link_update’:
.../drivers/net/iavf/iavf_ethdev.c:641:6:
error: ‘new_link’ is used uninitialized [-Werror=uninitialized]
641 | if (rte_atomic64_cmpset((uint64_t *)&dev->data->dev_link,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
642 | *(uint64_t *)&dev->data->dev_link,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
643 | *(uint64_t *)&new_link) == 0)
| ~~~~~~~~~~~~~~~~~~~~~~~
.../drivers/net/iavf/iavf_ethdev.c:596:22:
note: ‘new_link’ declared here
596 | struct rte_eth_link new_link;
| ^~~~~~~~
cc1: all warnings being treated as error
All fields of the 'new_link' struct are already set in the function, so
the 'uninitialized' warning is hard to grasp at first. It results from
the combination of alignment and bitfield usage in the struct.
The definition of the struct is:
struct rte_eth_link {
uint32_t link_speed; /**< ETH_SPEED_NUM_ */
uint16_t link_duplex : 1; /**< ETH_LINK_[HALF/FULL]_DUPLEX */
uint16_t link_autoneg : 1; /**< ETH_LINK_[AUTONEG/FIXED] */
uint16_t link_status : 1; /**< ETH_LINK_[DOWN/UP] */
} __rte_aligned(8); /**< aligned for atomic64 read/write */
Overall, the size of 'struct rte_eth_link' is 64 bits, but the function
only sets 35 bits of it, because only 3 bits of the 16-bit variable are
used.
When the struct is cast to 'uint64_t' for 'rte_atomic64_cmpset', the
upper 29 bits are used without initialization.
To fix the uninitialized usage, memset the variable 'new_link' before
using it.
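A minimal sketch of the fix, as described above:

    struct rte_eth_link new_link;

    memset(&new_link, 0, sizeof(new_link)); /* clear padding and unset bits */
    /* ... then fill link_speed/link_duplex/link_autoneg/link_status ... */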
Fixes: 48de41ca11 ("net/avf: enable link status update")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The kernel driver reads EEPROM data from flash, but DPDK reads it from
shadow RAM. This patch fixes the issue by changing the method to get
EEPROM data from flash.
Fixes: 68a1ab82ad ("net/ice: speed up to retrieve EEPROM")
Cc: stable@dpdk.org
Signed-off-by: Shougang Wang <shougangx.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The ixgbe x553 (IXGBE_DEV_ID_X550EM_A_1G_T) supports the 10M link
speed, so add the link speed info for 10Mb/s.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Add support to allow the original VF actions in DCF.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Now that all libraries have a single version, we can drop the empty
stable blocks that had been added when moving symbols from stable to
internal ABI.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Introduce the RTE_LOG_REGISTER macro to avoid the code duplication
in the logtype registration process.
It is a wrapper macro for declaring the logtype, registering it and
setting its level in the constructor context.
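For illustration, a registration then reduces to a single line; the
logtype and component name here are hypothetical:

    /* Declares 'dummy_logtype', registers "pmd.net.dummy" and sets its
     * default level to NOTICE from a constructor. */
    RTE_LOG_REGISTER(dummy_logtype, pmd.net.dummy, NOTICE);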
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Sachin Saxena <sachin.saxena@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Some compilers raise an error when declaring a variable
in the middle of a function. This is a C99 allowance.
Even if DPDK switches globally to C99 or C11 standard,
the coding rules are for declarations at the beginning
of a block:
http://doc.dpdk.org/guides/contributing/coding_style.html#local-variables
This coding style is enforced by adding a check of
the common patterns like "for (int i;"
The occurrences of the checked pattern are fixed:
'for *(\(char\|u\?int\|unsigned\|s\?size_t\)'
In the file dpaa2_sparser.c, the fix is to remove the unused macros.
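For illustration, the rejected pattern versus the required style (the
loop body is hypothetical):

    /* Rejected by the new check: declaration inside the for statement. */
    for (int i = 0; i < n; i++)
            process(i);

    /* Required: declaration at the beginning of the block. */
    int i;

    for (i = 0; i < n; i++)
            process(i);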
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
This reverts commit 489e0b5b33.
The ring used in copy mode should be multi-producer multi-consumer
because enqueues and dequeues to the ring are performed on both the rx
and tx paths, which can be running on different threads.
Fixes: 489e0b5b33 ("net/af_xdp: use single producer/consumer ring")
Cc: stable@dpdk.org
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
This commit makes some changes to the AF_XDP PMD in an effort to improve
its packet loss characteristics.
1. In the case of failed transmission due to inability to reserve a tx
descriptor, the PMD now pulls from the completion ring, issues a
syscall in which the kernel attempts to complete outstanding tx
operations, then tries to reserve the tx descriptor again. Prior to
this we dropped the packet after the syscall and didn't try to
re-reserve.
2. During completion ring cleanup, always pull as many entries as
possible from the ring as opposed to the batch size or just how many
packets we're going to attempt to send. Keeping the completion ring
emptier should reduce failed transmissions in the kernel, as the
kernel requires space in the completion ring to successfully tx.
3. Size the fill ring as twice the receive ring size which may help
reduce allocation failures in the driver.
4. Emulate a tx_free_thresh - when the number of available entries in
the completion ring rises above this, we pull from it. The threshold
is set to 1k entries (see the sketch below).
With these changes, a benchmark which measured the packet rate at which
0.01% packet loss could be reached improved from ~0.1Gbps to ~3Gbps.
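A minimal sketch of point 4; the threshold constant and helpers are
illustrative assumptions, not the PMD's exact code:

    #define TX_FREE_THRESH 1024 /* the "1k entries" from the description */

    /* After a send attempt: once the completion ring has accumulated
     * enough entries, drain it so the kernel always has room to tx. */
    if (completion_ring_avail(txq) >= TX_FREE_THRESH)
            pull_completion_ring(txq);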
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
The local variable 'hw' (struct virtio_hw *) already points to vq->hw,
so it is better to use hw instead of vq->hw in the subsequent code.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The mlx5_check_vec_rx_support function in the mlx5_rxtx_vec.c file
traverses the Rx queues array in a loop. Similarly, the mlx5_mprq_enabled
function in the mlx5_rxq.c file traverses the Rx queues array in a loop.
In both cases, the iterator of the loop is called i and the variable
representing the array size is called rxqs_n.
The i variable is of uint16_t type while the rxqs_n variable is of
unsigned int type. The range of rxqs_n is much larger than the number of
iterations allowed by the type of i; theoretically, the value of rxqs_n
may be greater than can be represented by 16 bits, in which case the
loop will never end.
Change the type of i to uint32_t.
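A minimal sketch of the bug pattern, with illustrative context:

    unsigned int rxqs_n = priv->rxqs_n; /* may exceed UINT16_MAX */
    uint16_t i;                         /* before the fix: wraps back to 0 */

    for (i = 0; i < rxqs_n; ++i)  /* never ends if rxqs_n > UINT16_MAX */
            check_queue(i);

    /* After the fix: 'uint32_t i;' lets the comparison reach rxqs_n. */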
Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Fixes: 6cb559d67b ("net/mlx5: add vectorized Rx/Tx burst for x86")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>