Currently, the counter pool needs 512 ext-counter memory slots for the
no-batch counters. That memory is allocated separately in a single chunk
placed behind the 512 basic-counter memory. With this layout it is not
easy to get the ext-counter pointer from the corresponding basic-counter
pointer, and it is also hard to extend the pool with other potential
additional types of counter memory.
So, allocate each ext-counter together with its basic-counter as a
single piece of memory. The same will hold for any further additional
type of counter memory. With one piece of memory containing all the
memory types of one counter, each type can easily be reached by
offsetting from the counter base.
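Below is a minimal sketch of this layout idea, with hypothetical struct
and field names (not the actual mlx5 definitions), showing how the
ext-counter is reached from its basic-counter by offsetting inside the
combined allocation:

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical counter layouts, for illustration only. */
    struct basic_counter { unsigned long hits; unsigned long bytes; };
    struct ext_counter   { unsigned long extra[4]; };

    /* One slot holds every memory type needed by a single counter. */
    struct counter_slot {
            struct basic_counter basic;
            struct ext_counter ext;  /* further types could be appended */
    };

    /* Reach the ext-counter from a basic-counter pointer by offsetting. */
    static inline struct ext_counter *
    ext_of(struct basic_counter *b)
    {
            struct counter_slot *slot = (struct counter_slot *)
                    ((char *)b - offsetof(struct counter_slot, basic));
            return &slot->ext;
    }

    int main(void)
    {
            /* One allocation covers both memory types for 512 counters. */
            struct counter_slot *pool = calloc(512, sizeof(*pool));
            if (pool == NULL)
                    return 1;
            struct ext_counter *e = ext_of(&pool[7].basic);
            (void)e;
            free(pool);
            return 0;
    }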
Signed-off-by: Dong Zhou <dongz@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The PMD creates some default control rules with an RSS action when the
device is not in isolated mode.
However, whether the default control rules need to do RSS should be
controlled by the device configuration, specifically the mq_mode field
of the rxmode configuration.
In other words, RSS is needed for the default rules only when mq_mode is
configured with ETH_MQ_RX_RSS_FLAG set.
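A hedged sketch of that check, using the standard ethdev definitions of
this release (the helper name is illustrative, not the actual mlx5
function):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Default control rules should use RSS only if the application
     * requested RSS through rxmode.mq_mode. */
    static bool
    default_rules_need_rss(const struct rte_eth_dev *dev)
    {
            const struct rte_eth_conf *conf = &dev->data->dev_conf;

            return (conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) != 0;
    }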
Fixes: c64ccc0eca ("mlx5: fix overwritten RSS configuration")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The ULP mark manager originally assumed that zero was an invalid
mark and used it for invalidation and deletion. The mark manager
now supports adding zero as a mark, carries flags for validity and
type, and adds explicit bounds checking instead of relying on the mask.
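A hedged sketch of the idea with hypothetical names (table size, flag
values and entry layout are illustrative, not the actual bnxt ULP
structures):

    #include <stdint.h>
    #include <errno.h>

    #define MARK_TBL_SIZE 1024u

    /* Explicit validity/type flags instead of treating mark == 0 as empty. */
    #define MARK_FLAG_VALID (1u << 0)
    #define MARK_FLAG_GFID  (1u << 1)

    struct mark_entry {
            uint32_t mark;   /* zero is now a perfectly valid mark value */
            uint32_t flags;
    };

    static struct mark_entry mark_tbl[MARK_TBL_SIZE];

    static int
    mark_add(uint32_t idx, uint32_t mark, int is_gfid)
    {
            if (idx >= MARK_TBL_SIZE)  /* explicit bounds check, not a mask */
                    return -EINVAL;
            mark_tbl[idx].mark = mark;
            mark_tbl[idx].flags = MARK_FLAG_VALID |
                                  (is_gfid ? MARK_FLAG_GFID : 0);
            return 0;
    }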
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The current mark handling uses the meta data field of the rxcmp as the
first level check for determining gfid vs lfid. When the meta data is
zero because only the lowest 16 bits of the gfid are set, the cfa code
is incorrectly interpreted as being an lfid. Change the code to look at
the meta format instead of the meta data directly for this determination.
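A hedged sketch of the decision (field and constant names are
illustrative, not the exact rxcmp layout): derive gfid vs lfid from the
meta format, never from whether the meta data happens to be non-zero:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative values; the real driver decodes the rxcmp fields. */
    #define META_FMT_NONE 0
    #define META_FMT_GFID 4  /* hypothetical format meaning GFID mark */

    struct rx_meta {
            uint32_t meta_fmt;   /* says how to interpret cfa_code        */
            uint32_t meta_data;  /* may be zero for a perfectly valid GFID */
            uint32_t cfa_code;
    };

    static bool
    mark_is_gfid(const struct rx_meta *m)
    {
            /* Use the meta format, not a non-zero test on meta_data. */
            return m->meta_fmt == META_FMT_GFID;
    }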
Fixes: b87abb2e55 ("net/bnxt: support marking packet")
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Due to an erratum, the RED algorithm needs to be set to discard instead
of stall on 96XX C0 silicon for two-rate shaping. This workaround is
already handled for a newly created hierarchy but not for dynamic shaper
update cases. This patch therefore applies the workaround for dynamic
shaper updates as well.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
When the outer L4 checksum is detected as bad, both the outer and inner
checksums are marked as bad, so there is no need to explicitly check the
inner L4 checksum in this case.
Outer L4 UDP checksum error => PKT_RX_OUTER_L4_CKSUM_BAD
and PKT_RX_L4_CKSUM_BAD
Inner L4 UDP checksum error => PKT_RX_L4_CKSUM_BAD
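A minimal sketch of that mapping using the standard mbuf offload flags
(the octeontx2 error-code decoding that feeds the two booleans is
omitted):

    #include <stdbool.h>
    #include <rte_mbuf.h>

    static inline void
    set_l4_csum_flags(struct rte_mbuf *m, bool outer_udp_err,
                      bool inner_udp_err)
    {
            if (outer_udp_err) {
                    /* A bad outer L4 checksum marks both levels bad at
                     * once; no separate inner check in this branch. */
                    m->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD |
                                   PKT_RX_L4_CKSUM_BAD;
            } else if (inner_udp_err) {
                    m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
            }
    }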
Fixes: 41fe7a3a11 ("net/octeontx2: offload bad L2/L3/L4 UDP lengths detection")
Signed-off-by: Amit Gupta <agupta3@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
When octeontx_create() is cleaning up, it does not correctly set
the mac_addrs variable to NULL, which will lead to a double free.
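A minimal sketch of the usual pattern for avoiding that double free,
assuming the cleanup path frees the MAC table stored in
eth_dev->data->mac_addrs:

    #include <rte_ethdev.h>
    #include <rte_malloc.h>

    static void
    cleanup_mac_addrs(struct rte_eth_dev *eth_dev)
    {
            rte_free(eth_dev->data->mac_addrs);
            /* Clear the pointer so a later ethdev release path does not
             * free the same memory again. */
            eth_dev->data->mac_addrs = NULL;
    }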
Fixes: 9e399b88ce ("net/octeontx: fix memory leak of MAC address table")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Harman Kalra <hkalra@marvell.com>
Add support for the get firmware version operation.
Get and dump the multi boot image (MBI) version as part of the firmware
version string, along with the management firmware (MFW) version.
Use qede_fw_version_get() for the PMD info logs.
Signed-off-by: Yash Sharma <ysharma@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
A regular memcmp was used to compare two objects of type
`struct rte_pci_addr`.
Due to the structure padding inserted by the compiler, some bytes of the
structure are not initialized even though all the fields are.
Therefore, the comparison may fail even though the PCI addresses are
identical, causing a false failure in probe.
Use the dedicated API to compare two PCI addresses.
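A hedged sketch of the fix: compare with rte_pci_addr_cmp() from
rte_pci.h instead of memcmp() over the whole struct, so padding bytes
cannot influence the result (the wrapper name is illustrative):

    #include <stdbool.h>
    #include <rte_pci.h>

    static bool
    same_pci_device(const struct rte_pci_addr *a, const struct rte_pci_addr *b)
    {
            /* Field-by-field comparison; structure padding is ignored. */
            return rte_pci_addr_cmp(a, b) == 0;
    }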
Fixes: 75dd0ae917 ("vdpa/mlx5: disable RoCE")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Tested-by: Noa Ezra <noae@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In case VIRTIO_F_ORDER_PLATFORM(36) is not negotiated, the frontend and
backend are assumed to be implemented in software, that is, they can run
on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers like rte_smp_r/wmb, rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For the above case, this patch yields even better performance by
replacing the two-way barriers with C11 one-way barriers for the avail
index in the split ring.
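A hedged sketch of the idea for the avail index update (simplified
types; the real code keys off vq->hw->weak_barriers): a C11
store-release replaces a two-way barrier followed by a plain store,
only ordering the prior descriptor writes before the index update:

    #include <stdint.h>
    #include <rte_atomic.h>

    struct split_vq {
            uint16_t weak_barriers;
            volatile uint16_t *avail_idx;  /* shared with the backend */
    };

    static inline void
    vq_update_avail_idx(struct split_vq *vq, uint16_t new_idx)
    {
            if (vq->weak_barriers) {
                    /* One-way barrier: earlier descriptor writes cannot
                     * pass this store of the avail index. */
                    __atomic_store_n((uint16_t *)vq->avail_idx, new_idx,
                                     __ATOMIC_RELEASE);
            } else {
                    rte_cio_wmb();          /* hardware backend case */
                    *vq->avail_idx = new_idx;
            }
    }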
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In case VIRTIO_F_ORDER_PLATFORM(36) is not negotiated, the frontend and
backend are assumed to be implemented in software, that is, they can run
on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers like rte_smp_r/wmb, rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For the above case, this patch yields even better performance by
replacing the two-way barriers with C11 one-way barriers for the used
index in the split ring.
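Correspondingly, a hedged sketch for reading the used index on the
driver side with a C11 load-acquire instead of a load followed by a
two-way read barrier (again keyed off the weak_barriers flag):

    #include <stdint.h>
    #include <rte_atomic.h>

    static inline uint16_t
    vq_load_used_idx(const volatile uint16_t *used_idx, int weak_barriers)
    {
            uint16_t idx;

            if (weak_barriers) {
                    /* One-way barrier: later reads of the used ring are
                     * not hoisted above this load of the used index. */
                    idx = __atomic_load_n((const uint16_t *)used_idx,
                                          __ATOMIC_ACQUIRE);
            } else {
                    idx = *used_idx;
                    rte_cio_rmb();  /* hardware backend case */
            }
            return idx;
    }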
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Since the return value of the '.stats_reset' and '.xstats_reset'
callback functions is int, the relevant callback function should return
a non-zero value when it fails to issue the command to firmware to clear
the statistics.
Fixes: 8839c5e202 ("net/hns3: support device stats")
Cc: stable@dpdk.org
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Currently, an error may occur during the initialization of an hns3 VF
device. The root cause is as follows:
When the following formula is executed during initialization, the
private variable named hw->tqps_num has not yet been obtained from the
PF driver through the mailbox, which later causes a failure when mapping
interrupts and queues.
hw->num_msi = (num_msi > hw->tqps_num + 1) ? hw->tqps_num + 1 : num_msi;
We need to use hw->tqps_num only after it has been correctly assigned.
On the other hand, because the private variable named hw->num_msi, which
represents the number of MSI-X interrupts of the hns3 PF/VF device, is
used in the '.get_reg' ops implementation function to dump all interrupt
related registers, it should be obtained from firmware directly and we'd
better not modify it in the driver.
Fixes: ef2e785c36 ("net/hns3: fix Tx interrupt when enabling Rx interrupt")
Fixes: 02a7b55657 ("net/hns3: support Rx interrupt")
Cc: stable@dpdk.org
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, when the upper level application calls the
rte_eth_dev_configure API function, the hns3 PMD driver sets the VLAN
PVID related configuration to hardware even if the PVID config is not
set in the rte_eth_conf input parameter, which is not reasonable. For
example, if the PVID has been set to 100 by rte_eth_dev_set_vlan_pvid
and the PVID config is not set in rte_eth_conf, rte_eth_dev_configure
will tell the driver to delete PVID 0, which is meaningless.
This patch fixes it to ensure that the driver does not set the VLAN PVID
related configuration to hardware when the PVID config is not set in
rte_eth_conf.
Fixes: 411d23b9ea ("net/hns3: support VLAN")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
The hns3 network engine is built into multiple SoCs, such as Kunpeng 920
and Kunpeng 930. The PCI revision id is 0x21 on Kunpeng 920 and 0x30 on
Kunpeng 930.
This patch reads the PCI revision id to identify the different versions
of the hardware network engine.
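A hedged sketch of reading the revision id from PCI configuration space
with rte_pci_read_config() (0x08 is the standard PCI revision id
register offset; the helper name is illustrative):

    #include <stdint.h>
    #include <rte_bus_pci.h>

    #define PCI_REVISION_ID_OFFSET 0x08  /* standard PCI config offset */

    static int
    get_pci_revision_id(struct rte_pci_device *pci_dev, uint8_t *revision)
    {
            int ret;

            ret = rte_pci_read_config(pci_dev, revision, sizeof(*revision),
                                      PCI_REVISION_ID_OFFSET);
            if (ret != (int)sizeof(*revision))
                    return -1;
            /* e.g. 0x21 for Kunpeng 920, 0x30 for Kunpeng 930 */
            return 0;
    }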
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
This patch adds some logging of the return value when the relevant
function fails and the exception branch is entered.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
When the upper level application calls the rte_eth_tx_burst API function
to send multiple packets at a time in burst mode on the hns3 network
engine, some abnormal conditions can cause the driver to fail to program
the hardware to send the packets correctly.
This patch adds statistics counters for the abnormal errors of the Tx
data path to the extended device statistics. The upper level application
can get them by calling the rte_eth_xstats_get API function (a usage
sketch follows the statistics list below).
Note: when burst mode is used to call the rte_eth_tx_burst API function
to send multiple packets at a time, the driver increments the relevant
error statistics item by one when the first abnormal error is detected
and then exits the loop over the packets of the burst. That is to say,
even if abnormal errors could be detected in several packets of the
burst, the relevant error statistics in the driver are only increased by
one.
The detailed description of the Tx abnormal error statistics items is as
follows:
- TX_OVER_LENGTH_PKT_CNT
  Total number of packets whose length exceeds the HNS3_MAX_FRAME_LEN
  supported by the driver.
- TX_EXCEED_LIMITED_BD_PKT_CNT
  Total number of packets whose required number of BDs exceeds the
  hardware limit.
- TX_EXCEED_LIMITED_BD_PKT_REASSEMBLE_FAIL_CNT
  Total number of packets whose required number of BDs exceeds the
  hardware limit and which also fail to be reassembled.
- TX_UNSUPPORTED_TUNNEL_PKT_CNT
  Total number of packets with an unsupported tunnel type. The
  unsupported tunnel types are vxlan_gpe, gtp, ipip and MPLS-in-UDP
  (a packet with an MPLS-in-UDP RFC 7510 header).
- TX_QUEUE_FULL_CNT
  Total count of cases where the number of available BDs in the current
  BD queue is less than the number of BDs needed to process the packet.
- TX_SHORT_PKT_PAD_FAIL_CNT
  Total count of packets whose length is less than the minimum packet
  size HNS3_MIN_PKT_SIZE and which fail to be padded with zeros.
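As referenced above, a minimal sketch of how an application can read
these extended statistics through the generic xstats API (the hns3
counters appear among the reported entries):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    /* Dump all extended statistics of one port, including the Tx
     * abnormal error counters; error handling is kept minimal. */
    static void
    dump_xstats(uint16_t port_id)
    {
            int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);

            if (n <= 0)
                    return;

            struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
            struct rte_eth_xstat *values = calloc(n, sizeof(*values));

            if (names != NULL && values != NULL &&
                rte_eth_xstats_get_names(port_id, names, n) == n &&
                rte_eth_xstats_get(port_id, values, n) == n) {
                    for (i = 0; i < n; i++)
                            printf("%s: %" PRIu64 "\n",
                                   names[values[i].id].name,
                                   values[i].value);
            }
            free(names);
            free(values);
    }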
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
To send request messages to the data plane threads, the caller invokes
the thread_msg_send_recv() function, which never returns a null
response. Thus, remove the redundant check on the returned response.
Coverity issue: 357717, 357772
Fixes: 70709c78fd ("net/softnic: add command to enable/disable pipeline")
Cc: stable@dpdk.org
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Make local MCDI helper functions static.
Fixes: f0bda0cd68 ("net/sfc/base: add MCDI wrappers for vPort and vSwitch in EVB")
Fixes: ea94d14dbe ("net/sfc/base: provide APIs to configure and reset vPort")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Refactor the code and make it easier to read. It is useful for
understanding the inflight APIs and how the packed ring works. Update
the RST because the packed ring patch has been merged into QEMU master
and the ring_packed parameter has been renamed to packed.
Signed-off-by: Jin Yu <jin.yu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The maximal number of header modifications supported in a single modify
context on the root table cannot be queried from firmware directly. It
is a fixed value of 16 in the latest releases. In the validation stage,
the PMD driver should ensure that no more than 16 header modify actions
exist in a single context.
In some old firmware releases, the supported value is 8. The PMD driver
should try its best to create the flow; firmware will return an error
and refuse to create the flow if the number of actions exceeds the
maximal value.
Fixes: 72a944dba1 ("net/mlx5: fix header modify action validation")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, the indexed memory pool bitmap start address is not
explicitly aligned to the cacheline size. The bitmap initialization
requires the address to be cacheline aligned, so the initialization may
fail if it is not.
Add RTE_CACHE_LINE_ROUNDUP() to the trunk size calculation to make sure
the bitmap offset address starts cacheline aligned.
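A minimal sketch of that calculation (the trunk header and entry layout
are illustrative, not the actual mlx5 ipool structures): round the part
preceding the bitmap up to a cacheline boundary so the bitmap starts
aligned:

    #include <stdint.h>
    #include <stddef.h>
    #include <rte_common.h>

    /* Illustrative trunk header; the bitmap is laid out after the entries. */
    struct ipool_trunk {
            uint32_t idx;
            uint32_t free;
            /* entries and then the bitmap follow in the same allocation */
    };

    static inline size_t
    trunk_bitmap_offset(size_t entry_size, uint32_t entries_per_trunk)
    {
            size_t off = sizeof(struct ipool_trunk) +
                         entry_size * entries_per_trunk;

            /* The bitmap initialization requires a cacheline aligned
             * start address, hence the roundup. */
            return RTE_CACHE_LINE_ROUNDUP(off);
    }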
Fixes: a3cf59f56c ("net/mlx5: add indexed memory pool")
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Tested-by: Lijian Zhang <lijian.zhang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
The assert that checks whether there is enough room for the whole packet
minus the headroom data is written incorrectly. The check should be
negated in order to work properly.
Fixes: bd0d5930bf ("net/mlx5: enable MPRQ multi-stride operations")
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The assert in the dynamic flow metadata handling became wrong after the
fix for the performance degradation: it was meant to check the metadata
mask but was updated to use the metadata offset instead.
Fix this assert and restore proper metadata mask checking.
Fixes: 6c55b622a9 ("net/mlx5: set dynamic flow metadata in Rx queues")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
If Tx zero copy is enabled, the gpa to hpa mapping table is filled in
one entry per guest page. This harms performance when the guest memory
backend uses 2M hugepages, because the table becomes large and lookups
are linear. Now use binary search to find the entry in the mapping
table, while keeping linear search for tables of up to 256 entries.
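A hedged sketch of the lookup (simplified page struct; the real table
keeps guest pages sorted by guest physical address): fall back to linear
search for small tables and binary search otherwise:

    #include <stdint.h>

    #define LINEAR_SEARCH_THRESHOLD 256

    struct guest_page {
            uint64_t gpa;   /* guest physical address of the page */
            uint64_t hpa;   /* host physical address of the page  */
            uint64_t size;  /* page size                          */
    };

    /* Pages are sorted by gpa, which makes binary search valid. */
    static uint64_t
    gpa_to_hpa(const struct guest_page *pages, uint32_t nr_pages,
               uint64_t gpa)
    {
            uint32_t lo = 0, hi = nr_pages, i;

            if (nr_pages < LINEAR_SEARCH_THRESHOLD) {
                    for (i = 0; i < nr_pages; i++) {
                            const struct guest_page *p = &pages[i];

                            if (gpa >= p->gpa && gpa < p->gpa + p->size)
                                    return p->hpa + gpa - p->gpa;
                    }
                    return 0;  /* not found */
            }

            while (lo < hi) {
                    uint32_t mid = lo + (hi - lo) / 2;
                    const struct guest_page *p = &pages[mid];

                    if (gpa < p->gpa)
                            hi = mid;
                    else if (gpa >= p->gpa + p->size)
                            lo = mid + 1;
                    else
                            return p->hpa + gpa - p->gpa;
            }
            return 0;  /* not found */
    }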
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In server mode, virtio-user initializes under the assumption that
vhost-user supports a list of features. However, this can be problematic
when the in_order feature is negotiated but is no longer supported by
vhost-user once dequeue_zero_copy is enabled later.
Add handling for the case where vhost-user enables dequeue_zero_copy as
a client.
Fixes: 64ab701c3d ("vhost: add vhost-user client mode")
Cc: stable@dpdk.org
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yinan Wang <yinan.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Rewrite the vectorized path selection logic. The default setting comes
from the vectorized devarg, then each criterion is checked (a sketch of
the packed ring check follows the lists below).
The packed ring vectorized path needs:
- AVX512F and the required extensions are supported by the compiler and
  the host
- the VERSION_1 and IN_ORDER features are negotiated
- the mergeable feature is not negotiated
- LRO offloading is disabled
The split ring vectorized Rx path needs:
- the mergeable and IN_ORDER features are not negotiated
- the LRO, checksum and VLAN strip offloads are disabled
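A hedged sketch of the packed ring criteria check (the feature bit
numbers are the standard virtio ones; the devarg and offload inputs are
simplified to booleans):

    #include <stdbool.h>
    #include <stdint.h>

    /* Standard virtio feature bit numbers. */
    #define VIRTIO_F_VERSION_1     32
    #define VIRTIO_F_IN_ORDER      35
    #define VIRTIO_NET_F_MRG_RXBUF 15

    #define HAS_FEATURE(f, bit) (((f) >> (bit)) & 1ULL)

    static bool
    packed_vec_path_allowed(uint64_t features, bool vectorized_devarg,
                            bool avx512_ok, bool lro_enabled)
    {
            if (!vectorized_devarg || !avx512_ok)
                    return false;
            if (!HAS_FEATURE(features, VIRTIO_F_VERSION_1) ||
                !HAS_FEATURE(features, VIRTIO_F_IN_ORDER))
                    return false;
            if (HAS_FEATURE(features, VIRTIO_NET_F_MRG_RXBUF))
                    return false;  /* mergeable buffers not supported */
            if (lro_enabled)
                    return false;
            return true;
    }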
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Optimize the packed ring Tx path like the Rx path: split the Tx path
into batch and single Tx functions. The batch function is further
optimized with AVX512 instructions.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Optimize the packed ring Rx path with SIMD instructions. The
optimization approach is much like the vhost one: split the path into
batch and single functions. The batch function is further optimized with
AVX512 instructions.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Move the offload, xmit cleanup and packed xmit enqueue functions to a
header file. These functions will be reused by the packed ring
vectorized path.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add a new devarg for virtio-user device vectorized path selection.
By default, the vectorized path is disabled.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Previously, the virtio split ring vectorized path was enabled by
default. This is not suitable for everyone because that path does not
follow the virtio spec. Add a new devarg for virtio vectorized path
selection. By default, the vectorized path is disabled.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Ring initialization is different when the in-order feature is
negotiated. This action should depend on the negotiated feature bits.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Introduce a free threshold setting for the Rx queue; its default value
is 32. Limit the threshold to a multiple of four, as only the vectorized
packed Rx function will utilize it. The virtio driver will rearm the Rx
queue when more than rx_free_thresh descriptors have been dequeued.
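A hedged sketch of the threshold handling in Rx queue setup (the default
constant and helper are illustrative):

    #include <stdint.h>
    #include <errno.h>

    #define DEFAULT_RX_FREE_THRESH 32

    /* Returns the effective threshold, or a negative errno if the
     * requested value is invalid. */
    static int
    effective_rx_free_thresh(uint16_t nb_desc, uint16_t requested)
    {
            uint16_t thresh = requested ? requested :
                              DEFAULT_RX_FREE_THRESH;

            /* Multiple of four because only the vectorized packed Rx
             * path consumes it; it must also fit in the ring. */
            if (thresh % 4 != 0 || thresh >= nb_desc)
                    return -EINVAL;
            return thresh;
    }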
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The virtq configuration may change when it moves from the disabled state
to the enabled state.
Listen to the state callback even if the device is not configured.
Recreate the virtq when it moves from disabled state to enabled state
and when the device is configured.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In live migration, before logging the virtq, the driver queries the
virtq indexes after moving it to suspend mode.
Separate this logic into a new function, mlx5_vdpa_virtq_stop, as a
preparation for reuse.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
As a preparation for listening to the virtq status before the device is
configured, manage the virtq structures in an array instead of a list.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The RARP packet broadcast flag is synchronized with the rte_atomic_XX
APIs, which use a full barrier (DMB) on aarch64. This patch optimizes it
with C11 one-way barriers.
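A hedged sketch of the idea (simplified; the flag actually lives in the
vhost device structure and is consumed in the dequeue path): a
check-and-clear of the flag using C11 one-way ordering rather than a
full-barrier rte_atomic16 operation:

    #include <stdint.h>
    #include <stdbool.h>

    /* Returns true if a RARP broadcast was requested and atomically
     * claims it, using acquire ordering instead of a full DMB. */
    static inline bool
    rarp_req_take(int16_t *broadcast_rarp)
    {
            int16_t expected = 1;

            if (__atomic_load_n(broadcast_rarp, __ATOMIC_ACQUIRE) == 0)
                    return false;

            return __atomic_compare_exchange_n(broadcast_rarp, &expected,
                                               0, false, __ATOMIC_ACQUIRE,
                                               __ATOMIC_RELAXED);
    }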
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In process_slave_message_reply(), there is a possibility of receiving a
peer close message instead of a real message response.
This patch handles the peer close scenario and reports the correct error
message.
Fixes: a277c71598 ("vhost: refactor code structure")
Cc: stable@dpdk.org
Signed-off-by: Roland Qi <roland.qi@ucloud.cn>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The goal is to make the table more readable.
Mark as partially supported the features that are supported in the
generic Virtio driver but not in the vectorized ones.
Suggested-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Currently, while creating a flow with a meter, the meter id is saved in
the rte flow. While destroying the flow, the meter object is found by
the meter id and then released accordingly. But as the meter id is
configured by the user, when the meter id is set to 0 it makes no sense
to the flow destroy, since 0 means the flow does not have a meter. The
meter object with id 0 is then leaked.
As the meter object is allocated from indexed memory and the index
starts from 1, saving the internally generated index instead of the
user-defined meter id avoids the issue above.
This patch saves the meter index instead of the meter id in the rte
flow.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Gcc 8.3.0 (Debian 10) complains about an uninitialized variable.
[474/2122] Compiling C object
'drivers/a715181@@tmp_rte_common_mlx5@sta/common_mlx5_mlx5_nl.c.o'.
In file included from ../drivers/common/mlx5/mlx5_nl.h:12,
from ../drivers/common/mlx5/mlx5_nl.c:23:
../drivers/common/mlx5/mlx5_nl.c: In function ‘mlx5_nl_enable_roce_get’:
../drivers/common/mlx5/mlx5_common.h:68:2: warning: ‘cur_en’ may be used
uninitialized in this function [-Wmaybe-uninitialized]
rte_log(RTE_LOG_ ## level, \
^~~~~~~
../drivers/common/mlx5/mlx5_nl.c:1560:6: note: ‘cur_en’ was declared here
int cur_en;
^~~~~~
The compiler is correct: this variable would only be set if the kernel
netlink response message contains the DEVLINK parameter that flags
whether RoCE is enabled.
Fixes: fa69eaef5f ("common/mlx5: support ROCE disable through Netlink")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The assert makes sure that 'i' does not exceed the expected value, to
prevent an out-of-bounds access to dbr_bitmap.
The current location of the assert protects the assignment of
dbr_bitmap, but not the access to it.
Move the assert to the correct place, to protect both cases.
Also, use an existing define for the assert.
Fixes: 21cae8580f ("net/mlx5: allocate door-bells via DevX")
Cc: stable@dpdk.org
Signed-off-by: Asaf Penso <asafp@mellanox.com>
Reviewed-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The output flow error parameter is used to indicate the detailed reason
for the failure when calling a rte_flow_* interface. Even though
sometimes the application will not check or use it, the PMD must fill it
in on the failure branch before returning. Otherwise, some dirty value
on the stack or heap will be accessed as a pointer and cause a crash.
In this case, when a port is stopped, inserting a flow from the
application is not allowed, so the detailed error information should be
filled in. If the application needs to check the detailed error reason,
it will get the information without any crash.
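A hedged sketch of filling the error structure on that failure branch
with the standard rte_flow_error_set() helper (the errno value and
message here are illustrative):

    #include <errno.h>
    #include <rte_flow.h>

    static int
    validate_port_started(struct rte_flow_error *error)
    {
            /* Always fill the user-visible error before returning, so
             * the application never reads an uninitialized pointer. */
            return rte_flow_error_set(error, ENOTSUP,
                                      RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
                                      NULL,
                                      "port must be started before "
                                      "creating a flow");
    }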
Fixes: 40b9e7f65f ("net/mlx5: check device status before creating flow")
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
After inserting an offload flow, the software flag information will
be updated based on the flow. When receiving a packet on this queue,
the hardware packet type bits and the software flag will be used
together to get the inner packet and tunnel header type (if any) from
the global packet type table.
When destroying a flow, the corresponding Rx queue flags need to be
updated. All flags should be cleared when closing a device because all
control flows and application flows are no longer valid.
This behavior was missed when implementing the non-cached mode.
Fixes: 8db7e3b698 ("net/mlx5: change operations for non-cached flows")
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>