The container in the action configuration is U32,
but the ID is U16, and an overflow check is missing.
Fixes: 1fb65e4dae ("net/sfc: support flow action port ID in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Take into account VLAN presence fields in items ETH and VLAN.
Provided that the item ETH does not match on the EtherType,
the pattern behaviour will be as follows:
- ETH (mask->has_vlan = 0) | IPv4 = match both tagged and untagged;
- ETH (mask->has_vlan = 1) | IPv4 = match as per spec->has_vlan;
- ETH (mask->has_vlan = 0) | VLAN | IPv4 = match only tagged.
Similar logic applies to double tagging.
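For illustration only, a hedged sketch (not part of the patch) of a pattern that matches strictly per spec->has_vlan using the rte_flow API:
#include <rte_flow.h>

/* Matching controlled by has_vlan: with mask->has_vlan = 1, the rule
 * follows spec->has_vlan (here: only VLAN-tagged IPv4 packets). */
static struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
static struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };
static struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};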
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
This patch adds default RSS support for IPv4 and IPv6 fragment packets.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The rte_eth_devices array is not in shared memory, so it should not be
referenced by i40e_adapter, which is shared by the primary and secondary
processes. Any process that sets i40e_adapter->eth_dev will corrupt
another process's context.
This patch removes the field "eth_dev" from i40e_adapter.
Now, when the data paths try to access the rte_eth_dev_data instance,
they should replace adapter->eth_dev->data with adapter->pf.dev_data.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
For the i40e vector Tx path, even if the Tx offload is set to mbuf
fast-free mode, no mbuf fast-free operations are executed. To fix this,
add mbuf fast-free support to the vector Tx path.
Furthermore, for the i40e vector Tx path, mbuf fast-free mode means
that, per queue, all mbufs come from the same mempool and have
refcnt = 1. Thus the buffers can be freed in bulk when mbuf fast-free
mode is enabled.
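A hedged sketch of the bulk-free idea (the helper name is an assumption, not the driver's actual code):
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* With mbuf fast free, all mbufs of a queue come from one mempool and
 * have refcnt == 1, so completed mbufs can be returned in one bulk call. */
static inline void
tx_fast_free_bulk(struct rte_mbuf **txep, uint16_t n)
{
        if (n > 0)
                rte_mempool_put_bulk(txep[0]->pool, (void **)txep, n);
}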
For the vector path on Arm platforms:
In n1sdp, performance is improved by 18.4%;
In thunderx2, performance is improved by 23%.
For the vector path on x86 platforms:
No performance changes.
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
For the i40e scalar Tx path, mbuf fast-free mode means that, per queue,
all mbufs come from the same mempool and have refcnt = 1. Thus the
buffers can be freed in bulk when mbuf fast-free mode is enabled.
Following are the test results with this patch:
MRR L3FWD Test:
two ports & bi-directional flows & one core
RX API: i40e_recv_pkts_bulk_alloc
TX API: i40e_xmit_pkts_simple
ring_descs_size = 1024;
Ring_I40E_TX_MAX_FREE_SZ = 64;
tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;
For the scalar path on Arm platforms with the default 'tx_rs_thresh':
In n1sdp, performance is improved by 7.9%;
In thunderx2, performance is improved by 7.6%.
For the scalar path on x86 platforms with the default 'tx_rs_thresh':
Performance is improved by 4.7%.
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The HW debug mask is always zero, so the user cannot enable the related
debug functions such as ICE_DBG_XXX. Add the devarg 'hw_debug_mask' to
set the debug mask for log output at runtime.
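A hypothetical usage example (the PCI address and mask value are placeholders):
dpdk-testpmd -a 0000:18:00.0,hw_debug_mask=0x80 -- -i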
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Currently, there is a conflict error when running
the following commands:
1. flow create 0 ingress
pattern eth / ipv4 / udp src is 32 / end
actions queue index 2 / end
2. flow destroy 0 rule 0
3. flow create 0 ingress
pattern eth / ipv4 / udp dst is 32 / end
actions queue index 2 / end
This patch fixes the input set conflict issue.
Fixes: 42044b69c6 ("net/i40e: support input set selection for FDIR")
Fixes: 4a072ad434 ("net/i40e: fix flow director config after flow validate")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Lingli Chen <linglix.chen@intel.com>
Add check in the Tx packet preparation function, to guarantee that the
packet with specific user priority is distributed to the correct Tx
queue according to the configured Tx queue TC mapping.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds support for the VF to configure ETS-based Tx QoS,
including querying the current QoS configuration from the PF and
configuring the queue TC mapping. PF QoS is configured in advance and
the queried info is provided to the user for future usage. VF queues
are mapped to different TCs in the PF through virtchnl.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch supports the ETS-based QoS configuration. It enables the DCF
to configure bandwidth limits for each VF VSI of different TCs. A
hierarchical scheduler tree is built with port, TC and VSI nodes.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the link status changes, the DCF will receive a virtchnl PF event
message. Add support to handle this event, change the link status and
update the link info.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
In the adminq command query port ETS function, the root node teid is
needed. However, for DCF, the root node is not initialized, which
causes an error when the variable is referenced. This patch first
checks whether the root node is available or not.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a specific Rx AVX2 path.
This path supports HW offload features such as checksum, VLAN stripping
and RSS hash, and is chosen automatically according to the
configuration.
'inline' is used so that the duplicated code is generated by the
compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Add a specific Tx AVX2 path.
This path supports HW offload features such as checksum insertion and
VLAN insertion, and is chosen automatically according to the
configuration.
'inline' is used so that the duplicated code is generated by the
compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Since each PF does not share the same structure space, the first
mask value should start at 0 instead of hw->pf_id * per_pf to avoid
address overflow. Otherwise, address space will overlap when
masks.first + masks.count > ICE_PROF_MASK_COUNT, and it may lead to an
unexpected variable assignment, which causes a segmentation fault.
Fixes: 9467486f17 ("net/ice/base: enable masking for RSS and FD field vectors")
Cc: stable@dpdk.org
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
i40e can support the following RSS hash function types:
default/toeplitz, symmetric toeplitz, and simple_xor. However, when the
filter engine parses the pattern action, it only supports symmetric
toeplitz and default.
Add simple_xor and toeplitz hash function support when parsing the
pattern action.
Fixes: ef4c16fd91 ("net/i40e: refactor RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The len variable, used in the computation of max_pkt_len, could
overflow if used to store the result of the following computation:
ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len
Since the mbuf size can be defined with a large value (e.g. 13312) and
ICE_SUPPORT_CHAIN_NUM is defined as 5, the computation mentioned above
could potentially result in a value bigger than MAX_USHORT.
The result is that jumbo frames will not work properly.
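For example, assuming an mbuf buffer length of 13312 bytes: 5 * 13312 = 66560, which is larger than 65535 (MAX_USHORT), so a 16-bit len would wrap around.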
Fixes: 1b009275e2 ("net/ice: add Rx queue init in DCF")
Cc: stable@dpdk.org
Signed-off-by: Tudor Cornea <tudor.cornea@keysight.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The flags use_avx2 and use_avx512 are defined as local variables; they
are not visible to the secondary process, so the wrong data path may be
selected. Fix the issue by moving them into struct i40e_adapter.
Fixes: 6ada10deac ("net/i40e: remove devarg use-latest-supported-vec")
Fixes: e6a6a13891 ("net/i40e: add AVX512 vector path")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Support AVF RSS for the inner header of GRE tunnel packets. It supports
RSS based on the inner IP src + dst addresses and TCP/UDP src + dst
ports.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds IPV4/IPV6 SRC/DST input set for ESP/NAT_T_ESP to
support outer IP match.
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds IPV4/IPV6 SRC/DST input set for ESP to support
outer IP match.
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
mz is known to be NULL. Do not use it to print a memzone name.
Fixes: 242e18c056 ("net/octeontx_ep: add Rx queue setup and release")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Enable PTP offload in the vector Tx burst function. Since we can
no longer use a single LMT line for a burst of 4, split the LMT
into two and transmit twice.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Enable PTP offload in vector Rx burst function, use vector path
for processing mbufs and finally switch to scalar when extracting
timestamp.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add a new callback for reading the link status. The PF can read its
link status and forward it to the VF once it comes up.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Currently, the LSO formats set up initially are expected to be
compile-time constants starting from 0.
Change the logic in the slow and fast paths so that LSO format indexes
are determined only at runtime.
Fixes: 3b635472a9 ("net/octeontx2: support TSO offload")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
CN96xx and CN98xx have 4096 and 16384 MCAM entries respectively.
Aligning the code with the same numbers.
Fixes: 092b383418 ("net/octeontx2: add flow init and fini")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for DEV_TX_OFFLOAD_MBUF_FAST_FREE for inline IPsec path
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
This patch adds multicast filter support for the cn9k and cn10k
platforms.
The CGX DMAC filter table (32 entries) is divided among all LMACs
connected to it, i.e. if a CGX has 4 LMACs then each LMAC can have
up to 8 filters; if a CGX has 1 LMAC then it can have up to 32
filters.
The above-mentioned filter table is used to install unicast and
multicast DMAC address filters. Unicast filters are installed via the
rte_eth_dev_mac_addr_add API while multicast filters are installed
via the rte_eth_dev_set_mc_addr_list API.
So in total, the supported MAC filters are equal to the DMAC filters
plus the multicast filters.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Once the PTP status is changed at the HW, i.e. enabled/disabled, it is
propagated to the user via a registered callback.
So the corresponding callback is registered to get the PTP status.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Adding initial version of rte_flow support for cnxk family device.
Supported rte_flow ops are flow_validate, flow_create, flow_destroy,
flow_flush, flow_query, flow_isolate.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Adding support to configure NPC on device initialization. This involves
reading the MKEX and initializing the necessary data.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Once mbufs are transmitted, they are freed by the HW; no mbufs are
accumulated as pending.
Hence the operation is a NOP for the cnxk platform.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
CN9K and CN10K support platform specific mempool ops.
This patch implements API to validate whether given mempool
ops is supported or not.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
An application may choose to enable/disable interrupts on Rx queues
so that it can adapt its processing when no packets are available on
the queues for a longer period.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Add device start and stop operation callbacks for
CN9K and CN10K. Device stop is common for both platforms,
while device start has a platform-dependent portion where
the platform-specific offload flags are recomputed and
the right Rx/Tx burst function is chosen.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add support for packet type lookup on Rx to translate HW
specific types to RTE_PTYPE_* defines
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add Tx queue setup and release for CN9K and CN10K.
Release is common while setup is platform dependent due
to differences in fast path Tx queue structures.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add Rx queue setup and release op for CN9K and CN10K
SoC. Release is completely common while setup is platform
dependent due to fast path Rx queue structure variation.
Fastpath is platform dependent partly due to core cacheline
size difference.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device configuration op for CN9K and CN10K. Most of the
device configuration is common between two platforms except for
some supported offloads.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the various devargs command line argument parsing functions
supported by CN9K and CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add platform specific probe and remove callbacks for CN9K
and CN10K which use common probe and remove functions.
Register ethdev driver for CN9K and CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add build infrastructure and common probe and remove for cnxk driver
which is used by both CN10K and CN9K SoC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add common devargs key definition for "bus", "class" and "driver".
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Metadata were stored in the CPU order (little-endian format on x86),
while all the packet header fields are stored in the network order.
That caused wrong results whenever we tried to use metadata value
in the modify_field action: bytes were swapped as a result.
Convert the metadata value into big-endian format before storing it
in the Mellanox NIC to achieve consistent behaviour.
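As a minimal sketch (illustrative only, not the PMD code), the conversion amounts to:
#include <stdint.h>
#include <rte_byteorder.h>

/* Metadata arrives in CPU order; convert it to big-endian (network
 * order) before it is programmed into the NIC. */
static rte_be32_t
meta_to_be(uint32_t meta_cpu)
{
        return rte_cpu_to_be_32(meta_cpu);
}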
Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
MAC addresses are split into 2 parts inside Mellanox NIC:
bits 0-15 are separate from bits 16-47. That makes a copy
from another packet field tricky because any other field
is aligned to 32 bits, not 16. This causes unexpected
results when using the MODIFY_FIELD action with MAC addresses.
Track crossing MAC addresses boundary and arrange a proper
order for the MODIFY_FIELD action involving MAC addresses.
Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A flow rule must not include multiple tunnel layers.
An attempt to create such a rule, for example:
testpmd> flow create .../ vxlan / eth / ipv4 proto is 4 / end <actions>
results in an unclear error.
In the current implementation there is a check for
multiple IPIP tunnels, but not for a combination of IPIP
and a different kind of tunnel, such as VXLAN. The fix
is to enhance the above check to use MLX5_FLOW_LAYER_TUNNEL
that consists of all the tunnel masks. The error message
will be "multiple tunnel not supported".
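Conceptually, the enhanced check looks like the following sketch (the item_flags bookkeeping is assumed, not copied from the PMD):
#include <errno.h>
#include <stdint.h>
#include <rte_flow.h>

/* Reject any second tunnel layer by testing the combined tunnel mask
 * (MLX5_FLOW_LAYER_TUNNEL) instead of only the IPIP bit. */
static int
check_single_tunnel(uint64_t item_flags, uint64_t tunnel_layer_mask,
                    const struct rte_flow_item *item,
                    struct rte_flow_error *error)
{
        if (item_flags & tunnel_layer_mask)
                return rte_flow_error_set(error, ENOTSUP,
                                          RTE_FLOW_ERROR_TYPE_ITEM, item,
                                          "multiple tunnel not supported");
        return 0;
}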
Fixes: 5e33bebdd8 ("net/mlx5: support IP-in-IP tunnel")
Cc: stable@dpdk.org
Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The timestamp format was not configured correctly for the
receiving queues created via DevX calls. It caused non-UTC
timestamps in CQEs for real time configurations.
Fixes: d61381ad46 ("net/mlx5: support timestamp format")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
New kernels might add the switch_id attribute to the Netlink replies,
and this caused wrong recognition of the E-Switch presence. A single
uplink device was erroneously recognized as master, which caused the
extending match for the source vport index on all installed flows,
including the default ones, and added extra hops in the steering
engine, affecting the maximal throughput packet rate.
An extra check for the new device name format (which implies a new
kernel) and for the device being the only one is added. If this check
succeeds, the E-Switch presence is considered wrongly detected and is
overridden.
Fixes: 30a86157f6 ("net/mlx5: support PF representor")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When a counter is destroyed and used for aging action, the driver should
remove the counter object from the age-out list if it is there.
The counter memory of the list entry and of the counter shared
information is shared because, currently, a shared counter cannot be
used for aging.
When the support for the counter action in the action handle API was
added, the counter shared information was reused and moved to be used
also for the non-shared case. Wrongly, it is used for the aging case
too.
Remove the usage of shared information in case of aging.
Fixes: f3191849f2 ("net/mlx5: support flow count action handle")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
When an error appears in the policy creation,
the ID mapping between the user policy ID and
the driver policy ID is skipped.
Wrongly, the driver tried to clean the mapping in
this case, which caused an error.
Skip the clearance in this case.
Fixes: afb4aa4f12 ("net/mlx5: support meter policy operations")
Cc: stable@dpdk.org
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The modify field implementation in the mlx5 driver has a check to
prevent a copy from a field to the same field. But the level
is not taken into account, which prevents a copy between different
tags. Check the level and allow a copy from one tag to another.
Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Flow rule pattern may be implicitly expanded by the PMD if the rule
has RSS flow action. The expansion adds network headers to the
original pattern. The new pattern lists all network levels that
participate in the rule RSS action.
The patch fixes expanded pattern for cases when original pattern
included meta items like MARK, TAG, META.
Fixes: c7870bfe09 ("ethdev: move RSS expansion code to mlx5 driver")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
'dev_gen' is a variable to trigger all cores to flush their local caches
once the global MR cache has been rebuilt.
This is because the MR cache's R/W lock maintains synchronization
between threads:
1. The ordering of the dev_gen and global cache updates inside the
lock-protected section does not matter, because other threads cannot
take the lock until the global cache has been updated. Thus, on an
out-of-order platform, even if other agents observe the updated dev_gen
before the global cache is updated, they still have to wait for the
lock. As a result, it is unnecessary to add a wmb between rebuilding
the global cache and updating dev_gen to keep the memory store order.
2. The Store-Release of the unlock provides the implicit wmb at the
level visible by software. This makes 'rebuilding the global cache' and
'updating dev_gen' be observed before the local_cache starts to be
updated by other agents. Thus, the wmb after 'updating dev_gen' can be
removed, as sketched below.
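A conceptual sketch of this locking argument, with hypothetical names (not the PMD's code):
#include <stdint.h>
#include <rte_rwlock.h>

static rte_rwlock_t mr_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t dev_gen;

static void
mr_global_cache_rebuild(void)
{
        rte_rwlock_write_lock(&mr_lock);
        /* ... rebuild the global MR cache here ... */
        dev_gen++;                         /* no rte_wmb() needed before this */
        rte_rwlock_write_unlock(&mr_lock); /* release publishes both updates */
}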
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
'dev_gen' is a variable to trigger all cores to flush their local caches
once the global MR cache has been rebuilt.
This is because the MR cache's R/W lock maintains synchronization
between threads:
1. The ordering of the dev_gen and global cache updates inside the
lock-protected section does not matter, because other threads cannot
take the lock until the global cache has been updated. Thus, on an
out-of-order platform, even if other agents observe the updated dev_gen
before the global cache is updated, they still have to wait for the
lock. As a result, it is unnecessary to add a wmb between rebuilding
the global cache and updating dev_gen to keep the memory store order.
2. The Store-Release of the unlock provides the implicit wmb at the
level visible by software. This makes 'rebuilding the global cache' and
'updating dev_gen' be observed before the local_cache starts to be
updated by other agents. Thus, the wmb after 'updating dev_gen' can be
removed.
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch uses the new device config ops to get and set
the MAC address if supported.
If a valid MAC address is passed as devarg of the
Virtio-user PMD, the driver will try to store it in the
device config space. Otherwise the one provided in
the device config space will be used, if available.
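A hypothetical virtio-user invocation passing the MAC devarg (socket path and address are placeholders):
--vdev=virtio_user0,path=/dev/vhost-vdpa-0,mac=00:11:22:33:44:55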
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch introduces two virtio-user callbacks to get
and set the device's config, and implements them for vDPA
backends.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch is a preliminary rework to add support for getting
and setting the device's config space.
In order to get or set a device config such as its MAC address,
we need to know whether the device itself supports the feature,
or whether it is emulated by the frontend.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Restore the original code, where VHOST_SET_FEATURES is applied to
all vhostfds of the device.
Fixes: cc0151b34d ("net/virtio: add virtio-user features ops")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
IPv4 and IPv6 fragment ptypes are supposed to be separated from IP
other ptypes. New bitmaps for IP fragment ptypes were created, but the
IP fragment ptypes were not deleted from the previous non-frag bitmaps,
which will cause conflicts. This patch removes IP fragment ptypes from
the non-frag bitmaps.
Fixes: 8434528175 ("net/ice/base: support IP fragment RSS and FDIR")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The newly allocated mbuf should be written back to the SW
ring.
Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Fixes: b8b4c54ef9 ("net/iavf: support flexible Rx descriptor in normal path")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The original code uses a heap pointer after it was freed.
Fixes: 460d167958 ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When creating an FDIR rule and parsing the pattern, if the IPv4
fragment type is detected, the flow type is not changed to
ICE_FLTR_PTYPE_FRAG_IPV4 from ICE_FLTR_PTYPE_NONF_IPV4_OTHER. It will
cause a profile conflict with other FDIR rules for the IPv4 other
type.
Fixes: b7e8781de7 ("net/ice: support flow director for IP fragment packet")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The rte_eth_devices array is not in shared memory, so it should not be
referenced by ice_adapter, which is shared by the primary and secondary
processes. Any process that sets ice_adapter->eth_dev will corrupt
another process's context.
This patch removes the field "eth_dev" from ice_adapter.
Now, when the data paths try to access the rte_eth_dev_data instance,
they should replace adapter->eth_dev->data with adapter->pf.dev_data.
Fixes: f9cf4f8641 ("net/ice: support device initialization")
Cc: stable@dpdk.org
Reported-by: Yixue Wang <yixue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yixue Wang <yixue.wang@intel.com>
The flags use_avx2 and use_avx512 are defined as local variables; they
are not visible to the secondary process, so the wrong data path may be
selected. Fix the issue by moving them into struct ice_adapter.
Fixes: ae60d3c9b2 ("net/ice: support Rx AVX2 vector")
Fixes: 2d5f6953d5 ("net/ice: support vector AVX2 in Tx")
Fixes: 7f85d5ebcf ("net/ice: add AVX512 vector path")
Cc: stable@dpdk.org
Reported-by: Yixue Wang <yixue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yixue Wang <yixue.wang@intel.com>
Remove the VSI info from the previous aggregator after moving the VSI
to a new aggregator.
Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
ice_aq_set_pfc_mode is used to configure DSCP.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.
Signed-off-by: Gordon Noonan <gordon.noonan@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the user configures a flow rule with a raw packet via the
"flow_director_filter" command, it resets all previous FDIR input set
flags with "i40e_flow_set_fdir_inset()".
Skip configuring the flow input set when a raw packet rule is used.
Fixes: ff04964ea6 ("net/i40e: fix flow director for common pctypes")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
In the original implementation, device reconfiguration generates a new
default RSS key if the user does not provide one, which is unexpected
when updating a completely unrelated configuration.
This patch keeps the default RSS key unchanged during the lifetime of
the DPDK application, even if there are multiple reconfigurations.
Fixes: 50370662b7 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The array rss_key has size 'vf->vf_res->rss_key_size', so the array
index should be less than that.
Cc: stable@dpdk.org
Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The existing log message is missing a space. Modified it to
a more meaningful log as part of this change.
Before this patch:
bnxt_dev_init(): bnxtfound at mem D67E0000, node addr 0x2101112000M
With this patch:
bnxt_dev_init(): Found bnxt device at mem D67E0000, node addr 0x2101112000M
Fixes: 1bf01f5135 ("net/bnxt: prevent device access when device is in reset")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
On Thor, driver must use HWRM to access the timestamp information.
Driver should not advertise PTP support to application
if PTP information is not available via HWRM commands.
Fixes: 6cbd89f9f3 ("net/bnxt: support PTP for Thor")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Use the function bnxt_vnic_destroy() to destroy VNIC resources
and thereby eliminate some duplicated code.
Fixes: 8d0a244b40 ("net/bnxt: cleanup VNIC after flow validate")
Fixes: 49d0709b25 ("net/bnxt: delete and flush L2 filters cleanly")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
During flow destroy, when bnxt_hwrm_tunnel_redirect_free() fails,
driver is not setting flow error using "rte_flow_error_set".
Fixes: 11e5e19695 ("net/bnxt: support redirecting tunnel packets to VF")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Resources should be freed on error conditions. i.e, VNIC and
VNIC context created in HW and memory allocated in
bnxt_vnic_grp_alloc() should be freed.
Added a new function bnxt_vnic_destroy() to do the cleanup.
This lightweight function can be used in flow destroy/flush
path to avoid duplicate code as well.
Fixes: d24610f7bf ("net/bnxt: allow flow creation when RSS is enabled")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Also removed a log message which does not convey any
useful information.
Fixes: d24610f7bf ("net/bnxt: allow flow creation when RSS is enabled")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In bnxt_flow_validate(), when bnxt_get_unused_filter() fails due to
no filter resources available, driver is not setting flow error using
"rte_flow_error_set".
Also, fixed the error code.
Fixes: 5ef3b79fdf ("net/bnxt: support flow filter ops")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The bnxt_vnic_prep() can fail due to multiple reasons.
But when bnxt_vnic_prep() fails, PMD is not returning
the actual error/string to the application.
Fix it by moving the "rte_flow_error_set" to bnxt_vnic_prep()
to set the actual error code.
Fixes: d24610f7bf ("net/bnxt: allow flow creation when RSS is enabled")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
There is a HW bug that can result in certain stats being reported as
zero.
Work around this by ignoring stats with a value of zero based on the
previously stored snapshot of the same stat.
This bug mainly manifests in the output of func_qstats as FW aggregates
each ring's stat value to give the per-function stat, and if one of
them is zero, the per-function stat value ends up being lower than the
previous snapshot, which shows up as a zero PPS value in testpmd.
Eliminate the invocation of func_qstats and aggregate the per-ring stat
values in the driver itself to derive the func_qstats output, after
accounting for the spurious zero stat value.
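An illustrative sketch of the snapshot idea (hypothetical helper, not the driver's code):
#include <stdint.h>

/* Return the previous snapshot when HW reports a spurious zero,
 * otherwise update the snapshot with the new reading. */
static uint64_t
stat_sample(uint64_t hw_value, uint64_t *prev)
{
        if (hw_value == 0)
                return *prev;
        *prev = hw_value;
        return hw_value;
}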
Bugzilla ID: 641
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
There is a rare hardware bug that can cause a bad opaque value in the RX
or TPA start completion. When this happens, the hardware may have used the
same buffer twice for 2 Rx packets. In addition, the driver might also
crash later using the bad opaque as an index into the ring.
The Rx opaque value is predictable and is always monotonically increasing.
The workaround is to keep track of the expected next opaque value and
compare it with the one returned by hardware during RX and TPA start
completions. If they miscompare, log it, discard the completion,
schedule a ring reset and move on to the next one.
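The workaround idea, sketched with hypothetical names (not the driver's actual structures):
#include <stdbool.h>
#include <stdint.h>

/* Compare the completion's opaque with the expected, monotonically
 * increasing value; on mismatch the caller logs it, drops the
 * completion and schedules a ring reset. */
static bool
rx_opaque_ok(uint32_t cmpl_opaque, uint32_t *expected)
{
        if (cmpl_opaque != *expected)
                return false;
        (*expected)++;
        return true;
}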
Fixes: 0958d8b643 ("net/bnxt: support LRO")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The burst receive function should return all packets currently
present in the receive ring up to the requested burst size,
update vector mode receive functions accordingly.
Fixes: 3983583414 ("net/bnxt: support NEON")
Fixes: bc4a000f2f ("net/bnxt: implement SSE vector mode")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Make the definition of the table used to map hardware packet type
information to DPDK packet type more generic.
Add macro definitions for constants used in creating table
indices, use these to eliminate raw constants in code.
Add build-time assertions to validate ptype mapping constants.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Check that pointers are valid before using them.
Fixes: 7bc8e9a227 ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The VF reset can be triggered by the PF reset event; the PCI bus
master will then be cleared and the VF will not be allowed to issue any
memory or I/O requests.
So after the reset event is detected, always enable the PCI bus master.
If that fails, the device or system may be in an invalid state, so keep
the VF reset state to mark it as an I/O error.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The VF reset can be triggered by the PF reset event; the PCI bus
master will then be cleared and the VF will not be allowed to issue any
memory or I/O requests.
So after the reset event is detected, always enable the PCI bus master.
If that fails, the device or system may be in an invalid state, so keep
the VF reset state to mark it as an I/O error.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The modify_field flow API assumes that the META item is 32 bits wide.
But the C register that is used for META item can be 16 or 32 bits
wide depending on kernel and firmware configurations.
Take this into consideration and use the appropriate META width.
Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In the past, all the queues and other hardware objects were created
through the Verbs interface. Currently, most object creation is
migrated to the DevX interface by default, including queues. Only when
DV is disabled by a device arg or eswitch is enabled are all or some
of the objects created through the Verbs interface.
When using the DevX interface to create queues, the kernel driver
behavior is different from the case using Verbs. Tx loopback cannot
work properly even if the Tx and Rx queues are configured with the
loopback attribute. To fix Tx self-loopback support, a Verbs dummy
queue pair needs to be created to trigger the kernel to enable the
global loopback capability.
This is only required when a TIR is created for Rx and loopback is
needed. Only a CQ and a QP are needed for this case; no WQ (RQ) needs
to be created.
Bugzilla ID: 645
Fixes: 6deb19e1b2 ("net/mlx5: separate Rx queue object creations")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When the port is in link-down state, it is meaningless to display the
port link speed. It should be an undefined state.
Fixes: 59fad0f321 ("net/hns3: support link update operation")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Whether the PFC enable bit ("pfc_en") is changed is one of the
conditions for reconfiguring the DCB. Currently, pfc_en is not rolled
back when the DCB configuration fails. This patch fixes it.
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the DCB configuration takes effect in the dev_start stage, and
the mapping between TCs and queues are also updated in this stage.
However, the DCB configuration is delivered in the dev_configure stage.
If the configuration fails, it should be intercepted in this stage. If
the configuration succeeds, the user should be able to obtain the
corresponding updated information, such as the mapping between TCs and
queues. So this patch moves DCB configuration to dev_configure.
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Packet buffer allocation and hardware pause configuration fail normally
when a reset occurs. If the execution fails, rollback of the packet
buffer still fails. So this rollback is meaningless.
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the "requested_fc_mode" lacks rollback when enabling link
FC or PFC fails.
For example, this may result in an incorrect FC mode after a reset.
Fixes: d4fdb71a0e ("net/hns3: fix flow control mode")
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The Rx/Tx queue numbers should be greater than the TC number; this
patch adds this check for the PF before updating the mapping between
TCs and queues.
Fixes: a951c1ed3a ("net/hns3: support different numbers of Rx and Tx queues")
Fixes: 76d794566d ("net/hns3: maximize queue number")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The net/vhost PMD does not comply with the ethdev offload API as it
does not report Rx/Tx offload capabilities with respect to TSO and
checksum offloading.
On the other hand, the net/vhost PMD lets the guest negotiate TSO and
checksum offloading.
Changing the behavior for Rx/Tx offload flags handling won't
improve/fix this situation and will break applications that might have
been relying on implicit support of TSO in this driver.
Revert this behavior change until we have a complete fix.
Fixes: ca7036b4af ("vhost: fix offload flags in Rx path")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A meter may hold Rx queue references in its sub-policies.
In the stop operation, all the Rx queues are released.
Wrongly, the meter references were not released before
destroying the Rx queues, which caused an error in stop.
Release the Rx queue meter references in the stop operation.
Fixes: fc6ce56bba ("net/mlx5: prepare sub-policy for flow with meter")
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, counter offset support is discovered by creating a rule
with an invalid offset counter and a drop action in the root table. If
the rule creation fails with the EINVAL errno, that means the counter
offset is not supported in the root table.
However, the drop action may not be supported in the root table by some
rdma-core versions. In this case, the discovery code will not work
properly.
This commit changes the flow attribute to egress. That removes all the
extra fate actions from the flow to avoid any unsupported fate action
making the discovery code fail from time to time.
Fixes: 994829e695 ("net/mlx5: remove single counter container")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, when configuring a mlx device, it allocates its
own process-private data in mlx5_proc_priv_init() and only frees
it when closing the device. This leads to a memory leak
when a device is configured repeatedly.
For example:
for(...)
do
rte_eth_dev_configure
rte_eth_rx_queue_setup
rte_eth_tx_queue_setup
rte_eth_dev_start
rte_eth_dev_stop
done
Fixes: 120dc4a7dc ("net/mlx5: remove device register remap")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Currently, when configuring a mlx device, it allocates its
own process-private data in mlx5_proc_priv_init() and only frees
it when closing the device. This leads to a memory leak
when a device is configured repeatedly.
For example:
for(...)
do
rte_eth_dev_configure
rte_eth_rx_queue_setup
rte_eth_tx_queue_setup
rte_eth_dev_start
rte_eth_dev_stop
done
Fixes: 97d37d2c1f ("net/mlx4: remove device register remap")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Depending on the breakout mode of the physical ports the internal NFP
port number might differ from the actual physical port number. Prior to
this patch the physical port number was used when making configuration
changes to the physical ports (enable, admin up etc). After this change
the internal port number is now correctly used for configuration
changes.
Fixes: 5e15e799d6 ("net/nfp: create separate entity for PF device")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
When getting the meter flow_id bits, the case where flow_id is 0 is
not handled correctly.
This patch fixes the issue: when flow_id is 0, treat it as 1 bit.
Fixes: 83306d6c46 ("net/mlx5: fix meter statistics")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
One of the user parameters for the flow AGE action is the
action context. This context should be provided back to the
user when the action is aged-out.
While this context is NULL, a default value should be provided
by the PMD: the rte_flow pointer in case of rte_flow_create API
and the action pointer in case of the rte_flow_action_handle API.
The default for rte_flow_action_handle was set correctly,
while in case of rte_flow_create it wrongly remained NULL.
This patch sets the default value for the rte_flow_create case to
the rte_flow pointer.
Fixes: f9bc5274a6 ("net/mlx5: allow age modes combination")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the ASO age action is supported in the non-root table,
and the counter-based age action is used in the root table.
The FDB table skips group 0 in the MLX5 PMD by adding an implicit rule
that jumps to a non-root table, but the PMD code uses the original
group value for checking.
This patch adds the transfer check for the ASO age action.
Fixes: f9bc5274a6 ("net/mlx5: allow age modes combination")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently RSS expansion only supports GRE and GRE KEY.
This patch adds RSS expansion for NVGRE item so PMD can expand flow item
correctly.
Fixes: ea81c1b816 ("net/mlx5: fix NVGRE matching")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When there is a mirror action prior to the meter action in an E-Switch
flow, it means that the packets should first be duplicated to the port,
and then metered and sent to the original destination.
The MLX5 PMD splits the above E-Switch flow into two sub-flows,
similar to mirror with modify action before.
Fixes: 07627fbf15 ("net/mlx5: support E-Switch mirroring with modify action")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In case of bonding, orchestrator wants to use same devargs for LAG and
non-LAG scenario to probe representor on PF1 using PF1 PCI address
like "<DBDF_PF1>,representor=pf1vf[0-3]".
This patch changes PCI address check policy to allow PF1 PCI address for
representors on PF1.
Note: detaching PF0 device can't remove representors on PF1. It's
recommended to use primary(PF0) PCI address to probe representors on
both PFs.
Fixes: f926cce3fa ("net/mlx5: refactor bonding representor probing")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The memory barrier is used to ensure that the response is returned
only after the Tx/Rx function is set, so it should be placed after the
Rx/Tx function is set.
Fixes: 2aac5b5d11 ("net/mlx5: sync stop/start with secondary process")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The memory barrier is used to ensure that the response is returned
only after the Tx/Rx function is set, so it should be placed after the
Rx/Tx function is set.
Fixes: 0203d33a10 ("net/mlx4: support secondary process")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reproduced with '--buildtype=debugoptimized' config,
compiler version: gcc (GCC) 12.0.0 20210509 (experimental)
There are multiple build errors, like:
In file included from ../drivers/net/tap/tap_flow.c:13:
In function ‘rte_jhash_2hashes’,
inlined from ‘rte_jhash’ at ../lib/hash/rte_jhash.h:284:2,
inlined from ‘tap_flow_set_handle’ at
../drivers/net/tap/tap_flow.c:1306:12,
inlined from ‘rss_enable’ at ../drivers/net/tap/tap_flow.c:1909:3,
inlined from ‘priv_flow_process’ at
../drivers/net/tap/tap_flow.c:1228:11:
../lib/hash/rte_jhash.h:238:9:
warning: ‘flow’ may be used uninitialized [-Wmaybe-uninitialized]
238 | __rte_jhash_2hashes(key, length, pc, pb, 1);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/tap/tap_flow.c: In function ‘priv_flow_process’:
../lib/hash/rte_jhash.h:81:1: note: by argument 1 of type ‘const void *’
to ‘__rte_jhash_2hashes.constprop’ declared here
81 | __rte_jhash_2hashes(const void *key, uint32_t length, uint32_t *pc,
| ^~~~~~~~~~~~~~~~~~~
../drivers/net/tap/tap_flow.c:1028:1: note: ‘flow’ declared here
1028 | priv_flow_process(struct pmd_internals *pmd,
| ^~~~~~~~~~~~~~~~~
Fix the strict aliasing issue by using a union.
Bugzilla ID: 690
Fixes: de96fe68ae ("net/tap: add basic flow API patterns and actions")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Reproduced with '--buildtype=debugoptimized' config,
compiler version: gcc (GCC) 12.0.0 20210509 (experimental)
There are multiple build errors, like:
../drivers/net/ice/base/ice_switch.c: In function ‘ice_add_marker_act’:
../drivers/net/ice/base/ice_switch.c:3727:15:
warning: array subscript ‘struct ice_aqc_sw_rules_elem[0]’
is partly outside array bounds of ‘unsigned char[52]’
[-Warray-bounds]
3727 | lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
| ^~
In file included from ../drivers/net/ice/base/ice_type.h:52,
from ../drivers/net/ice/base/ice_common.h:8,
from ../drivers/net/ice/base/ice_switch.h:8,
from ../drivers/net/ice/base/ice_switch.c:5:
../drivers/net/ice/base/ice_osdep.h:209:29:
note: referencing an object of size 52 allocated by ‘rte_zmalloc’
209 | #define ice_malloc(h, s) rte_zmalloc(NULL, s, 0)
| ^~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ice/base/ice_switch.c:3720:50:
note: in expansion of macro ‘ice_malloc’
lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
These errors are mainly because the allocated memory is cast to
"struct ice_aqc_sw_rules_elem *" but the allocated size is less than
the size of "struct ice_aqc_sw_rules_elem".
"struct ice_aqc_sw_rules_elem" holds multiple other structs in unions;
based on which one is used, the allocated memory being less than the
size of "struct ice_aqc_sw_rules_elem" is logically correct, but the
compiler complains about it.
Since the allocation is done explicitly and both producer and consumer
are internal, it is safe to ignore the warnings. Also, to prevent any
side effect, the compiler warning is disabled for now, until a proper
fix is done.
The warning disabling is reduced to gcc >= 11 versions.
Bugzilla ID: 678
Fixes: c7dd159311 ("net/ice/base: add virtual switch code")
Fixes: 02acdce2f5 ("net/ice/base: add MAC filter with marker and counter")
Fixes: f89aa3affa ("net/ice/base: support removing advanced rule")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Fixed speed mode is not supported currently; this patch
removes the configuration for this mode and adds fault handling
for ETH_LINK_SPEED_FIXED.
Fixes: 4f09bc55ac ("net/igc: implement device base operations")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Junfeng Guo <junfeng.guo@intel.com>
The kernel driver supports the VF RSS configuration messages
"VIRTCHNL_OP_GET_RSS_HENA_CAPS" and "VIRTCHNL_OP_SET_RSS_HENA";
this patch adds PMD support for these messages.
Fixes: b81295c474 ("net/i40e: add user callback for VF to PF message")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
In the loop, when the index of array "vsi->rss_key" is equal
to "vsi->rss_key_size", the array will be accessed out of bounds.
Fixes: 50370662b7 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The QW0 of Tx context descriptor should be reset to 0, otherwise the
previous hardware writeback value may pollute the next context descriptor
write.
Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
For dev_ops not supported by the secondary process, either return
-EPERM or return without doing anything. In both cases log a warning.
It is still the application's responsibility to avoid such calls, and
these changes are for debugging/informational purposes.
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
rte_pci_device and rte_eth_dev are process-local structures. Therefore
ena_adapter::pdev and ena_adapter::rte_dev cannot be used universally.
Both ena_timer_wd_callback and ena_interrupt_handler_rte need access to
the rte_eth_dev, but as they are set up and executed in the primary
process, it is safe to pass them the same pointer that is used for
the device configuration.
In all other cases, except eth_ena_dev_init(), the rte_eth_dev_data
is used instead.
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
To make it possible for the app to determine whether the hash was
calculated for the packet or not, PKT_RX_RSS_HASH should be set in the
mbuf's ol_flags.
As the PMD wasn't setting that, the application couldn't properly check
whether there is a hash.
The hash is valid only if it's UDP or TCP and the IP packet wasn't
fragmented.
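In mbuf API terms, the missing step amounts to the following sketch (rss_hash stands for the value read from the Rx descriptor):
#include <stdint.h>
#include <rte_mbuf.h>

static void
set_rss_hash(struct rte_mbuf *mbuf, uint32_t rss_hash)
{
        mbuf->hash.rss = rss_hash;
        mbuf->ol_flags |= PKT_RX_RSS_HASH; /* tells the app the hash is valid */
}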
Fixes: e5df9f33db ("net/ena: fix passing RSS hash to mbuf")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
The thread argument was changed to a wrong value when the thread name
was added; fix that bug.
Fixes: fdefe038eb ("net/ark: set generator delay thread name")
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tunnel offload API requires application to query PMD for specific flow
items and actions. Application uses these PMD specific elements to
build flow rules according to the tunnel offload model.
The model does not restrict private elements location in a flow rule,
but the current MLX5 PMD implementation expects that tunnel offload
rule will begin with PMD specific elements.
The patch removes that placement limitation.
Fixes: 4ec6360de3 ("net/mlx5: implement tunnel offload")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The admin-configured vNIC settings (i.e. via CIMC or UCSM) now include
Geneve offload. Use that setting to decide whether to enable or
disable Geneve offload and remove the devarg 'geneve-opt'.
Also, the firmware now allows the driver to change the Geneve port
number. So extend udp_tunnel_port_{add,del} to accept Geneve port, in
addition to VXLAN.
Fixes: 93fb21fdbe ("net/enic: enable overlay offload for VXLAN and GENEVE")
Cc: stable@dpdk.org
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
A terminated pthread should be joined or detached so that its associated
resources are released.
The "ice-reset-<vf_id>" threads are used to service some reset task in
the background, but they are never joined by the thread that created
them.
The easiest solution is to detach new threads.
The Windows EAL did not provide a pthread_detach wrapper but there is no
resource to release for Windows threads, so add an empty wrapper.
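A minimal sketch of the pattern (the service function name is hypothetical):
#include <pthread.h>

static void *
reset_service(void *arg)
{
        (void)arg;
        /* ... background reset handling ... */
        return NULL;
}

static void
spawn_detached_reset_thread(void *arg)
{
        pthread_t tid;

        if (pthread_create(&tid, NULL, reset_service, arg) == 0)
                pthread_detach(tid); /* resources released when the thread exits */
}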
Fixes: 3b3757bda3 ("net/ice: get VF hardware index in DCF")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
A terminated pthread should be joined or detached so that its associated
resources are released.
The "ark-delay-pg" thread is just used to delay some task but it is never
joined by the thread that created it.
The easiest solution is to detach the new thread.
Fixes: 727b3fe292 ("net/ark: integrate PMD")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ed Czeck <ed.czeck@atomicrules.com>
If the FEC mode is not supported, an error code should be returned.
This patch also adds a missing space when logging error info.
Fixes: 9bf2ea8dbc ("net/hns3: support FEC")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The memory barrier is used to ensure that the response is returned
only after the Tx/Rx function is set, so it should be placed after the
Rx/Tx function is set.
Fixes: 23d4b61fee ("net/hns3: support multiple process")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The secondary process should not send a request to start/stop Rx/Tx;
this patch fixes it.
Fixes: 23d4b61fee ("net/hns3: support multiple process")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The mailbox message id is uint8_t, but the unsupported mailbox message
id was logged as uint16.
Fixes: 463e748964 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The TM QCN error event should be reported by RAS rather than MSIX.
This patch also adds the FIFO interrupt enable configuration before the
TM QCN error event is enabled.
Fixes: f53a793bb7 ("net/hns3: add more hardware error types")
Fixes: 3903c05382 ("net/hns3: remove read when enabling TM QCN error event")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Support enabling and disabling QinQ hardware strip when configuring
VLAN offload with the QinQ strip mask. If packets with a QinQ tag are
to be used for RSS, users should enable QinQ strip before configuring
RSS.
Fixes: 220b0e49bc ("net/txgbe: support VLAN")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
The hardware doesn't support counting the number of bytes that pass
through the FDIR rule. Therefore, the corresponding output parameters
(e.g. bytes_set/bytes) are set to zero.
Fixes: fcba820d9b ("net/hns3: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, in the VF reset scenario, the VF performs the set-alive
operation before the configuration restore is completed, which may
cause the hardware to work in an abnormal state.
This patch fixes the problem by setting the VF alive after the
configuration restore is completed.
Fixes: a5475d61fa ("net/hns3: support VF")
Cc: stable@dpdk.org
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The FDIR hash map holds the pointers of the FDIR rule elements; it
needs to be set to NULL when clearing all FDIR rules.
Fixes: fcba820d9b ("net/hns3: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
If clearing FDIR rules fails, the error code is logged, but the error
code is useless because it is the sum of all failure codes.
This patch fixes it by logging the success count and failure count.
Fixes: fcba820d9b ("net/hns3: support flow director")
Fixes: 8eed8acc81 ("net/hns3: add error code to some logs")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Prior to this change, two implementations of rx_syscall_handler
existed although only one was needed (for the zero copy path which
is only available from kernel 5.4 and onwards). Remove the second
definition from compat.h and move the first definition back to where
it is called in the Rx function. Doing this removes a build warning
on kernels before 5.4 which complained about the second function
being defined but not used.
Fixes: 2aa51cdd55 ("net/af_xdp: fix trigger for syscall on Tx")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Before this change the dev_infos callback always reported RSS
capabilities regardless of whether the capability is supported by the
device or not. First check the capabilities field in the BAR of the
device and advertise RSS functionality accordingly.
Fixes: 8b945a7f7d ("drivers/net: update Rx RSS hash offload capabilities")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
This version update contains:
* memcpy mapping to the dpdk-optimized version.
* ena_com (HAL) update to the latest version (from 18.09.2020).
* Bug fixes for the large LLQ headers and devargs parsing.
* Bug fix for the default ring size.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Remove invalid ring size alignment logic and add default Rx and Tx port
ring sizes to the device info spec.
The logic in lines 1297 and 1371 is invalid. The
RTE_ETH_DEV_FALLBACK_RX_RINGSIZE (and the TX counterpart) is a value
that rte_eth_rx_queue_setup() will set if
dev_info.default_rxportconf.ring_size is 0 and user provided 0 in
nb_rx_desc argument. However the current code treats it as a hint for
the PMD to change the ring size to internal defaults.
Additionally since the ENA_DEFAULT_RING_SIZE is defined, report it in
the device capabilities so that both rte_ethdev code and the user can
utilize it for device configuration.
Fixes: ea93d37eb4 ("net/ena: add HW queues depth setup")
Cc: stable@dpdk.org
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
pthread_cond_timedwait() may spuriously wakeup according to POSIX.
Therefore it is required to check whether predicate is actually true
before finishing the waiting loop.
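The required pattern, as a sketch (lock, condition variable, deadline and flag are assumed to exist in the caller's context):
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

/* Wait until *flag becomes true or the absolute deadline passes; the
 * predicate is re-checked after every wakeup to cope with spurious ones. */
static int
wait_for_flag(pthread_mutex_t *lock, pthread_cond_t *cond,
              const struct timespec *deadline, volatile bool *flag)
{
        int rc = 0;

        pthread_mutex_lock(lock);
        while (!*flag && rc != ETIMEDOUT)
                rc = pthread_cond_timedwait(cond, lock, deadline);
        pthread_mutex_unlock(lock);
        return *flag ? 0 : rc;
}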
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
swap*_*_le() functions are not used anywhere and besides there are rte
alternatives already present.
Fixes: 1173fca25a ("ena: add polling-mode driver")
Cc: stable@dpdk.org
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
As the documentation of rte_kvargs_parse() states, the valid_keys
argument must be NULL-terminated. Lack of this may cause a
segmentation fault if the passed devarg is different from the
supported value.
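For example, a sketch of a NULL-terminated key list (the key name is an assumption based on the large-LLQ-headers devarg):
#include <rte_kvargs.h>

static const char *const valid_keys[] = {
        "large_llq_hdr", /* assumed devarg name */
        NULL,            /* mandatory terminator for rte_kvargs_parse() */
};

static struct rte_kvargs *
parse_devargs(const char *args)
{
        return rte_kvargs_parse(args, valid_keys);
}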
Fixes: 8a7a73f26c ("net/ena: support large LLQ headers")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The code incorrectly checked the return value of comparison when parsing
the argument key name. The return value of strcmp should be compared
to 0 to identify a match.
Fixes: 8a7a73f26c ("net/ena: support large LLQ headers")
Cc: stable@dpdk.org
Signed-off-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
1. As memzone uses unique names, changed alloc coherent macro to use
64 bit size atomic variable to increase the memzone name space
2. "handle" param name change to be consistent with other macros
3. Variable definition displacement
4. Backslash alignment to column 80
Signed-off-by: Amit Bernstein <amitbern@amazon.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>