Take into account VLAN presence fields in items ETH and VLAN.
Provided that the item ETH does not match on the EtherType,
the pattern behaviour will be as follows:
- ETH (mask->has_vlan = 0) | IPv4 = match both tagged and untagged;
- ETH (mask->has_vlan = 1) | IPv4 = match as per spec->has_vlan;
- ETH (mask->has_vlan = 0) | VLAN | IPv4 = match only tagged.
Similar logic applies to double tagging.
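A minimal illustrative sketch of the second case above, with
spec->has_vlan = 1 so that only tagged IPv4 packets match; the queue
action, attribute and port id are generic rte_flow boilerplate, not part
of this patch:

    #include <rte_flow.h>

    /* ETH item requiring a VLAN tag via the has_vlan presence bit. */
    static const struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
    static const struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };
    static const struct rte_flow_action_queue queue = { .index = 0 };

    static const struct rte_flow_attr attr = { .ingress = 1 };
    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH,
          .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* rte_flow_create(port_id, &attr, pattern, actions, &error); */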
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Introduce necessary infrastructure for these fields to
be set, validated and compared during class comparison.
Enumeration and mappings envisaged are MCDI-compatible.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
In the octeontx2 poll mode driver, the RQ is connected to a CQ, which is
responsible for asserting backpressure to the CGX channel.
When the event eth Rx adapter is configured, the RQ is connected to an
event queue instead; to enable backpressure, the AURA assigned to a
given RQ must be configured to backpressure the CGX channel.
The event device expects a unique AURA to be configured per ethernet
device. If multiple RQs from different ethernet devices use the same
AURA, backpressure will be disabled; the application can override this
using devargs:

-a 0002:0e:00.0,force_rx_bp=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Shift is used to generate an 8-bit saturated value from the current aura
used count. The shift value should be derived from the log2 of the block
count if the count is greater than 256; otherwise the shift should be 0.
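A minimal sketch of the intended derivation (the "- 8" scaling assumes
the 256-entry / 8-bit saturation threshold described above; this is not
the actual mempool driver code):

    #include <rte_common.h> /* rte_log2_u32() */

    static inline uint32_t
    aura_shift(uint32_t block_count)
    {
        /* Counts up to 256 fit in 8 bits directly; larger counts are
         * scaled down by log2(block_count) - 8 so the used count still
         * saturates into an 8-bit value. */
        return block_count > 256 ? rte_log2_u32(block_count) - 8 : 0;
    }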
Fixes: 7bcc47cbe2 ("mempool/octeontx2: add mempool alloc op")
Cc: stable@dpdk.org
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Set the appropriate capability flags for the RX, crypto and timer
eventdev adapters to use.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Tested-by: Heng Wang <heng.wang@ericsson.com>
clang-10 build issue log:
drivers/event/cnxk/cnxk_tim_worker.h:372:23:
warning: value size does not match register size
specified by the constraint and modifier [-Wasm-operand-widths]
: [rem] "=&r"(rem)
^
cnxk/cnxk_tim_worker.h:365:17: note: use constraint modifier "w"
"ldxr %[rem], [%[crem]] \n"
^~~~~~
%w[rem]
Changed variable type to match register size, which placates clang.
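An aarch64-only sketch of the pattern in question (the helper and its
argument are hypothetical, not the actual cnxk timer code): with
"ldxr %[rem], ..." the value is loaded into a full 64-bit X register, so
the C variable bound to the operand must be 64 bits wide.

    #include <stdint.h>

    static inline uint64_t
    load_exclusive(const uint64_t *addr)
    {
        uint64_t rem; /* 64-bit type matches the X register used by ldxr */

        asm volatile("ldxr %[rem], [%[addr]]\n"
                     : [rem] "=&r"(rem)
                     : [addr] "r"(addr)
                     : "memory");
        return rem;
    }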
Fixes: 300b796262 ("event/cnxk: add timer arm routine")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Packets were corrupted when TSO was requested without a checksum update.
Enable checksum offload automatically when only TSO is requested.
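A hedged sketch of the idea (the feature bit numbers are the standard
virtio-net ones; the helper is illustrative, not the actual mlx5 vDPA
code):

    #include <stdint.h>

    #define VIRTIO_NET_F_CSUM      0
    #define VIRTIO_NET_F_HOST_TSO4 11
    #define VIRTIO_NET_F_HOST_TSO6 12

    static uint64_t
    fixup_offload_features(uint64_t features)
    {
        /* TSO requires the device to handle partial checksums too. */
        if (features & ((1ULL << VIRTIO_NET_F_HOST_TSO4) |
                        (1ULL << VIRTIO_NET_F_HOST_TSO6)))
            features |= 1ULL << VIRTIO_NET_F_CSUM;
        return features;
    }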
Fixes: 2aa8444b00 ("vdpa/mlx5: support stateless offloads")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch adds default RSS support for IPv4 and IPv6 fragment packets.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The rte_eth_devices array is not in shared memory, so it should not be
referenced by i40e_adapter, which is shared by the primary and secondary
processes. Any process that sets i40e_adapter->eth_dev will corrupt
another process's context.
This patch removes the field "eth_dev" from i40e_adapter.
Now, when the data paths need to access the rte_eth_dev_data instance,
they should replace adapter->eth_dev->data with adapter->pf.dev_data.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
For the i40e vector Tx path, even if the mbuf fast free offload is
requested, no mbuf fast free operations are executed. To fix this, add
mbuf fast free support to the vector Tx path.
Furthermore, when mbuf fast free mode is in effect, all mbufs in a queue
come from the same mempool and have refcnt = 1, so the buffers can be
freed in bulk.
For vector path in arm platform:
In n1sdp, performance is improved by 18.4%;
In thunderx2, performance is improved by 23%.
For vector path in x86 platform:
No performance changes.
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
For the i40e scalar Tx path, when mbuf fast free mode is enabled, all
mbufs in a queue come from the same mempool and have refcnt = 1, so the
buffers can be freed in bulk.
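A minimal sketch of the bulk-free idea (the helper name and the gathered
array are illustrative, not the actual i40e code):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static inline void
    tx_fast_free_bulk(struct rte_mbuf **pkts, uint16_t n)
    {
        if (n == 0)
            return;
        /* Fast free guarantees a single mempool and refcnt == 1, so no
         * per-mbuf reference handling is needed; return all at once. */
        rte_mempool_put_bulk(pkts[0]->pool, (void **)pkts, n);
    }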
Following are the test results with this patch:
MRR L3FWD Test:
two ports & bi-directional flows & one core
RX API: i40e_recv_pkts_bulk_alloc
TX API: i40e_xmit_pkts_simple
ring_descs_size = 1024;
Ring_I40E_TX_MAX_FREE_SZ = 64;
tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;
For scalar path in arm platform with default 'tx_rs_thresh':
In n1sdp, performance is improved by 7.9%;
In thunderx2, performance is improved by 7.6%.
For scalar path in x86 platform with default 'tx_rs_thresh':
performance is improved by 4.7%.
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The HW debug mask is always zero, so the user cannot enable the related
debug functions such as ICE_DBG_XXX. Add the devarg 'hw_debug_mask' to
set the debug mask for log output at runtime.
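Example usage (the PCI address and mask value are only illustrative):
-a 0000:88:00.0,hw_debug_mask=0x80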
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Currently, a conflict error occurs when running the following commands:
1. flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end
   actions queue index 2 / end
2. flow destroy 0 rule 0
3. flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end
   actions queue index 2 / end
This patch fixes the input set conflict issue.
Fixes: 42044b69c6 ("net/i40e: support input set selection for FDIR")
Fixes: 4a072ad434 ("net/i40e: fix flow director config after flow validate")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Lingli Chen <linglix.chen@intel.com>
Add a check in the Tx packet preparation function to guarantee that a
packet with a specific user priority is distributed to the correct Tx
queue according to the configured Tx queue TC mapping.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds support for the VF to configure ETS-based Tx QoS,
including querying the current QoS configuration from the PF and
configuring the queue TC mapping. The PF QoS is configured in advance,
and the queried info is provided to the user for later use. VF queues
are mapped to different TCs in the PF through virtchnl.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch supports the ETS-based QoS configuration. It enables the DCF
to configure bandwidth limits for each VF VSI of different TCs. A
hierarchical scheduler tree is built with port, TC and VSI nodes.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the link status changes, the DCF will receive a virtchnl PF event
message. Add support to handle this event, change the link status and
update the link info.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
In the adminq command that queries port ETS, the root node TEID is
needed. However, for the DCF, the root node is not initialized, which
will cause an error when the variable is referenced. In this patch,
check whether the root node is available before using it.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds new virtchnl opcodes and structures for QoS
configuration, which include:
1. VIRTCHNL_VF_OFFLOAD_TC, to negotiate the capability of supporting QoS
configuration. If both the VF and the PF have this flag, the ETS-based
QoS offload function is supported.
2. VIRTCHNL_OP_DCF_CONFIG_BW: the DCF is supposed to configure min and
max bandwidth for each VF per enabled TC. To make the VSI node bandwidth
configuration work, the DCF also needs to configure the TC node
bandwidth directly.
3. VIRTCHNL_OP_GET_QOS_CAPS: the VF queries the current QoS
configuration, such as enabled TCs, arbiter type, up2tc and bandwidth of
the VSI node. The configuration was previously set by DCB and the DCF,
and is now the potential QoS capability of the VF. The VF can take it as
a reference when configuring the queue TC mapping.
4. VIRTCHNL_OP_CONFIG_TC_MAP: set the VF queue to TC mapping for all Tx
and Rx queues. Queues mapped to one TC should be contiguous and all
allocated queues should be mapped.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a specific Rx path for AVX2.
This path supports HW offload features such as checksum, VLAN stripping
and RSS hash, and is chosen automatically according to the
configuration. 'inline' is used so that the specialized copies of the
code are generated by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Add a specific Tx path for AVX2.
This path supports HW offload features such as checksum insertion and
VLAN insertion, and is chosen automatically according to the
configuration. 'inline' is used so that the specialized copies of the
code are generated by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Since the PFs do not share the same structure space, the first mask
value should start at 0 instead of hw->pf_id * per_pf to avoid address
overflow. Otherwise, the address space will overlap when
masks.first + masks.count > ICE_PROF_MASK_COUNT, which may lead to
unexpected variable assignment and cause a segmentation fault.
Fixes: 9467486f17 ("net/ice/base: enable masking for RSS and FD field vectors")
Cc: stable@dpdk.org
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
i40e supports the following RSS hash function types: default (Toeplitz),
symmetric Toeplitz, and simple XOR. However, when the filter engine
parses the pattern action, it only supports symmetric Toeplitz and
default.
Add simple XOR and Toeplitz hash function support when parsing the
pattern action.
Fixes: ef4c16fd91 ("net/i40e: refactor RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The len variable, used in the computation of max_pkt_len, could
overflow if used to store the result of the following computation:
ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len
Since the mbuf size can be defined with a large value (e.g. 13312) and
ICE_SUPPORT_CHAIN_NUM is defined as 5, the computation above could
result in a value bigger than UINT16_MAX.
The result is that jumbo frames will not work properly.
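A minimal sketch of the overflow and the fix (ICE_SUPPORT_CHAIN_NUM and
the example buffer length are taken from the text above; the helper
itself is hypothetical):

    #include <stdint.h>

    #define ICE_SUPPORT_CHAIN_NUM 5

    static uint32_t
    max_chained_len(uint16_t rx_buf_len)
    {
        /* 5 * 13312 = 66560, which wraps to 1024 in a uint16_t;
         * computing and storing the result in 32 bits avoids the
         * truncation. */
        return (uint32_t)ICE_SUPPORT_CHAIN_NUM * rx_buf_len;
    }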
Fixes: 1b009275e2 ("net/ice: add Rx queue init in DCF")
Cc: stable@dpdk.org
Signed-off-by: Tudor Cornea <tudor.cornea@keysight.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The flags use_avx2 and use_avx512 are defined as local variables, so the
secondary process is not aware of them and the wrong data path may be
selected. Fix the issue by moving them into struct i40e_adapter.
Fixes: 6ada10deac ("net/i40e: remove devarg use-latest-supported-vec")
Fixes: e6a6a13891 ("net/i40e: add AVX512 vector path")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Support AVF RSS for the inner header of GRE tunnel packets. RSS can be
based on the inner IP src + dst addresses and the TCP/UDP src + dst
ports.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a virtchnl protocol header type to support AVF FDIR and RSS for GRE.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds IPV4/IPV6 SRC/DST input set for ESP/NAT_T_ESP to
support outer IP match.
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds IPV4/IPV6 SRC/DST input set for ESP to support
outer IP match.
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
mz is known to be NULL. Do not use it to print a memzone name.
Fixes: 242e18c056 ("net/octeontx_ep: add Rx queue setup and release")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Enable PTP offload in the vector Tx burst function. Since we can no
longer use a single LMT line for a burst of 4, split the LMT line into
two and transmit twice.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Enable PTP offload in the vector Rx burst function: use the vector path
for processing mbufs and switch to scalar only when extracting the
timestamp.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add a new callback for reading the link status. The PF can read its
link status and forward it to a VF once the VF comes up.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Currently, a link event is only sent to the PF by the AF as soon as the
link comes up, or in case of any physical change in the link. The PF
broadcasts these link events to all its VFs as soon as it receives them.
But no event is sent when a new VF comes up, so it will not have the
link status.
Add support for sending the link status to a VF once it comes up
successfully.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Currently, the LSO formats set up initially are expected to be
compile-time constants and to start from 0.
Change the logic in the slow and fast paths so that LSO format indexes
are determined only at runtime.
Fixes: 3b635472a9 ("net/octeontx2: support TSO offload")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
CN96xx and CN98xx have 4096 and 16384 MCAM entries respectively.
Align the code with these numbers.
Fixes: 092b383418 ("net/octeontx2: add flow init and fini")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for DEV_TX_OFFLOAD_MBUF_FAST_FREE for the inline IPsec path.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
This patch adds multicast filter support for the cn9k and cn10k
platforms.
The CGX DMAC filter table (32 entries) is divided among all LMACs
connected to it, i.e. if a CGX has 4 LMACs then each LMAC can have up to
8 filters, while if it has 1 LMAC then that LMAC can have up to 32
filters.
This filter table is used to install unicast and multicast DMAC address
filters. Unicast filters are installed via the rte_eth_dev_mac_addr_add
API while multicast filters are installed via the
rte_eth_dev_set_mc_addr_list API.
So in total, the number of supported MAC filters equals the number of
DMAC filters plus multicast filters.
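An illustrative application-side view of the two APIs named above (the
MAC addresses and port id are examples only, not part of this patch):

    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static int
    install_dmac_filters(uint16_t port_id)
    {
        struct rte_ether_addr ucast = {{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }};
        struct rte_ether_addr mcast[] = {
            {{ 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 }},
            {{ 0x01, 0x00, 0x5e, 0x00, 0x00, 0x02 }},
        };
        int rc;

        /* Unicast filters go through the MAC address add API... */
        rc = rte_eth_dev_mac_addr_add(port_id, &ucast, 0);
        if (rc != 0)
            return rc;
        /* ...while multicast filters use the mc_addr_list API. */
        return rte_eth_dev_set_mc_addr_list(port_id, mcast, RTE_DIM(mcast));
    }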
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Once the PTP status is changed in hardware, i.e. enabled or disabled,
the change is propagated to the user via a registered callback.
So the corresponding callback is registered to get the PTP status.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Add the initial version of rte_flow support for the cnxk family devices.
Supported rte_flow ops are flow_validate, flow_create, flow_destroy,
flow_flush, flow_query and flow_isolate.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Add support to configure the NPC on device initialization. This involves
reading the MKEX and initializing the necessary data.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Once mbufs are transmitted, they are freed by hardware; no mbufs are
accumulated as pending mbufs.
Hence the operation is a no-op for the cnxk platform.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
CN9K and CN10K support platform-specific mempool ops.
This patch implements the API to validate whether a given mempool ops
is supported or not.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
An application may choose to enable or disable interrupts on Rx queues
so that it can adapt its processing when no packets are available on the
queues for a longer period.
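An illustrative application-side sketch using the generic ethdev API
(the helper, port and queue ids are examples; enabling rxq interrupts in
rte_eth_conf beforehand is assumed):

    #include <rte_ethdev.h>

    static void
    rxq_intr_toggle(uint16_t port_id, uint16_t queue_id, int idle)
    {
        if (idle)
            rte_eth_dev_rx_intr_enable(port_id, queue_id);  /* wait for traffic */
        else
            rte_eth_dev_rx_intr_disable(port_id, queue_id); /* pure polling */
    }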
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Add device start and stop operation callbacks for CN9K and CN10K.
Device stop is common for both platforms, while device start has a
platform-dependent portion where the platform-specific offload flags are
recomputed and the right Rx/Tx burst function is chosen.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add support for packet type lookup on Rx to translate HW-specific types
to RTE_PTYPE_* defines.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add Tx queue setup and release for CN9K and CN10K.
Release is common while setup is platform dependent due to differences
in fast path Tx queue structures.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add Rx queue setup and release op for CN9K and CN10K
SoC. Release is completely common while setup is platform
dependent due to fast path Rx queue structure variation.
Fastpath is platform dependent partly due to core cacheline
size difference.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device configuration op for CN9K and CN10K. Most of the device
configuration is common between the two platforms except for some
supported offloads.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add various devargs command line argument parsing functions supported by
CN9K and CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add platform specific probe and remove callbacks for CN9K
and CN10K which use common probe and remove functions.
Register ethdev driver for CN9K and CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add build infrastructure and common probe and remove for cnxk driver
which is used by both CN10K and CN9K SoC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>