Add the following ptypes to ice_ptypes_mac_ofos:
MAC_IPV4[6]_ESP
MAC_IPV4[6]_AH
MAC_IPV4[6]_NAT_T_ESP
MAC_IPV4[6]_NAT_T_IKE
MAC_IPV4[6]_NAT_T_KEEP
MAC_IPV4[6]_PFCP_NODE
MAC_IPV4[6]_PFCP_SESSION
MAC_IPV4[6]_L2TPV3
These ptypes can then also be selected by a filter when an outer MAC header
is required.
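For illustration only, a hedged sketch (public rte_flow API; the MAC address, port id and queue index are placeholders, and whether a given PMD accepts this exact pattern is not asserted here) of the kind of rule that needs the outer MAC ptypes, matching ESP over IPv4 together with the outer Ethernet header:

  /* Hedged sketch: an ESP-over-IPv4 rule that also matches the outer
   * Ethernet destination MAC; placeholders only. */
  #include <rte_flow.h>

  static struct rte_flow *
  create_esp_outer_mac_rule(uint16_t port_id, struct rte_flow_error *err)
  {
      struct rte_flow_attr attr = { .ingress = 1 };
      struct rte_flow_item_eth eth_spec = {
          .dst.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
      };
      struct rte_flow_item_eth eth_mask = {
          .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
      };
      struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH,
            .spec = &eth_spec, .mask = &eth_mask },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
          { .type = RTE_FLOW_ITEM_TYPE_ESP },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      struct rte_flow_action_queue queue = { .index = 2 };
      struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };

      return rte_flow_create(port_id, &attr, pattern, actions, err);
  }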
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add the NVM Write Admin Command (0x703) ARQ response flags, as
returned in the "Response flags" field.
Three flags are supported: POR, PERST and EMPR. Each indicates the
type of reset required for the NVM bank update to take effect.
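A hypothetical sketch of how such response flags might be consumed; the flag names and bit positions below are illustrative assumptions, not the actual definitions in the ice admin queue headers:

  /* Hypothetical sketch only: names and bit positions are assumptions
   * for illustration, not the real ice definitions. */
  #include <stdint.h>
  #include <stdio.h>

  #define NVM_RSP_FLAG_POR   (1u << 0) /* assumed: power-on reset needed */
  #define NVM_RSP_FLAG_PERST (1u << 1) /* assumed: PCIe reset needed     */
  #define NVM_RSP_FLAG_EMPR  (1u << 2) /* assumed: EMP reset needed      */

  static void
  report_required_reset(uint16_t rsp_flags)
  {
      if (rsp_flags & NVM_RSP_FLAG_POR)
          printf("NVM update needs a power-on reset to take effect\n");
      else if (rsp_flags & NVM_RSP_FLAG_PERST)
          printf("NVM update needs a PERST to take effect\n");
      else if (rsp_flags & NVM_RSP_FLAG_EMPR)
          printf("NVM update needs an EMP reset to take effect\n");
  }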
Signed-off-by: Shay Amir <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add a struct to store the outer part of a tunnel rule.
Add the VXLAN ptype to the IPv4 MAC bitmap, so that when a VXLAN rule
is created the ptype group is valid.
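As an illustration, a hedged validation sketch (public rte_flow API; VNI, port id and queue index are placeholders) of the kind of VXLAN rule this change makes valid:

  /* Hedged sketch: validate an outer ETH/IPv4/UDP/VXLAN rule. */
  #include <rte_flow.h>

  static int
  validate_vxlan_rule(uint16_t port_id, struct rte_flow_error *err)
  {
      struct rte_flow_attr attr = { .ingress = 1 };
      struct rte_flow_item_vxlan vxlan_spec = {
          .vni = { 0x00, 0x00, 0x64 }, /* VNI 100, placeholder */
      };
      struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
          { .type = RTE_FLOW_ITEM_TYPE_UDP },
          { .type = RTE_FLOW_ITEM_TYPE_VXLAN,
            .spec = &vxlan_spec, .mask = &rte_flow_item_vxlan_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      struct rte_flow_action_queue queue = { .index = 1 };
      struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };

      return rte_flow_validate(port_id, &attr, pattern, actions, err);
  }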
Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Implement the 'ice_rss_hash_conf_get' interface to support showing the
RSS hash configuration.
Note:
Only the rss_hf from the latest dev_configure or dev_rss_hash_update
call is returned. All configurations made through rte_flow are ignored.
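A hedged usage sketch of the corresponding generic ethdev call (port id is a placeholder; passing a NULL rss_key means only rss_hf is retrieved):

  /* Hedged sketch: query the RSS hash configuration through the generic
   * ethdev API, which ends up in the PMD's rss_hash_conf_get handler. */
  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_ethdev.h>

  static int
  show_rss_conf(uint16_t port_id)
  {
      struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };
      int ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);

      if (ret == 0)
          printf("port %u rss_hf: 0x%" PRIx64 "\n",
                 (unsigned int)port_id, rss_conf.rss_hf);
      return ret;
  }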
Signed-off-by: Tao Zhu <taox.zhu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
To enhance the per-core performance, this patch adds some AVX512
instructions to the data path to handle the Tx descriptors.
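For illustration only, a minimal sketch (not the actual driver code; the descriptor field layout is simplified) of the kind of AVX512 usage involved, writing four 16-byte Tx descriptors with a single 512-bit store:

  /* Illustrative sketch: build four 16-byte Tx descriptors (address +
   * command/length qwords) and write them with one 64-byte store. */
  #include <stdint.h>
  #include <immintrin.h>

  static inline void
  tx_write_4_descs(void *txd_ring, const uint64_t addr[4],
                   const uint64_t cmd_len[4])
  {
      __m512i descs = _mm512_set_epi64(cmd_len[3], addr[3],
                                       cmd_len[2], addr[2],
                                       cmd_len[1], addr[1],
                                       cmd_len[0], addr[0]);
      _mm512_storeu_si512(txd_ring, descs);
  }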
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
To enhance the per-core performance, this patch adds some AVX512
instructions to the data path to handle the flexible Rx descriptors.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
To enhance the per-core performance, this patch adds some AVX512
instructions to the data path to handle the legacy Rx descriptors.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The initialization that selects the handler for extracting FlexiMD
fields into the mbuf on the scalar Rx path is missing, which causes a
segmentation fault (core dump).
Also add the missing support for RXDID 16, which carries the RSS hash
value in Qword 1.
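A hedged sketch of the pattern involved; the descriptor layout, field position and handler names below are simplified assumptions, not the actual ice definitions:

  /* Hedged sketch with simplified, assumed types; not the ice code. */
  #include <stdint.h>
  #include <stddef.h>

  struct flex_desc {               /* assumed 16-byte flex descriptor */
      uint64_t qword0;
      uint64_t qword1;
  };
  struct pkt_meta { uint32_t rss_hash; uint32_t has_rss; };

  typedef void (*flex_handler_t)(const struct flex_desc *, struct pkt_meta *);

  /* RXDID 16: the RSS hash lives in Qword 1 (exact position assumed). */
  static void
  rxdid16_handler(const struct flex_desc *d, struct pkt_meta *m)
  {
      m->rss_hash = (uint32_t)(d->qword1 >> 32);
      m->has_rss = 1;
  }

  /* The point of the fix: a handler must always be selected for the
   * queue's RXDID at init time, so the scalar Rx path never calls
   * through a NULL pointer. */
  static flex_handler_t
  select_flex_handler(uint32_t rxdid)
  {
      return rxdid == 16 ? rxdid16_handler : NULL /* other RXDIDs */;
  }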
Fixes: 7a340b0b4e03 ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org
Reported-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Update the reading of offload flags for the last two of the four packets.
Fixes: 1162f5a0ef31 ("net/iavf: support flexible Rx descriptor in SSE path")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Update the reading of offload flags for the last two of the four packets.
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Currently, it is not possible to create more than one of the following
flows for the ETH + VLAN pattern:
1. flow create 0 ingress pattern eth / vlan vid is 350 / end
actions queue index 2 / end
2. flow create 0 ingress pattern eth / vlan vid is 351 / end
actions queue index 3 / end
The root cause is that vlan_tci is not set correctly, which makes the
keys of the two flows identical.
Fixes: 42044b69c67d ("net/i40e: support input set selection for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Support checking the status of Rx and Tx descriptors.
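A hedged usage sketch of the generic ethdev calls this enables (port, queue and ring offset are placeholders):

  /* Hedged sketch: poll descriptor status via the generic ethdev API,
   * which now maps to the txgbe descriptor status callbacks. */
  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  probe_descriptor_status(uint16_t port_id, uint16_t queue_id,
                          uint16_t offset)
  {
      int rx = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
      int tx = rte_eth_tx_descriptor_status(port_id, queue_id, offset);

      if (rx == RTE_ETH_RX_DESC_DONE)
          printf("Rx offset %u: a received packet is ready\n", offset);
      if (tx == RTE_ETH_TX_DESC_DONE)
          printf("Tx offset %u: descriptor done, can be reused\n", offset);
  }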
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add rte_eth_dev_info->rx_seg_capa parameters (a hedged query sketch follows the list):
- receiving to multiple pools is supported
- buffer offsets are supported
- no offset alignment requirement
- reports the maximal number of segments
- reports the buffer split offload flag
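A hedged sketch of how an application might read these capability fields; the field names follow the public struct rte_eth_rxseg_capa and should be treated as assumptions:

  /* Hedged sketch: query the Rx buffer-split capabilities reported in
   * rte_eth_dev_info.rx_seg_capa. */
  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  show_rx_seg_capa(uint16_t port_id)
  {
      struct rte_eth_dev_info dev_info;

      if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
          return;

      printf("buffer split: max_nseg=%u multi_pools=%u "
             "offset_allowed=%u offset_align_log2=%u\n",
             (unsigned int)dev_info.rx_seg_capa.max_nseg,
             (unsigned int)dev_info.rx_seg_capa.multi_pools,
             (unsigned int)dev_info.rx_seg_capa.offset_allowed,
             (unsigned int)dev_info.rx_seg_capa.offset_align_log2);
  }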
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Only the regular rx_burst routine is updated to support split,
because the vectorized ones do not support scatter and MPRQ
does not support split at all.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The split feature for receiving packets was added to the mlx5
PMD. An Rx queue can now receive data into buffers belonging
to different pools, and the memory of all the involved pools
must be registered for DMA operations so that hardware can
store the data.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The scatter-gather elements should be configured
accordingly to support the buffer split feature.
The application provides the desired settings for
the segments at the beginning of the packets, and
the PMD pads the buffer chain (if needed) with the
attributes of the last specified segment to
accommodate a packet of maximal length.
Some limitations are implied. The MPRQ feature
should be disengaged if split is requested, since
MPRQ neither supports pushing data to dedicated
pools nor follows the flexible buffer sizes.
The vectorized rx_burst routines do not support
scattering (they are extremely simplified and work
over a single segment only) and cannot handle
split either.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add a routine to provide Rx queue setup with an extended
receiving buffer description.
It allows the application to specify the desired segment
lengths, data position offsets in the buffer
and a dedicated memory pool for each segment.
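A hedged setup sketch; the pools, lengths and offsets are placeholders, and the offload and struct names follow the buffer-split API introduced with this series, so treat them as assumptions:

  /* Hedged sketch: configure a 2-segment buffer split, header bytes to
   * one pool and the rest of the packet to another. */
  #include <rte_ethdev.h>

  static int
  setup_split_rxq(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
                  struct rte_mempool *hdr_pool, struct rte_mempool *pay_pool)
  {
      union rte_eth_rxseg segs[2] = {
          { .split = { .mp = hdr_pool, .length = 128,  .offset = 0 } },
          { .split = { .mp = pay_pool, .length = 2048, .offset = 0 } },
      };
      struct rte_eth_rxconf rxconf = {
          .offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
          .rx_seg = segs,
          .rx_nseg = 2,
      };

      /* mb_pool is NULL: buffers come from the per-segment pools */
      return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                    rte_eth_dev_socket_id(port_id),
                                    &rxconf, NULL);
  }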
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The Tunnel Offload API provides a hardware-independent, unified model
to offload tunneled traffic. Key model elements are:
- apply matches to both outer and inner packet headers
during entire offload procedure;
- restore outer header of partially offloaded packet;
- model is implemented as a set of helper functions.
Implementation details:
* The tunnel_offload PMD parameter must be set to 1 to enable the feature.
* The application cannot use MARK and META flow actions with tunnels.
* The offload JUMP action is restricted to steering the tunnel rule only
  (see the hedged usage sketch below).
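A hedged sketch of the helper-function flow on the application side; the tunnel description is minimal, error handling is omitted, and the call signatures follow the rte_flow tunnel offload API, so treat the details as assumptions:

  /* Hedged sketch: obtain PMD-provided actions/items for a VXLAN tunnel.
   * The actions are prepended to the tunnel-steering (decap set) rule,
   * the items are prepended to the inner rules' patterns. */
  #include <rte_flow.h>

  static int
  get_tunnel_helpers(uint16_t port_id,
                     struct rte_flow_action **pmd_actions,
                     uint32_t *n_pmd_actions,
                     struct rte_flow_item **pmd_items,
                     uint32_t *n_pmd_items,
                     struct rte_flow_error *err)
  {
      struct rte_flow_tunnel tunnel = {
          .type = RTE_FLOW_ITEM_TYPE_VXLAN,
      };
      int ret;

      ret = rte_flow_tunnel_decap_set(port_id, &tunnel, pmd_actions,
                                      n_pmd_actions, err);
      if (ret)
          return ret;
      return rte_flow_tunnel_match(port_id, &tunnel, pmd_items,
                                   n_pmd_items, err);
  }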
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Implement shared action create/destroy/update/query. The current
implementation is limited to the shared RSS action only. The shared
RSS action create operation prepares hash Rx queue objects for all
supported permutations of the hash. The shared RSS action update
operation relies on the functionality to modify a hash Rx queue
introduced in one of the previous commits in this patch series.
Implement the RSS shared action and handle shared RSS on flow apply
and release. When handling a shared RSS action, the lookup for the
hash Rx queue object is limited to the set of objects stored in the
shared action itself, and the lookup inside the shared action is
performed by hash only.
The current implementation is limited to DV flow driver operations,
i.e. verbs flow driver operations do not support shared actions.
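A hedged sketch of how an application would create and reference such a shared RSS action through the generic API; the queue list, hash types, port id and pattern are placeholders:

  /* Hedged sketch: create a shared RSS action once and reference it
   * from a flow rule. */
  #include <rte_flow.h>

  static struct rte_flow *
  create_flow_with_shared_rss(uint16_t port_id, struct rte_flow_error *err)
  {
      static uint16_t queues[] = { 0, 1, 2, 3 };
      struct rte_flow_action_rss rss = {
          .types = ETH_RSS_IP, /* RTE_ETH_RSS_IP in newer DPDK */
          .queue_num = 4,
          .queue = queues,
      };
      struct rte_flow_action rss_action = {
          .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss,
      };
      struct rte_flow_shared_action_conf conf = { .ingress = 1 };
      struct rte_flow_shared_action *shared =
          rte_flow_shared_action_create(port_id, &conf, &rss_action, err);

      if (shared == NULL)
          return NULL;

      struct rte_flow_attr attr = { .ingress = 1 };
      struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_SHARED, .conf = shared },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };

      return rte_flow_create(port_id, &attr, pattern, actions, err);
  }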
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Handle shared actions on flow validation/creation/destruction.
The mlx5 PMD translates a shared action into a regular one before
handling flow validation/creation. The shared action translation is
applied to utilize the same execution path for both shared and regular
actions.
The current implementation supports shared action translation for the
shared RSS action only.
RSS action validation is split so that the shared RSS action is
validated on its creation, in addition to the action validation in the
flow validation/creation path.
Implement rte_flow shared action API for mlx5 PMD, mostly forwarding
calls to flow driver operations (see struct mlx5_flow_driver_ops).
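A hedged sketch of reconfiguring such an action in place through the same API; the new queue set is a placeholder:

  /* Hedged sketch: switch the shared RSS action to a new queue set;
   * flows referencing it pick up the change without being recreated. */
  #include <rte_flow.h>

  static int
  retarget_shared_rss(uint16_t port_id,
                      struct rte_flow_shared_action *shared,
                      struct rte_flow_error *err)
  {
      static uint16_t new_queues[] = { 4, 5, 6, 7 };
      struct rte_flow_action_rss rss = {
          .types = ETH_RSS_IP, /* RTE_ETH_RSS_IP in newer DPDK */
          .queue_num = 4,
          .queue = new_queues,
      };
      struct rte_flow_action update = {
          .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss,
      };

      return rte_flow_shared_action_update(port_id, shared, &update, err);
  }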
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Implement modification of the hash Rx queue object (see
mlx5_hrxq_modify()). This implementation relies on the capability to
modify the TIR object via the DevX API, i.e. the current implementation
doesn't support verbs HW object operations. The functionality to modify
the hash Rx queue object is a prerequisite for implementing
rte_flow_shared_action_update() for the shared RSS action in the mlx5 PMD.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Make SIMD initialization code less verbose by using appropriate
intrinsics when all lanes of a vector are initialized to the
same value.
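For illustration, the kind of simplification meant (a generic example, not the actual bnxt code):

  /* Generic illustration: initialize all lanes to the same value with
   * set1 instead of listing every lane. */
  #include <emmintrin.h>

  static inline __m128i
  broadcast_mask(void)
  {
      /* verbose form: _mm_set_epi32(0xFF, 0xFF, 0xFF, 0xFF) */
      return _mm_set1_epi32(0xFF);
  }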
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Just like iavf, setting the value to 2us allows for generally good
streaming packet performance while keeping latency down, and
generally keeps the performance of the PF and VF interfaces similar.
The previous value of 0x10 made the latency of a single packet
receive as much as 16us.
Fixes: 65dfc889d86b ("net/ice: support Rx queue interruption")
Cc: stable@dpdk.org
Reported-by: Brian Johnson <brian.johnson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The iavf driver was trying to use writeback on ITR, but was
never setting an ITR, so it didn't work. This caused performance
to be limited due to too much PCIe traffic and partial writes
during most benchmarking workloads.
Set the ITR during queue setup, which can be checked at runtime
by reading register 0x2800. Setting the value to 2us allows
for generally good streaming packet performance while keeping
latency down.
Fixes: d6bde6b5eae9 ("net/avf: enable Rx interrupt")
Cc: stable@dpdk.org
Reported-by: Brian Johnson <brian.johnson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add RSS configuration, supporting RSS hash and RETA operations for the PF.
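A hedged sketch of exercising the new RETA support through the generic ethdev API; the queue mapping is a placeholder and the table size is taken from the device capabilities:

  /* Hedged sketch: spread the RSS redirection table over the first four
   * Rx queues. */
  #include <rte_ethdev.h>

  static int
  spread_reta(uint16_t port_id)
  {
      /* 64 entries per rte_eth_rss_reta_entry64 group, room for 512 */
      struct rte_eth_rss_reta_entry64 reta[8] = { 0 };
      struct rte_eth_dev_info dev_info;
      uint16_t i;

      if (rte_eth_dev_info_get(port_id, &dev_info) != 0 ||
          dev_info.reta_size == 0 || dev_info.reta_size > 8 * 64)
          return -1;

      for (i = 0; i < dev_info.reta_size; i++) {
          reta[i / 64].mask |= 1ULL << (i % 64);
          reta[i / 64].reta[i % 64] = i % 4;
      }
      return rte_eth_dev_rss_reta_update(port_id, reta, dev_info.reta_size);
  }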
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add check operations for VF function-level reset, mailbox messages
and ACKs from the VF.
Wait to process the messages.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>