Update the base rule install mechanism: if the action type is IPsec
and multi-channel is set, the base rule will not be merged.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Satheesh Paul <psatheesh@marvell.com>
In vector mode, the DF bit is not programmed correctly because the
return value of vsetq_lane_u64(), which actually contains the updated
value, is ignored, leading the HW to free mbufs even though the
NIX_TX_OFFLOAD_MBUF_NOFF_F flag is set.
Hence, save the return value of vsetq_lane_u64() appropriately so
that the DF bit is programmed correctly.
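A minimal sketch of the pattern being fixed (illustrative only, not
the actual cnxk Tx code):

    #include <arm_neon.h>

    /* vsetq_lane_u64() returns the updated vector rather than
     * modifying its argument in place, so the result must be
     * assigned back or the lane update is silently lost.
     */
    static uint64x2_t
    set_send_word0(uint64x2_t senddesc, uint64_t word0_with_df)
    {
        /* Buggy: return value discarded, senddesc unchanged:
         *   vsetq_lane_u64(word0_with_df, senddesc, 0);
         */
        senddesc = vsetq_lane_u64(word0_with_df, senddesc, 0);
        return senddesc;
    }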
Fixes: 862e281287 ("net/cnxk: add vector Tx for CN9K")
Fixes: f71b7dbbf0 ("net/cnxk: add vector Tx for CN10K")
Cc: stable@dpdk.org
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
While configuring the NIX, the local variables 'nb_rxq' and 'nb_txq'
are declared as 8-bit variables, leading to an integer overflow when
an application requests an Rxq/Txq count greater than 255.
Hence, declare the local variables 'nb_rxq' and 'nb_txq' as 16-bit
variables. Also, during the cleanup, make sure the PFC tree is not
created.
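A self-contained illustration of the wrap-around (the queue count of
256 is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t requested = 256;        /* > 255 */
        uint8_t  nb_rxq_8  = requested;  /* wraps to 0 */
        uint16_t nb_rxq_16 = requested;  /* holds 256 */

        printf("8-bit: %u, 16-bit: %u\n", nb_rxq_8, nb_rxq_16);
        return 0;
    }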
Fixes: b75e0aca84 ("net/cnxk: add device configuration operation")
Cc: stable@dpdk.org
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
If the data_len in an mbuf is less than 17 bytes or greater than the
maximum frame size, the packet is illegal. Such illegal packets lead
to a Tx/Rx hang that cannot recover automatically.
This patch checks for those illegal packets and protects Tx/Rx from
hanging.
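A hedged sketch of the kind of check described (the 17-byte minimum
comes from the message; the function name and 'max_frame' parameter
are illustrative):

    #include <stdbool.h>
    #include <rte_mbuf.h>

    #define TX_MIN_PKT_LEN 17 /* minimum legal data_len */

    static bool
    pkt_len_is_valid(const struct rte_mbuf *m, uint16_t max_frame)
    {
        return m->data_len >= TX_MIN_PKT_LEN &&
               m->data_len <= max_frame;
    }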
Fixes: 17c7d0f9d6 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
If the data_len in an mbuf is less than 17 bytes or greater than the
maximum frame size, the packet is illegal. Such illegal packets lead
to a Tx/Rx hang that cannot recover automatically.
This patch checks for those illegal packets and protects Tx/Rx from
hanging.
Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Cc: stable@dpdk.org
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
If a meter's profile is changed after the meter is disabled, deleting
the old profile later fails.
This patch fixes this by correctly decreasing the old profile's
reference count when changing the profile.
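A minimal sketch of the idea (structure and names are hypothetical,
not the mlx5 code):

    #include <stddef.h>
    #include <stdint.h>

    struct profile { uint32_t ref_cnt; };

    /* Drop the reference on the old profile when switching, so it
     * can be deleted later once unused.
     */
    static void
    meter_switch_profile(struct profile **cur, struct profile *next)
    {
        if (*cur != NULL)
            (*cur)->ref_cnt--; /* the decrement the patch adds */
        next->ref_cnt++;
        *cur = next;
    }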
Fixes: 63ffeb2ff2 ("net/mlx5: support meter profile update")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, when a flow uses a meter hierarchy, a tag action is always
applied to set the first meter's meter ID, so as to update the first
meter's drop count. But the case where the first meter has no drop
count is not considered.
This patch fixes it: in a hierarchy, if the first meter doesn't have
a drop count, there is no need to add the meter ID tag action. No
change for non-hierarchy meters.
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If there is no param in the represented_port item, it is treated as
matching all ports by default. But there is a limitation when using
it with a meter hierarchy.
This patch adds the limitation that when matching all ports, the
meter hierarchy should not contain any meter having a drop count.
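For illustration, a match-all represented_port item in an rte_flow
pattern looks like this sketch; such a rule is the case the
limitation applies to:

    #include <rte_flow.h>

    /* No spec/mask means the item matches all ports. */
    static const struct rte_flow_item match_all_ports = {
        .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
        .spec = NULL,
        .mask = NULL,
    };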
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
There is a new item type, represented_port, and currently flows using
a meter hierarchy together with a represented_port match fail.
This patch fixes the failure by adding support for the
represented_port item in the meter hierarchy flow split.
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The driver splits a flow with a sample action into two sub-flows: a
prefix sub-flow and a suffix sub-flow.
In the case of a tunnel flow including a decap action, the driver
should translate the inner layers as outer for actions coming after
the decap action.
When the flow is split, the packet layers used to detect the
attributes are inherited from the prefix flow by the suffix flow, but
the driver wrongly did not handle the decap adjustment, and the inner
layers did not shift to the outer.
This patch adjusts the inherited layers in case of decap.
Fixes: 6e77151286 ("net/mlx5: fix match information in meter")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If the hardware descriptor (WQE) length exceeds the one the HW can
handle, a Tx queue failure occurs. The PMD does a length check, but
there was a bug: the length limit was expressed in 16B units (WQEBB
segments), while the calculated WQE length was in 64B units (WQEBBs).
Fix the condition to avoid the subsequent Tx queue failure.
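A hedged sketch of keeping the comparison in one unit (the 16B/64B
sizes match the message; macro and function names are assumptions):

    #include <stdbool.h>

    #define WSEG_SIZE  16u /* WQE segment size in bytes */
    #define WQEBB_SIZE 64u /* WQE basic block size in bytes */

    /* Convert the segment count to WQEBBs before comparing against
     * a limit counted in WQEBBs; mixing the two units lets
     * oversized descriptors slip through.
     */
    static bool
    wqe_fits(unsigned int nb_segs, unsigned int max_wqebbs)
    {
        unsigned int wqebbs =
            (nb_segs * WSEG_SIZE + WQEBB_SIZE - 1) / WQEBB_SIZE;
        return wqebbs <= max_wqebbs;
    }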
Fixes: 18a1c20 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The hardware descriptor (WQE) length field is 6 bits wide, giving a
native limitation on the overall descriptor length. To improve PCIe
bandwidth, packet data can be inlined into the descriptor. If the PMD
was configured to inline a large amount of data, it could happen that
not enough space remained in the descriptor to specify all the packet
data segments, and the PMD rejected the problematic packets.
The patch conservatively adjusts the inline data length to avoid this
error.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Fixes: e2259f93ef ("net/mlx5: fix Tx when inlining is impossible")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
The mlx5 PMD can inline packet data into the transmit descriptor
(WQE) and free the mbuf immediately, as the data is no longer needed;
for non-inline packets, the mbuf pointer should be stored in the elts
array for later freeing on send completion. There was an optimization
for storing pointers in batches, and storing the mbuf was missed for
a single packet when non-inline was explicitly requested by flag.
Fixes: cacb44a099 ("net/mlx5: add no-inline Tx flag")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5 PMD supports the send scheduling feature, which allows
sending packets at a specified moment in time. To do that, the PMD
pushes a special wait descriptor (WQE) to the hardware queue and then
pushes the descriptor for the packet data as usual.
If the queue is almost full, or there are not enough elts buffers to
store the mbufs being sent, the data descriptors might not be pushed,
and the orphan wait WQE (not followed by the data) might remain in
the queue when the tx_burst routine exits.
To avoid orphan wait WQEs, there was a check for enough free space in
the queue WQE buffer and a sufficient number of free elts in the
queue mbuf storage. This check was incomplete and did not cover all
the cases for Enhanced Multi-Packet Write descriptors.
Fixes: 2f827f5ea6 ("net/mlx5: support scheduling on send routine template")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
ESP is one of the IPsec protocols over both IPv4 and IPv6 and is
considered a tunnel layer that cannot be followed by any other layer.
Taking that into consideration, ESP is treated as a layer 4 item.
Not defining ESP's priority makes it match with the same priority as
the preceding IP layer, which has layer 3 priority. This leads to
issues in matching: a packet matches the first matching rule even if
it doesn't have an ESP layer in its pattern, disregarding any
following rules that could have an ESP item and would actually be a
more accurate match, since they have a longer matching criterion.
This is fixed by defining the priority of the ESP item as layer 4
priority, making the match go to the rule with the more accurate and
longer matching criteria.
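As an illustration, with layer 4 priority on the ESP item, a rule
with the longer pattern below is preferred for ESP traffic over a
bare ETH/IPv4 rule:

    #include <rte_flow.h>

    static const struct rte_flow_item esp_pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_ESP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };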
Fixes: 18ca4a4ec7 ("net/mlx5: support ESP SPI match and RSS hash")
Cc: stable@dpdk.org
Signed-off-by: Bassam Zaid AlKilani <bzalkilani@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
One of the conditions to allow LRO offload is the DV configuration.
The function incorrectly checked the DV configuration before it is
initialized from the user devargs; hence, LRO could never be allowed.
This patch moves the check to mlx5_shared_dev_ctx_args_config, where
the DV configuration is initialized.
Fixes: c4b8620135 ("net/mlx5: refactor to detect operation by DevX")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reported-by: Gal Shalom <galshalom@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the PMD logs all DevX errors at error level.
The DevX interface can fail queue counter allocation on some hardware
types; that is a known issue, and the PMD falls back to the Verbs API
to allocate queue counters when it detects the failure.
Such a DevX failure should therefore not be logged as a PMD error.
The patch provides DevX with a flexible API that selects the log
level.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
FDs passed from rte_mp_msg are duplicated to the secondary process and
need to be closed.
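A hedged sketch of the cleanup (the fds/num_fds fields come from the
public struct rte_mp_msg; the helper name is illustrative):

    #include <unistd.h>
    #include <rte_eal.h>

    /* Close the descriptors attached to an IPC message once they
     * have been consumed; they were duplicated into this process
     * when the message was received.
     */
    static void
    close_mp_fds(struct rte_mp_msg *msg)
    {
        for (int i = 0; i < msg->num_fds; i++)
            close(msg->fds[i]);
    }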
Fixes: 9a8ab29b84 ("net/mlx5: replace IPC socket with EAL API")
Cc: stable@dpdk.org
Signed-off-by: Long Li <longli@microsoft.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
FDs passed from rte_mp_msg are duplicated to the secondary process and
need to be closed.
Fixes: 0203d33a10 ("net/mlx4: support secondary process")
Cc: stable@dpdk.org
Signed-off-by: Long Li <longli@microsoft.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In the case of higher-order (greater than 99) logical cores, the
thread name was truncated (the length is restricted to 16 characters,
including the terminating null byte ('\0')), which makes threads hard
to follow.
Before this fix, the issue could be reproduced using the following
arguments:
--lcores=0,10@1,100@2
Then we had:
lcore-worker-10
lcore-worker-10
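The collision can be reproduced in isolation (a demo of the 16-byte
limit, not the EAL code):

    #include <stdio.h>

    int main(void)
    {
        char name[16]; /* pthread name limit, incl. '\0' */

        snprintf(name, sizeof(name), "lcore-worker-%d", 100);
        printf("%s\n", name); /* prints "lcore-worker-10" */
        return 0;
    }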
Signed-off-by: Abdullah Ömer Yamaç <omer.yamac@ceng.metu.edu.tr>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Checking a const pointer for alignment would emit a warning about the
const qualifier being discarded.
No need to calculate the aligned pointer; just check the last bits of the
pointer.
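A minimal sketch of the approach (the helper name is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Casting to uintptr_t keeps the const qualifier intact; the
     * pointer is aligned when its low bits are zero (align must be
     * a power of two).
     */
    static inline bool
    is_aligned(const void *ptr, uintptr_t align)
    {
        return ((uintptr_t)ptr & (align - 1)) == 0;
    }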
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The rte_mov256 function was missing for AVX2.
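A hedged sketch of what an AVX2 256-byte copy looks like (DPDK's
actual rte_mov256 may be structured differently):

    #include <immintrin.h>
    #include <stdint.h>

    /* Copy 256 bytes as eight unaligned 32-byte AVX2 loads/stores. */
    static inline void
    mov256_avx2(uint8_t *dst, const uint8_t *src)
    {
        for (int i = 0; i < 256; i += 32) {
            __m256i v =
                _mm256_loadu_si256((const __m256i *)(src + i));
            _mm256_storeu_si256((__m256i *)(dst + i), v);
        }
    }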
Fixes: 9144d6bcde ("eal/x86: optimize memcpy for SSE and AVX")
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In async_enqueue_pkts(), the failed packets were freed before
returning, but since the failed packets may be retried later, this
caused a use-after-free. So, free the failed packets only after the
retry.
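A hedged sketch of the corrected ordering (names are illustrative,
not the example's actual code):

    #include <rte_mbuf.h>

    /* Stand-in for the example's async enqueue call. */
    extern uint16_t do_enqueue(struct rte_mbuf **pkts, uint16_t n);

    static void
    enqueue_with_retry(struct rte_mbuf **pkts, uint16_t n, int retries)
    {
        uint16_t enq = do_enqueue(pkts, n);

        /* Retry first: the failed mbufs must still be valid here. */
        while (enq < n && retries-- > 0)
            enq += do_enqueue(pkts + enq, n - enq);

        /* Free whatever is still unsent, only after all retries. */
        rte_pktmbuf_free_bulk(pkts + enq, n - enq);
    }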
Fixes: 1907ce4bae ("examples/vhost: fix retry logic on Rx path")
Cc: stable@dpdk.org
Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
If an application does not request IOMMU support, we can avoid
allocating an IOTLB pool.
This saves 112kB (IOTLB_CACHE_SIZE * sizeof(struct vhost_iotlb_entry))
per vq.
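For reference, the figure works out as follows, assuming
IOTLB_CACHE_SIZE is 2048 and each entry is 56 bytes (both assumptions
chosen to reproduce the stated saving):

    /* 2048 entries * 56 B per struct vhost_iotlb_entry
     *   = 114688 B = 112 kB saved per virtqueue */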
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Currently, in the function vhost_user_msg_handler, the variable ret
is used to store both the vhost message result code and function call
return values.
After this patch, ret is used only to store function call return
values, and a new dedicated variable, msg_result, stores the vhost
message result. This improves readability.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
When the first attempt to configure a device with Rx interrupt
enabled fails for some reason (e.g. because "Multiple intr vector not
supported"), a second attempt to configure the device with Rx
interrupt disabled and the feature set unchanged will succeed, but
will leave the virtio queues unallocated. Accessing the queues then
causes a segfault.
First attempt:
  - virtio_dev_configure()
    - virtio_init_device() is called to reinit the device because
      "dev->data->dev_conf.intr_conf.rxq" is "1"
    - virtio_configure_intr() fails and returns an error
    - virtio_free_queues() frees the previously allocated virtio
      queues
    - virtio_init_device() fails and returns an error
  - virtio_dev_configure() fails and returns an error
Second attempt:
  - virtio_dev_configure()
    - This time virtio_init_device() is not called, so the virtio
      queues are not allocated
With this fix, the device is reinitialized during configuration if
the virtio queues are not allocated.
Fixes: 2b38151f74 ("net/virtio: fix queue memory leak on error")
Cc: stable@dpdk.org
Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Those helpers have been marked as deprecated for a long time and have
documented equivalents.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch enables inner and outer Tx checksum offload for tunnel
packets on the scalar path by configuring ol_flags.
Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When receiving tunneled packets, the testpmd output log always shows
the 'ol_flags' value as 'RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN', while
the expected value is 'RX_OUTER_L4_CKSUM_GOOD' or
'RX_OUTER_L4_CKSUM_BAD'.
Add 'RX_OUTER_L4_CKSUM_GOOD' and 'RX_OUTER_L4_CKSUM_BAD' to 'flags'
for the normal path, to 'l3_l4_flags_shuf' for the AVX2 and AVX512
vector paths, and to 'cksum_flags' for the SSE vector path, so that
'ol_flags' matches the correct flags.
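From the application side, the outer L4 status can then be read as in
this hedged sketch (the helper name is illustrative):

    #include <rte_mbuf.h>

    static const char *
    outer_l4_status(const struct rte_mbuf *m)
    {
        switch (m->ol_flags & RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK) {
        case RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD:
            return "good";
        case RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD:
            return "bad";
        case RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID:
            return "invalid";
        default:
            return "unknown"; /* what the bug always reported */
        }
    }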
Fixes: b8b4c54ef9 ("net/iavf: support flexible Rx descriptor in normal path")
Fixes: 1162f5a0ef ("net/iavf: support flexible Rx descriptor in SSE path")
Fixes: 5b6e885908 ("net/iavf: support flexible Rx descriptor in AVX path")
Fixes: 9c9aa00403 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
Cc: stable@dpdk.org
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a known issue: configuring VLAN filters from a VF is unsupported
with i40e driver 2.17.15.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The SSE Rx path does not process the VLAN TCI correctly when it is
stored in L2TAG2, so the VLAN TCI could not be extracted from the
descriptor and was not put into the mbuf either.
Add processing for the case where the VLAN TCI is stored in L2TAG2.
Fixes: 1162f5a0ef ("net/iavf: support flexible Rx descriptor in SSE path")
Cc: stable@dpdk.org
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Because of a base code update, the "ice_ptp_init_phc" API for
E810/E822 depends on the PHY configuration, not on whether the device
is E810 based. So before this API is called, assign the specific
value to the PHY config.
Fixes: 646dcbe6c7 ("net/ice: support IEEE 1588 PTP")
Cc: stable@dpdk.org
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Move all VF-related limitations and known issues from i40e.rst to
intel_vf.rst; as i40evf has been removed from i40e, i40e.rst should
only cover the PF's information.
The patch also fixes a couple of typos and refines the wording to be
more accurate.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Update the copyright to 2022 and the base code version.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Expose ice_aq_move_sched_elems to support moving scheduler elements
via AQ command.
Signed-off-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The PTP HW semaphore can be held for ~50 ms in the worst case.
SW should wait longer and check more frequently whether the HW lock
is held.
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add the GTP tunnel type and reorder the code to align with the kernel
driver.
Signed-off-by: Marcin Szycik <marcin.szycik@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Remove unnecessary fields from the data structures for 1588 and QoS
function capabilities.
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Use bitfields in 1588 structs so they don't waste too much space.
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
ICE_SW_LKUP_DFLT is handled in ice_update_vsi_list_rule and
ice_aq_alloc_free_vsi_list.
Signed-off-by: Lukasz Kupczak <lukasz.kupczak@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The LLDP MIB Change Event (opcode 0x0A01) already contains the
changed MIB. Add the ice_dcb_process_lldp_set_mib_change() function,
which sets the local/remote DCBX config from the LLDP MIB Change
Event's buffer.
This function will be used in a base driver handler for the LLDP MIB
Change Event.
Signed-off-by: Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The bit that is overloaded is bit 11 in the flex descriptor; update
the comment to reflect the right one.
Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The admin queue command for AQ shutdown contains a flag to indicate
driver unload. However, the flag was always set by the driver, even
for resets. This caused firmware to consider the driver unloaded once
a PF reset was triggered on all ports of the device. Firmware then
restored the default configuration of some features, such as Tx
balancing, which is not the expected behavior.
This patch adds an additional function parameter to indicate driver
unload.
Signed-off-by: Piotr Gardocki <piotrx.gardocki@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
For GTPoGRE, when setting the prot_id of 'prot', it should be set to
the second inner.
Fixes: 34a0e7c44f ("net/ice/base: improve flow director masking")
Cc: stable@dpdk.org
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add a new tunnel type to support NvGRE.
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add L2TPv2 (including PPP over L2TPv2) support for FDIR, and add FDIR
support for PPPoL2TPv2oUDP with inner IPv4/IPv6/UDP/TCP.
The supported L2TPv2 packet types are defined below:
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_CONTROL
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP_IPV4
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP_IPV4_UDP
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP_IPV4_TCP
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP_IPV6
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP_IPV6_UDP
ICE_FLTR_PTYPE_NONF_IPV4_L2TPV2_PPP_IPV6_TCP
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_CONTROL
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP_IPV4
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP_IPV4_UDP
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP_IPV4_TCP
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP_IPV6
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP_IPV6_UDP
ICE_FLTR_PTYPE_NONF_IPV6_L2TPV2_PPP_IPV6_TCP
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add L2TPv2 session ID field support for RSS.
Enable L2TPv2 non-tunneled packet types for UDP protocol header
bitmaps.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>