E-switch tables one and above provide higher insertion rate
than table zero, as well as enhanced functionality.
This patch adds a mechanism to utilize these advantages, by creating
a default rule on port start, which directs all packets from e-switch
table zero to table one.
Other flow rules, requested for group n, will be created in
e-switch table n+1.
A jump action to e-switch group n will be created as a jump to table n+1.
Utility function mlx5_flow_group_to_table() is added to translate the
rte_flow group value to HW table value, and is called by PMD flow
engine on flow rule validation and creation.
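For illustration, a minimal sketch of the mapping described above, assuming a simplified helper (the actual PMD routine takes additional parameters and reports errors through rte_flow_error; the name below is illustrative):

    static int
    example_group_to_table(bool transfer, uint32_t group, uint32_t *table)
    {
            /* E-switch (transfer) rules: group n lands in HW table n + 1.
             * Table 0 only holds the default jump rule installed on port start. */
            if (transfer) {
                    if (group == UINT32_MAX)
                            return -EINVAL; /* would overflow the table id */
                    *table = group + 1;
            } else {
                    *table = group;
            }
            return 0;
    }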
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When an unsupported AQ message is received, it is treated as an
error. This is not appropriate and triggers too many unnecessary log prints.
This commit is similar to
commit e130425300 ("net/i40e: downgrade unnecessary error log")
which made the same change for the PF instance.
Fixes: ae19955e7c ("i40evf: support reporting PF reset")
Cc: stable@dpdk.org
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Use rte_cio_wmb instead of rte_wmb when writing the Tx descriptor, since it
is CIO memory.
Replace rte_io_wmb and E1000_PCI_REG_WRITE_RELAXED with
E1000_PCI_REG_WRITE, since the latter includes rte_io_wmb internally, which
is clearer.
Fixes: 1fc9701238 ("net/e1000: fix i219 hang on reset/close")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The MAC types of i219 are e1000_pch_spt and e1000_pch_cnp; correct the
MAC type check when flushing the i219 descriptor rings.
Fixes: 1fc9701238 ("net/e1000: fix i219 hang on reset/close")
Cc: stable@dpdk.org
Reported-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The mlx5 PMD uses a Netlink socket to communicate with the Infiniband
devices' kernel drivers to perform some control and setup operations.
The kernel drivers send the information back to user mode
with Netlink messages, which are processed in a libnl callback routine.
This routine processes the reply message (or set of messages)
and returns the processing result in the ibindex field of the provided
context structure (of type mlx5_nl_ifindex_data). A zero ibindex
value meant an error in reply message processing. It was found that in
some configurations zero is a valid value for ibindex and the error
was wrongly raised. To avoid this, a new flags field is provided
in the context structure, attribute processing flags are introduced,
and these flags are used to decide whether no error occurred and
valid queried values are returned.
Fixes: e505508a38 ("net/mlx5: modify get ifindex routine for multiport IB")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch implements ethdev operations get_module_info and
get_module_eeprom, to support ethtool commands ETHTOOL_GMODULEINFO
and ETHTOOL_GMODULEEEPROM.
New functions mlx5_get_module_info() and mlx5_get_module_eeprom()
are added in mlx5_ethdev.c.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for modifying the VID of the outermost VLAN
header already present in the packet.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for modifying the VLAN ID (VID) field
in an about-to-be-pushed VLAN header.
This feature can only modify the VID field of a new VLAN header yet
to be pushed. It does not support modifying an existing or already-pushed
VLAN header.
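For illustration, a hedged sketch of how the action could appear in an rte_flow action list together with a push-VLAN action (the VID, queue index, and variable names are illustrative):

    struct rte_flow_action_of_push_vlan push_vlan = {
            .ethertype = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN),
    };
    struct rte_flow_action_of_set_vlan_vid set_vid = {
            .vlan_vid = rte_cpu_to_be_16(100),   /* VID of the header to be pushed */
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push_vlan },
            { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &set_vid },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };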
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for modifying the VLAN priority (PCP) field
in an about-to-be-pushed VLAN header.
This feature can only modify the PCP field of a new VLAN header yet
to be pushed. It does not support modifying an existing or already-pushed
VLAN header.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN using
direct verbs flow rules.
If a VLAN item is present in the flow, the VLAN default values are taken
from the VLAN item configuration.
In this commit only the VLAN TPID value can be set since VLAN
modification actions are not supported yet.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for RTE_FLOW_ACTION_TYPE_OF_POP_VLAN via
direct verbs flow rules.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds the mlx5dv VLAN push and pop commands to mlx5_glue
interface.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds a helper routine that supports searching for a
specific action in a list of actions.
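A hedged sketch of such a lookup (the helper name is illustrative, not the exact routine added by this commit); it relies on the rte_flow convention that an action list is terminated by RTE_FLOW_ACTION_TYPE_END:

    static const struct rte_flow_action *
    example_find_action(const struct rte_flow_action *actions,
                        enum rte_flow_action_type type)
    {
            for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++)
                    if (actions->type == type)
                            return actions;     /* first match wins */
            return NULL;                        /* action not present */
    }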
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
If the VLAN tag insertion transmit offload is engaged
(DEV_TX_OFFLOAD_VLAN_INSERT is set in the Tx queue configuration),
the transmit descriptor may be built with a wrong format, because
the packet length is not adjusted. Also, the ring buffer wrap-around
is not handled correctly.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The dpaa event device supports attaching both Ethernet and
crypto queues. The eth_rx_adapter provides the infrastructure
to attach Ethernet queues, and the crypto_adapter provides
support for crypto queues.
This patch adds support for dpaa_eventdev to attach
dpaa_sec queues.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
dpaa_sec hw queues can be attached to a hw dpaa event
device and the application can configure the event
crypto adapter to access the dpaa_sec packets using
hardware events.
This patch defines APIs which can be used by the
dpaa event device to attach/detach dpaa_sec queues.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
'cpt_logtype' & 'otx_cryptodev_driver_id' global variables are defined
in a header file which was causing multiple definitions of the
variables. Fixed it by moving the required vars to the .c file and
introducing a new macro so the CPT_LOG macros in common/cpt would use
the associated PMD log var.
Issue has been detected by '-fno-common' gcc flag.
Fixes: bfe2ae495e ("crypto/octeontx: add PMD skeleton")
Cc: stable@dpdk.org
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
It is better to allocate device private data on the same NUMA node as the
device, rather than on the main thread's node. This helps avoid cross-NUMA
accesses from worker threads.
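A minimal sketch of the pattern, assuming the device exposes its NUMA node through its rte_device (the allocation tag and variable names are illustrative):

    priv = rte_zmalloc_socket("device_private", sizeof(*priv),
                              RTE_CACHE_LINE_SIZE,
                              dev->device.numa_node);   /* device's node, not caller's */
    if (priv == NULL)
            return -ENOMEM;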
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Jay Zhou <jianjay.zhou@huawei.com>
To allow shared library builds of e.g. test-bbdev app, we need to export
the configure function. Since this needs to be exported as experimental by
default, we update the header file to add the experimental tag there too.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Now that all driver names follow a consistent pattern, remove the override
of the name in each driver which adds the prefix. Instead we can just add
the prefix at a higher level.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The fpga_lte_fec is the only bbdev driver that does not use bbdev in the
name, so modify it to keep consistency with the other bbdev drivers. This
will then allow later simplification due to all drivers using the same
basic naming format.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
For baseband drivers, the macros used to indicate the presence of a
particular driver were subtly different from those used in make. The make
values had "PMD" before the individual driver name, while in meson it came
afterwards. Update meson to put the "PMD" part first.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch improves the performance of AES GCM by using
the Single Pass Crypto Request functionality when running
on GEN3 QAT. It falls back to the classic 2-pass mode on older
hardware.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch adds a few definitions specific to GEN3 QAT.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
With the latest firmware, there are a few changes for ZUC and SNOW 3G.
1. The iv_source is present in bitfield 7 of the minor opcode. In the
old firmware this was present in bitfield 6.
2. The algorithm type is a 2-bit field in the new firmware. In the old
firmware it was named cipher type and was a 1-bit field.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Unnecessary debug logs in the data path are removed,
and hardware debug logs are compiled out.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
In cases where a single cryptodev is used by multiple cores
through multiple queues, there will be contention for mempool
resources, which may eventually be exhausted.
Basically, a mempool should be defined per core.
Since a queue pair is now used per core, mempools are defined in the
queue pair setup.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch allocates/cleans the SEC context dynamically,
based on the number of SG entries in the buffer.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
DPAA2_SEC hardware can support any number of SG entries.
This patch allocates as many SG entries as are required.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds support for ZUC encryption and ZUC authentication.
Before passing to CAAM, the 16-byte ZUCA IV is converted to the 8-byte
format, which consists of 38 bits of count||bearer||direction.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add Kasumi processing for non-PDCP proto offload cases.
Also add support for a pre-computed IV in Kasumi-f9.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add a basic framework to use SNOW 3G f8 and f9 based
ciphering or integrity with the direct crypto APIs.
This patch does not support any combined usages yet.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add support for SNOW f9 in non-PDCP protocol offload mode.
This essentially adds support to pass a pre-computed IV to SEC.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds support for the non-protocol offload mode
of the SNOW f8 algorithm.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch supports the SNOW-SNOW (enc-auth) 18-bit PDCP case
for devices which do not support the PROTOCOL command.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch supports the ZUC-ZUC PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit SN.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch supports the AES-AES PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit SN.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
PDCP descriptors in some cases internally use commands which overwrite
memory with extra '0's if write-safe is kept enabled. This breaks the
correct functional behavior of the PDCP APIs, and in many cases they give
incorrect crypto output. Therefore we disable the 'write-safe' bit in the
FLC for PDCP cases. If there is a performance drop, then write-safe will
be enabled selectively through a separate patch.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds support for chained input or output
mbufs for PDCP and IPsec protocol offload cases.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
For SEC era 8, NULL auth using the protocol command does not add the
4 bytes of null MAC-I and treats NULL integrity as no integrity, which
is not correct.
Hence this particular case of null integrity with 12-bit SN on
SEC ERA 8 is converted from protocol offload to the non-protocol offload case.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Adding minimal support for CAAM HW era 10 (used in LX2).
The primary changes are:
1. increased shared descriptor length from 6 bits to 7 bits;
2. support for several PDCP operations as PROTOCOL offload.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Per packet HFN override is supported in NXP PMDs
(dpaa2_sec and dpaa_sec). DPOVRD register can be
updated with the per packet value if it is enabled
in session configuration. The value is read from
the IV offset.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
The PDCP u-plane may optionally support integrity as well.
This patch adds support for integrity along with
confidentiality.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Added support for 12-bit c-plane. We implement it using the 'u-plane for RN'
protocol descriptors. This is because the 'c-plane' protocol descriptors
assume 5-bit sequence numbers. Since the crypto processing remains the same
irrespective of c-plane or u-plane, we choose the 'u-plane for RN' protocol
descriptors to implement 12-bit c-plane. 'U-plane for RN' protocol
descriptors support both confidentiality and integrity (required for
c-plane) for 7/12/15-bit sequence numbers.
For little-endian platforms, an incorrect IV is generated if the MOVE
command is used in PDCP non-proto descriptors. This is because the MOVE
command treats data as words. We changed MOVE to MOVEB, since we require
the data to be treated as a byte array. The change works on both LS1046
and LS2088.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds the stateful decompression feature
to the DPDK QAT PMD.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch adds QAT RAM bank definitions and related macros.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
The outer IP header is formed at the time of session initialization
using the ipsec xform. This outer IP header will be appended by
hardware for each packet.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
The outer IP header is formed at the time of session initialization
using the ipsec xform. This outer IP header will be appended by
hardware for each packet.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Like for Ethernet ports, the OCTEON TX crypto engines must
first be unbound from their kernel module, then rebound to
vfio-pci, before being used in DPDK.
As this capability is detected at runtime by dpdk-pmdinfo,
add the info in the PMD registering directives.
Then an external script can be used for bind and unbind.
Fixes: bfe2ae495e ("crypto/octeontx: add PMD skeleton")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
dpaa_sec needs translations between physical and virtual addresses.
V-to-P translation is relatively fast, as memory is managed in
contiguous segments.
The result of each V-to-P translation is used to update the DPAA iova
table, which should be updated by a memory event callback, but is not.
The DPAA iova table then has entries for all needed memory ranges.
With this patch, dpaa_mem_ptov will always use dpaax_iova_table_get_va,
which ensures optimal performance.
Fixes: 5a7dbb934d ("dpaa: enable dpaax library")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Up to version 0.52 of the IPSec Multi buffer library,
the chain order for AES-CCM was CIPHER_HASH when encrypting.
However, after this version, the order has been reversed in the library
since, when encrypting, hashing is done first and then ciphering.
Therefore, order is changed to be compatible with newer versions
of the library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
If the application enables the use of ESN in the
ipsec_xform for security session creation, the PDB options
are set to enable ESN.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Virtual to physical conversions are optimized using the
DPAAX tables. This patch integrates DPAAX with caam_jr PMD.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Reduce the functional traces in the data path and the critical session path.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch shortens the queue pair name created when initializing
a queue pair of the ISA-L PMD, based on the device and qp ids.
Suggested-by: Paul Luse <paul.e.luse@intel.com>
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Just open the sysfs file and handle failure, rather than using access().
This eliminates Coverity warnings about TOCTOU
"time of check versus time of use"; although for this sysfs file that is
not really an issue anyway.
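A hedged, generic sketch of the pattern (file name, parsing, and fallback handling are illustrative; requires stdio.h and inttypes.h):

    static uint64_t
    example_read_sysfs_u64(const char *filename, uint64_t def)
    {
            uint64_t value = def;
            FILE *fp = fopen(filename, "r");   /* open directly, no access() probe */

            if (fp == NULL)
                    return def;                /* entry absent/unreadable: keep default */
            if (fscanf(fp, "%" SCNu64, &value) != 1)
                    value = def;
            fclose(fp);
            return value;
    }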
Coverity issue: 347276
Fixes: 54a328f552 ("bus/pci: forbid IOVA mode if IOMMU address width too small")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Restrict this header inclusion to its real users.
Fixes: 028669bc9f ("eal: hide shared memory config")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
With VERBOSE=1, this error was seen in debug mode with gcc 9.1:
In file included from /tmp/dpdk.auto-config-h.sh.c.w0VWMi:1:
In file included from rdma-core/build/include/infiniband/mlx5dv.h:47:
In file included from rdma-core/build/include/infiniband/verbs.h:46:
In file included from rdma-core/build/include/infiniband/verbs_api.h:66:
In file included from rdma-core/build/include/infiniband/ib_user_ioctl_verbs.h:38:
include/rdma/ib_user_verbs.h:161:28: fatal error:
zero size arrays are an extension [-Wzero-length-array]
__aligned_u64 driver_data0;
^
1 error generated.
As a result, buildtools/auto-config-h.sh was not generating
a correct autoconf file, so the compilation was generating this error:
fatal error: redefinition of 'mlx5_ib_uapi_flow_action_packet_reformat_type'
It is fixed by disabling -pedantic option when calling auto-config-h.sh
from the makefile-based system.
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Matan Azrad <matan@mellanox.com>
If rdma-core is not installed in a standard directory of the system,
it is possible to specify the location of the pkgconfig file via
an environment variable:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:~/rdma-core/build/lib/pkgconfig
In this case, the dependency may become mandatory to specify
for the configuration tests (checking dependency symbols or fields).
Some surrounding spacing is also fixed.
Fixes: 8e49376400 ("net/mlx4: add external allocator for Verbs object")
Fixes: 1dd7c7e38c ("net/mlx4: support meson build")
Fixes: 96d7c62a70 ("net/mlx5: support meson build")
Cc: stable@dpdk.org
Suggested-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Matan Azrad <matan@mellanox.com>
Some drivers were missing reasons text for their disabling in meson.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Luca Boccassi <bluca@debian.org>
As explained in drivers/meson.build,
"
For the find_library() case (but not with dependency()) we also
need to specify the "-l" flags in pkgconfig_extra_libs variable
too, so that it can be reflected in the pkgconfig output for
static builds.
"
The commit e30b4e566f ("build: improve dependency handling")
must be followed up with this one in order to remove more
occurrences of pkgconfig_extra_libs that are redundant with the use of dependency().
Fixes: f1debd77ef ("net/af_xdp: introduce AF_XDP PMD")
Fixes: 3c32e89f68 ("compress/isal: add skeleton ISA-L compression PMD")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Luca Boccassi <bluca@debian.org>
The simple Tx path in virtio was removed in the commit below.
This patch removes an undefined function declaration of the simple
Tx path and all related descriptions in the doc.
Fixes: 57f818963d ("net/virtio: remove simple Tx path")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Constant MLX5_GROUP_FACTOR is defined with value 1, and used to
multiply group value in two places.
This patch removes the unneeded constant definition and use.
Fixes: 4f84a19779 ("net/mlx5: add Direct Rules API")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This adds support for adding a new UDP tunnel port
for specific VXLAN types.
Currently only VXLAN and VXLAN-GPE are supported, on ports
4789 and 4790 respectively, without having to configure
anything in the NIC.
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch updates the validation function of jump action.
It adds check of conflicting fate actions in flow rule.
It also removes check of action->type which is not needed.
Fixes: 684b9a1b1f ("net/mlx5: support jump action")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_flow_validate_action_drop() checks if another fate
action is already present in this flow rule, using defined value
MLX5_FLOW_FATE_ACTIONS.
This patch enhances the check using value
(MLX5_FLOW_FATE_ACTIONS | MLX5_FLOW_FATE_ESWITCH_ACTIONS)
to make sure all relevant fate actions are checked.
Fixes: 23c1d42c71 ("net/mlx5: split flow validation to dedicated function")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
In struct mlx5_ibv_shared, member esw_drop_action was added between
existing member tx_tbl and the comment line describing it.
This patch moves the comment line to its original location, and fixes
a typo in the comment.
Fixes: 34fa7c0268 ("net/mlx5: add drop action to Direct Verbs E-Switch")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The HW ptype information is missing for the IP-in-IP tunnel.
It should be the RTE_PTYPE_TUNNEL_IP ptype.
Fixes: 5e33bebdd8 ("net/mlx5: support IP-in-IP tunnel")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The x550 NIC VF port has the capability to set the RSS hash,
so the device info get function should report that.
Fixes: 2144f6630f ("ixgbe: add redirection table size in device info")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
A new mailbox version, ixgbe_mbox_api_13, needs to be enabled for the PF
host so that it can communicate with the VF about the queue number.
Fixes: 46b14c2311 ("ixgbe: get VF queue number")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Interrupt mode is not working on the X552/557 device because
this device does not enable the queue interrupt mapping;
this patch fixes the issue.
Fixes: d2e72774e5 ("ixgbe/base: support X550")
Cc: stable@dpdk.org
Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The vmxnet3_prep_pkts function sets rte_errno to ENOTSUP for any packet
having PKT_TX_IP_CHECKSUM set in ol_flags, but vmxnet3 has
DEV_TX_OFFLOAD_IPV4_CKSUM set in its Tx offload capabilities.
This issue was introduced with the new Rx offload
API: DEV_TX_OFFLOAD_IPV4_CKSUM and DEV_RX_OFFLOAD_IPV4_CKSUM were
added to the Tx/Rx offload capabilities, but the vmxnet3 driver does not
support them.
To fix the issue, DEV_TX/RX_OFFLOAD_IPV4_CKSUM needs to be removed from
the Tx/Rx offload capabilities.
Fixes: 95e4a96ccb ("net/vmxnet3: convert to new Rx offload API")
Cc: stable@dpdk.org
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Since "link.link_duplex" has been assigned to ETH_LINK_FULL_DUPLEX just
before switch statement, the assignment in switch-case statement is
redundant. Remove it.
Fixes: 82113036e4 ("ethdev: redesign link speed config")
Cc: stable@dpdk.org
Signed-off-by: Yong Wang <wang.yong19@zte.com.cn>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
vPMD for aarch64 calculates the number of received packets using a loop.
Change to use NEON intrinsics for calculation. This saves CPU cycles
and has slightly better performance.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
The memory barrier was intended for descriptor data integrity (see
comments in [1]). As later NEON loads were implemented and a whole
entry is loaded in one-run and atomic, that makes the ordering of
partial loading unnecessary. Remove it accordingly.
Corrected a couple of code comments.
In terms of performance, observed slightly higher average throughput
in tests with 82599ES NIC.
[1] http://patches.dpdk.org/patch/18153/
Fixes: 989a840505 ("net/ixgbe: fix received packets number for ARM NEON")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
As the packet length extraction code was simplified, the ordering
was not necessary any more. [1]
A 2% performance gain was measured on Marvell ThunderX2.
A 4.3% performance gain was measured on Ampere eMAG80.
[1] http://mails.dpdk.org/archives/dev/2016-April/037529.html
Fixes: ae0eb310f2 ("net/i40e: implement vector PMD for ARM")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
For x86, the descriptors need to be loaded in order, so between two
descriptor loads there is a compiler barrier in place. [1]
For aarch64, a patch [2] is in place to survive discontinuous DD
bits, so the barriers can be removed to take full advantage of out-of-order
execution.
50% performance gain in the RFC2544 NDR test was measured on ThunderX2.
12.50% performance gain in the RFC2544 NDR test was measured on Ampere
eMAG80 platform.
[1] http://inbox.dpdk.org/users/039ED4275CED7440929022BC67E7061153D71548@
SHSMSX105.ccr.corp.intel.com/
[2] https://mails.dpdk.org/archives/stable/2017-October/003324.html
Fixes: ae0eb310f2 ("net/i40e: implement vector PMD for ARM")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
The AltiVec header file breaks boolean type:
error: incompatible types when initializing type
'__vector _bool int' {aka '_vector(4) __bool int'} using type 'int'
If __APPLE_ALTIVEC__ is defined, then bool type is redefined
and conflicts with stdbool.h.
There is no good solution to fix it for the whole project without
breaking something else, so a workaround is inserted in mlx5 PMD.
This workaround is not compatible with C++ but there is no C++ in DPDK.
Related to: 725f5dd0bf ("net/mlx5: fix build on PPC64")
Cc: stable@dpdk.org
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Matan Azrad <matan@mellanox.com>
atl_dev_info_get() is declared twice in atl_ethdev.c.
Delete one of these declarations.
Fixes: bb42aa9ffe ("net/atlantic: configure device start/stop")
Cc: stable@dpdk.org
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Igor Russkikh <igor.russkikh@aquantia.com>
Add multi-process support for ice. Secondary processes will share
memory and configuration with the primary process and do not need further
initialization.
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The rte_eth_dev_close() function now handles freeing resources for
devices (e.g., mac_addrs). To conform with the new close() behaviour we
are asserting the RTE_ETH_DEV_CLOSE_REMOVE flag so that
rte_eth_dev_close() releases all device level dynamic memory.
Signed-off-by: Rastislav Cernay <cernay@netcope.com>
Acked-by: Jan Remes <remes@netcope.com>
The rte_eth_dev_close() function now handles freeing resources for
devices (e.g., mac_addrs). To conform with the new close() behaviour we
are asserting the RTE_ETH_DEV_CLOSE_REMOVE flag so that
rte_eth_dev_close() releases all device level dynamic memory.
Signed-off-by: Rastislav Cernay <cernay@netcope.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Jan Remes <remes@netcope.com>
The af_packet driver leaves a stale socket after the device is removed.
Ring buffers are memory mapped when the device is added using rte_dev_probe.
There is no corresponding munmap call when the device is removed/closed.
This commit fixes the issue by calling munmap
from rte_pmd_af_packet_remove().
Bugzilla ID: 339
Fixes: e6ee4db01b ("af_packet: make the device detachable")
Cc: stable@dpdk.org
Signed-off-by: Abhishek Sachan <abhishek.sachan@altran.com>
Reviewed-by: John W. Linville <linville@tuxdriver.com>
The timestamp is always set in the PCAP header, whether the PMD reads from
a file or listens on an interface. This information can be important for some
applications and it cannot be obtained otherwise (especially when
reading a PCAP file, where the timestamp is not the current time).
Timestamp here is the number of microseconds since epoch.
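A hedged sketch of the conversion from the libpcap record header; the mbuf field assignment and flag are illustrative of where the value would typically land:

    /* header is the struct pcap_pkthdr supplied by libpcap for this packet. */
    uint64_t ts_us = (uint64_t)header->ts.tv_sec * 1000000ULL +
                     (uint64_t)header->ts.tv_usec;
    mbuf->timestamp = ts_us;              /* microseconds since epoch */
    mbuf->ol_flags |= PKT_RX_TIMESTAMP;   /* mark the timestamp as valid */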
Signed-off-by: Sylvain Rodon <srn@nexatech.fr>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The generic RTE_LOGTYPE_PMD is a historical relic and should
be deprecated. Every driver must register its own logtype.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The generic RTE_LOGTYPE_PMD is a historical relic and should
be deprecated. Every driver must register its own logtype.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Replace the boilerplate BSD license text with the equivalent
SPDX license id.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tetsuya Mukawa <mtetsuyah@gmail.com>
Acked-by: Takanari Hayama <taki@igel.co.jp>
Fix some mistakes in Tx bursts in regard to TSO flag check.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
For rte_flow_action_raw_encap, the header definition for
encapsulation must be available; otherwise it will lead to a crash on some
OFED versions, and logically it should be rejected.
Fixes: 8ba9eee4ce ("net/mlx5: add raw data encap/decap to Direct Verbs")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Dekel Peled <dekelp@mellanox.com>
The ops pointers in fm10k_stats_get() are set up in the primary
process; when a secondary process calls these ops pointers,
a segmentation fault will happen.
Fixes: 7223d200c2 ("fm10k: add base driver")
Cc: stable@dpdk.org
Signed-off-by: Lu Qiuwen <luqiuwen@iie.ac.cn>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The i40evf queues cannot work properly with the kernel PF driver for an
X722 VF. E.g. when configuring 8 queue pairs, only 4 queues can receive
packets, and half of the packets will be lost when using 2 queue pairs.
This issue is caused by misconfiguration of the lookup table: the original
LUT configuration code did not work for the X722 VF. Use an AQ command to
set up the LUT to make it work properly.
Fixes: cea7a51c17 ("i40evf: support RSS")
Cc: stable@dpdk.org
Acked-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Rather than the global promiscuous mode on all slaves, let's try to use
allmulti.
To do this, we apply the same mechanism as for promiscuous mode and drop in
rx_burst.
When adding a slave, we first try to use allmulti, else we try
promiscuous.
Because of this, we must be able to handle allmulti on the bonding
interface, so the associated dev_ops is added with checks on which mode
has been applied on each slave.
Rather than add a new flag in the bond private structure, we can remove
promiscuous_en since ethdev already maintains this information.
When starting the bond device, there is no promisc/allmulti re-apply
as, again, ethdev does this itself.
On reception, we must inspect if the packets are multicast or unicast
and take a decision to drop based on which mode has been enabled on the
bonding interface.
Note: in the future, if we define an API so that we can add/remove mc
addresses to a port (rather than the current global set_mc_addr_list
API), we can refine this to only register the LACP multicast mac
address.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
By default, the 802.3ad code enables promisc mode on all slaves.
To avoid all packets going to the application (unless the application
asked for promiscuous mode), all frames are supposed to be filtered in
the rx burst handler.
However, the incriminated commit broke this because non-pure-Ethernet
frames (basically any unicast Ether()/IP() packet) are no longer
filtered.
Fixes: 71b7b37ec9 ("net/bonding: use ptype flags for LACP Rx filtering")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
The fast queue Rx burst function is missing checks on promiscuous mode and
the slave collecting state.
Define an inline wrapper to have a common base.
Fixes: 112891cd27 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
We'd better consolidate the fast queue and the normal tx burst functions
under a common inline wrapper for maintenance.
But looking closer at the bufs_slave_port_idxs[] mapping array in those
tx burst functions, its size is invalid since up to nb_bufs are handled
here.
A previous patch [1] fixed this issue for balance tx burst function
without mentioning it.
802.3ad and balance modes are functionally equivalent on transmit.
The only difference is on the slave id distribution.
Add an additional inline wrapper to consolidate even more and fix this
issue.
[1]: https://git.dpdk.org/dpdk/commit/?id=c5224f623431
Fixes: 09150784a7 ("net/bonding: burst mode hash calculation")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
This reverts commit aa2c00702b, which
introduced the possibility of an invalid address exception when running
an application with a stopped receive queue. The issues with rxq stop/start
will be revisited in the 19.11 release timeframe.
Fixes: aa2c00702b ("net/bnxt: fix traffic stall on Rx queue stop/start")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The "and" condition offset == 0 && offset == NVM_INVALID_PTR
can never be true.
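A hedged sketch of a corrected check, assuming the intent was to reject either invalid value (the early-return handling is illustrative, not the actual base-code fix):

    /* With &&, the branch is dead code: offset cannot equal both values. */
    if (offset == 0x0 || offset == NVM_INVALID_PTR) {
            /* illustrative: leave the version fields at their defaults */
            return;
    }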
Fixes: cf3af5aa56 ("net/ixgbe/base: add functions to get version info")
Cc: stable@dpdk.org
Signed-off-by: Congwen Zhang <zhang.congwen@zte.com.cn>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
The copying of sent mbuf pointers might be deferred to the end of the
tx_burst() routine, to be copied in one call to rte_memcpy.
For multi-segment packets this optimization is not applicable,
because the number of packets does not match the number of mbufs and
we do not have a linear array of pointers in the pkts parameter.
The completion request generating routine wrongly took the deferred
pointer copying (inconsistent for multi-segment packets) into account.
Fixes: 5a93e173b8 ("net/mlx5: fix Tx completion request generation")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
gcc-9 is stricter on NULL arguments for printf.
Fix the following build error by avoiding NULL argument to printf.
In file included from drivers/net/memif/memif_socket.c:26:
In function 'memif_socket_create',
inlined from 'memif_socket_init' at net/memif/memif_socket.c:965:12:
net/memif/rte_eth_memif.h:35:2: error:
'%s' directive argument is null [-Werror=format-overflow=]
35 | rte_log(RTE_LOG_ ## level, memif_logtype, \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
36 | "%s(): " fmt "\n", __func__, ##args)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fixes: 09c7e63a71 ("net/memif: introduce memory interface PMD")
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
The shared Infiniband device context should be included
in the memory event callback list only once on context creation,
and removed from the list only once on context destruction.
Multiple insertions of the same object caused an infinite
loop in the list processing.
Fixes: ccb3815346 ("net/mlx5: update memory event callback for shared context")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The RSS expansion function misses the IP-in-IP tunnel type,
which causes the creation of the following flow to fail:
flow create 0 ingress pattern eth / ipv4 proto is 4 /
ipv4 / udp / end actions rss queues 0 1 end level 2
types ip ipv4-other udp ipv4 ipv4-frag end /
mark id 221 / count / end
In order to make the RSS expansion function work correctly,
the way to check whether an IP tunnel exists is now to
check whether there is a second IPv4/IPv6 item and whether the
first IPv4/IPv6 item's next protocol is IPPROTO_IPIP/IPPROTO_IPV6.
For example:
... pattern eth / ipv4 proto is 4 / ipv4 / ....
Fixes: 5e33bebdd8 ("net/mlx5: support IP-in-IP tunnel")
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function flow_dv_zero_encap_udp_csum() uses a while loop to iterate
over vlan items in flow rule.
Pointer next_hdr is incremented to the next item before it is used,
so the first item is skipped.
This patch moves the incrementing of next_hdr to the correct place.
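A hedged, simplified sketch of the corrected iteration order (types and names are illustrative of the VLAN-header walk, not the exact PMD code): the current header is consumed before the pointer advances, so the first item is no longer skipped.

    static const uint8_t *
    example_skip_vlan_headers(const uint8_t *next_hdr, uint16_t *proto)
    {
            while (*proto == RTE_BE16(RTE_ETHER_TYPE_VLAN) ||
                   *proto == RTE_BE16(RTE_ETHER_TYPE_QINQ)) {
                    const struct rte_vlan_hdr *vlan =
                            (const struct rte_vlan_hdr *)next_hdr;
                    *proto = vlan->eth_proto;   /* use the current header first */
                    next_hdr += sizeof(*vlan);  /* then advance to the next one */
            }
            return next_hdr;
    }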
Fixes: bf1d7d9a03 ("net/mlx5: zero out UDP checksum in encapsulation")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When the link is down, the link speed returned by ethtool is
UINT32_MAX and the link status is 0.
In this case, the DPDK ethdev link speed should be set to
ETH_SPEED_NUM_NONE.
Otherwise, since the link speed is non-zero but the link status is zero,
this is an inconsistent situation and -EAGAIN is returned, which is not right.
Fixes: 1884087198 ("net/mlx5: fix support for newer link speeds")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
There is a limit on completion descriptor fetches to improve
latency. If the burst size is large there might not be enough
resources freed in completion processing. This fix re-iterates the
sending loop and allows multiple completion descriptor fetches
and processing.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch fixes the default settings for the packet size to be inlined
with the Enhanced Multi-Packet Write feature, allowing 256B packets
to be inlined with out-of-the-box settings.
Fixes: 50724e1bba ("net/mlx5: update Tx definitions")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
If minimal inline data is required, the data inline feature
must be engaged. There were incorrect settings enabling the
inlining of entire small packets (up to 82B in size), which may result
in the sending rate declining if there are not enough cores. The same
problem arose if inlining was enabled to support VLAN tag
insertion by software.
Fixes: 38b4b397a5 ("net/mlx5: add Tx configuration and setup")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The completion loop speed optimizations for error-free
operations are done - no CQE field fetch on each loop
iteration. Also, the code size is optimized - the flush
buffers routine is invoked once.
Fixes: 318ea4cfa1 ("net/mlx5: fix Tx completion descriptors fetching loop")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The debug assert wrongly triggers on 18B of inline data;
this should pass successfully.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The patch referenced below in Fixes sets the default value of the required
minimal inline data to 0 bytes. In some configurations (depending
on switchdev/legacy settings and FW version/settings)
the ConnectX-4LX NIC requires a minimum of 18 bytes of
Tx descriptor inline data to operate correctly.
A default value wrongly set to 0 may prevent the NIC from operating
with out-of-the-box settings, so this patch reverts the default
value for ConnectX-4LX back to 18 bytes (inline L2).
Fixes: 9f350504bb ("net/mlx5: fix ConnectX-4LX minimal inline data limit")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The rte_flow_item_vlan has the inner_type, which is missing on
DR/DV flow engine.
By adding this support, the example testpmd commands could be:
- matching all vlan traffic with id 2:
testpmd> flow create 0 ingress pattern eth / vlan vid is 2 / end
actions queue index 2 / end
- matching all ipv4 traffic in vlan with id 2:
testpmd> flow create 0 ingress pattern eth / vlan vid is 2
inner_type is 0x0800 / end actions queue index 2 / end
Fixes: fc2c498ccb ("net/mlx5: add Direct Verbs translate items")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Some flow rules were not configured.
Part of the code in function flow_dv_matcher_enable() is enclosed in
'#ifdef HAVE_MLX5DV_DR' preprocessor directive.
Using this directive is not needed here, and prevents compilation of
relevant code.
This patch removes the unnecessary preprocessor directive.
Fixes: 4f84a19779 ("net/mlx5: add Direct Rules API")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_flow_validate_item_vlan() validates that the user setting
is supported by the NIC, using a mask with TCI mask 0x0fff.
This check will reject a flow rule specifying a vlan pcp item.
This patch updates mlx5_flow_validate_item_vlan() to use mask 0xffff,
so flow rules with vlan pcp item are accepted.
Fixes: 23c1d42c71 ("net/mlx5: split flow validation to dedicated function")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
mlx4_dev_info_get calls mlx4_get_ifname, but mlx4_get_ifname
uses priv->ctx, which is not a valid pointer in a secondary
process. The fix is to cache the value in the primary process:
in the primary process, get and store the interface index of
the device so that a secondary process can see it.
Bugzilla ID: 320
Fixes: 61cbdd4194 ("net/mlx4: separate device control functions")
Cc: stable@dpdk.org
Reported-by: Suyang Ju <sju@paloaltonetworks.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Matan Azrad <matan@mellanox.com>
The function mlx5_set_min_inline() includes a switch() that checks
various PCI device IDs in order to set the txq_inline_min value. No
value is set when the PCI device ID matches the ConnectX-5 adapters,
resulting in an assert() failure later in the function
mlx5_set_txlimit_params().
This error was encountered on an IBM Power 9 system running RHEL 7.6
w/o Mellanox OFED installed.
Fixes: 38b4b397a5 ("net/mlx5: add Tx configuration and setup")
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
MLX5 PMD limits the number of SW steering tables to 32.
This patch updates the limit to 65535, to allow wide range of values.
Fixes: e2b4925ef7 ("net/mlx5: support Direct Rules E-Switch")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
On some virtual setups (particularly on ESXi), when SR-IOV and
E-Switch are enabled, there is a problem receiving VLAN traffic on VF
interfaces. The NIC driver in the ESXi hypervisor does not set up the
E-Switch vport settings correctly, and VLAN traffic targeted to the VF is
dropped.
The patch provides a temporary workaround: if a rule
containing a VLAN pattern is being installed for a VF, a VLAN
network interface over the VF is created, as this command does:
ip link add link vf.if name mlx5.wa.1.100 type vlan id 100
The PMD in DPDK maintains a database of created VLAN interfaces
for each existing VF and the requested VLAN tags. When all of the RTE
Flows using the given VLAN tag are removed, the created VLAN interface
with this VLAN tag is deleted.
The name of the created VLAN interface follows the format:
evmlx.d1.d2, where d1 is the VF interface ifindex and d2 is the VLAN ifindex.
Implementation limitations:
- the mask in rules is ignored, a rule must specify VLAN tags exactly,
no wildcards (which are implemented by the masks) are allowed
- the virtual environment is detected via the rte_hypervisor() call,
and the type of hypervisor is checked. Currently we engage
the workaround for ESXi and unrecognized hypervisors (which
always happens on platforms other than x86 - meaning the workaround
is applied for Flows over a PCI VF). There is no confirmed data that
the other hypervisors (HyperV, Qemu) need this workaround, and
we are trying to reduce the list of configurations on which the
workaround should be applied.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segmented packet was not set correctly during
packet reassembly, which led to this issue.
Coverity issue: 343416
Fixes: fe65e1e1ce ("fm10k: add vector scatter Rx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segmented packet was not set correctly during
packet reassembly, which led to this issue.
Coverity issue: 343447
Fixes: 319c421f38 ("net/avf: enable SSE Rx Tx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segmented packet was not set correctly during
packet reassembly, which led to this issue.
Coverity issue: 343422, 343403
Fixes: ca74903b75 ("net/i40e: extract non-x86 specific code from vector driver")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segmented packet was not set correctly during
packet reassembly, which led to this issue.
Coverity issue: 343452, 343407
Fixes: c68a52b8b3 ("net/ice: support vector SSE in Rx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segmented packet was not set correctly during
packet reassembly, which led to this issue.
Coverity issue: 13245
Fixes: 8a44c15aa5 ("net/ixgbe: extract non-x86 specific code from vector driver")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Add return value checking when reading configuration information from the
PCI register, to avoid a Coverity issue.
Fixes: 1fc97012 ("net/e1000: fix i219 hang on reset/close")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The dev parameter is in fact used in ice_dev_configure().
Fixes: 50370662b7 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When working as a secondary process, the PMD uses eth_memif_rx for egress.
It should be eth_memif_tx.
Fixes: c41a04958b ("net/memif: support multi-process")
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Fix a check in bnxt_alloc_hwrm_rx_ring() while initializing
the Rx ring.
The driver should not change the "deferred_start" status of Rx/Tx queues.
It should get the status in queue_setup_op() and use that value.
Fixes: 9b63c6fd70 ("net/bnxt: support Rx/Tx queue start/stop")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The octeontx2 PMD's mailbox client uses device memory to send messages
to the mailbox server in the admin function Linux kernel driver.
The device memory used for the mailbox communication needs to
be qualified as volatile to avoid unaligned device
memory accesses caused by the compiler coalescing memory accesses.
This patch marks the mailbox requests and responses as volatile;
they were non-volatile earlier and were accessed from unaligned
memory addresses, which resulted in bus errors on Fedora 30 with
gcc 9.1.1.
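A hedged, generic sketch of the idea (names are illustrative, not the actual octeontx2 mailbox layout):

    /* Accesses to message words residing in device memory go through
     * volatile-qualified pointers, so the compiler cannot coalesce them
     * into wider, potentially unaligned, accesses. */
    volatile uint64_t *msg = (volatile uint64_t *)(mbox_base + msg_offset);

    msg[0] = request_header;    /* each store is emitted exactly as written */
    msg[1] = request_payload;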
Fixes: 2b71657c86 ("common/octeontx2: add mbox request and response definition")
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Since 18.11, it is suggested that a driver should release all its private
resources in the dev_close routine. So all resources previously released
in the remove routine are now released in the dev_close routine, and
dev_close will be called from the driver remove routine in order to
support removing a device without closing its ports.
The above behavior change is enabled by setting the RTE_ETH_DEV_CLOSE_REMOVE
flag during the probe stage.
Signed-off-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Yuri Chipchev <yuric@marvell.com>
Fix the PCIe detach segfault by releasing eth_dev resources,
adding nicvf cleanup support on PCI detach.
Fixes: fdf91e0f2f ("drivers/net: do not use ethdev driver")
Cc: stable@dpdk.org
Signed-off-by: Amit Gupta <agupta3@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Update workaround changes for erratas that are fixed on 96xx A1.
This patch also enables CQ drop for all the passes to
maintain performance, along with updating the default
Rx ring size in dev_info.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch extends the minimum supported max_sqb_count devarg value
such that it can limit the max SQB count to 8 buffers, and
also defines NIX_DEF_SQB and uses it to compute the number
of SQE buffers required for egress traffic.
NIX_DEF_SQB is defined as 16, which is optimal across multiple
octeontx2 platforms to scale up the performance proportionally
to the corresponding port/queue to lcore mappings.
Fixes: fb0198b7dc ("net/octeontx2: add devargs parsing functions")
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
From the B0 HW revision onwards, HW can drop Rx and L2 error packets.
Enable this by default if the feature is available.
Since this bit field is reserved in old HW revisions,
there is no need for an additional HW version check.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
During an if-condition evaluation, a 2-bit flag evaluates to 'true' for
'0x1', '0x2' and '0x3'. Thus, from this perspective these flags are
indistinguishable. To make them distinct, respective bits must be
extracted with a mask and then checked for strict equality.
Specifically here, even if `PKT_TX_UDP_CKSUM` (value '0x3') was set, the
expression `mbuf->ol_flags & PKT_TX_TCP` (the second flag of value
'0x1') is evaluated first and the result is 'true'. In consequence, for
UDP packets the execution flow enters an incorrect branch.
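A hedged sketch of the masked comparison that keeps the two values distinct (the flags come from rte_mbuf.h; the branch bodies are illustrative):

    uint64_t l4 = mbuf->ol_flags & PKT_TX_L4_MASK;

    if (l4 == PKT_TX_UDP_CKSUM) {
            /* UDP checksum offload path */
    } else if (l4 == PKT_TX_TCP_CKSUM) {
            /* TCP checksum offload path */
    }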
Fixes: 56b8b9b7e5 ("net/ena: convert to new Tx offloads API")
Cc: stable@dpdk.org
Reported-by: Eduard Serra <eserra@vmware.com>
Signed-off-by: Maciej Bielski <mba@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
When RTE_PKTMBUF_HEADROOM is 0, the dpaa driver throws the compilation
error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM".
This patch changes it into a run-time check.
Bugzilla ID: 335
Fixes: beb2a7865d ("bus/fslmc: define hardware annotation area size")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
When RTE_PKTMBUF_HEADROOM is 0, the dpaa driver throws the compilation
error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM".
This patch changes it into a run-time check.
Bugzilla ID: 335
Fixes: ff9e112d78 ("net/dpaa: add NXP DPAA PMD driver skeleton")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
There should not be blank lines at end of files.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
IOMMU capabilities won't change and must be checked even if no PCI device
seems to be supported yet when EAL is initialised.
This is to accommodate SPDK, which registers its drivers after
rte_eal_init(), especially on the PPC platform where the IOMMU does not
support VA.
Fixes: 703458e19c ("bus/pci: consider only usable devices for IOVA mode")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Takeshi Yoshimura <tyos@jp.ibm.com>
This macro is unused after a previous fix.
Fixes: fe822eb8c5 ("bus/pci: use IOVA DMA mask check when setting IOVA mode")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Fixed to return the checksum status of Rx packets by setting
"ol_flags" correctly in vector mode receive.
These changes were already in place for non-vector mode receive.
In vector mode receive, also indicate inner and outer checksum
errors individually in "ol_flags" to indicate L3 and L4 errors.
Fixes: bc4a000f2f ("net/bnxt: implement SSE vector mode")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
There is a bug in context memory allocation which results in reusing
the context memory allocated for the first
port when allocating memory for the next ports.
Fix it by including the port id in the name field when
allocating context memory.
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Add extern to the variable declaration to avoid some compilers treating it
as a variable definition.
build error log:
lib/librte_pmd_virtio.a(vhost_kernel.o):(.rodata+0x110):
multiple definition of `vhost_msg_strings'
lib/librte_pmd_virtio.a(vhost_user.o):(.data.rel.ro.local+0x0):
first defined here
lib/librte_pmd_virtio.a(virtio_user_dev.o):(.rodata+0xe8):
multiple definition of `vhost_msg_strings'
lib/librte_pmd_virtio.a(vhost_user.o):(.data.rel.ro.local+0x0):
first defined here
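A hedged, generic sketch of the pattern (names and contents are illustrative):

    /* In the shared header: only a declaration. */
    extern const char * const vhost_msg_strings[];

    /* In exactly one .c file: the single definition. */
    const char * const vhost_msg_strings[] = {
            "VHOST_MSG_EXAMPLE",    /* illustrative content */
    };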
Fixes: 33d24d65fe ("net/virtio-user: abstract backend operations")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
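A minimal sketch of the declaration-versus-definition split (the array
name follows the error log above; the contents are illustrative):

    /* In the shared header, included by several .c files - declaration only: */
    extern const char * const vhost_msg_strings[];

    /* In exactly one .c file - the single definition: */
    const char * const vhost_msg_strings[] = {
            "VHOST_MSG_NONE",
            "VHOST_MSG_SET_FEATURES",
    };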
The driver names for rawdevs were both different in make and meson builds
and were non-standard in the make version in that some included "rawdev" in
the name while others didn't.
Therefore, for global consistency of naming, we can use "rte_rawdev" rather
than "rte_pmd" as the prefix for the libraries. While most other driver
categories use "rte_pmd" as a prefix, there is precedent for this in the
mempool drivers, which use "rte_mempool" as a prefix.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The ifpga and skeleton rawdev drivers included "rawdev" in their directory
names, which was superfluous given that they were in the drivers/raw
directory. Shorten the names via this patch.
For meson builds, this will rename the final library .so/.a files
produced, but those will be renamed again later via a patch to
standardize rawdev names.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The OTX2 AP core can sometimes fissure STP instructions when it is more
optimal to send such writes into the pipeline as 2 separate
instructions. However, registers should be excluded from such
optimization. This commit ensures that no CSR write is ever fissured
by introducing a zero-cost workaround: setting the STP pre-index to
zero makes sure the OTX2 AP core does not fissure the store.
Fixes: 8a4f835971 ("common/octeontx2: add IO handling APIs")
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
When a QINT interrupt occurs, SW fails to clear the QINT line, resulting
in recursive interrupts, because the interrupt handler currently gets
the cause of the interrupt by reading
NIX_LF_RQ[SQ/CQ/AURA/POOL]_OP_INT but does not write 1 to clear the
RQ[SQ/CQ/ERR]_INT field in the respective
NIX_LF_RQ[SQ/CQ/AURA/POOL]_OP_INT registers.
Fixes: dc47ba15f6 ("net/octeontx2: handle queue specific error interrupts")
Fixes: 50b95c3ea7 ("mempool/octeontx2: add NPA IRQ handler")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Packet transmission in mlx5 is performed by building
Tx descriptors (WQEs) and sending them to the NIC.
A descriptor can contain special flags, telling the NIC
to generate Tx completion notifications (CQEs). At the beginning
of the tx_burst() routine the PMD checks whether there are some Tx
completions and frees the transmitted packet buffers.
The flags requesting completion generation must be set once
per specified number of packets to provide a uniform stream
of completions and free the Tx queue in a uniform fashion.
The previous implementation set the completion request
once per burst; if the burst size is big enough this may delay
CQE generation and lead to freeing a large number of buffers
in the tx_burst routine on multiple completions, which also
affects the latency and even causes Tx queue overflow
and Tx drops.
This patch enforces that the completion request is set
in the exact Tx descriptor once the specified number of packets
has been sent.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
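A minimal sketch of the per-N-packets completion request, with
illustrative structure, flag, and threshold names (not the mlx5
definitions):

    #include <stdint.h>

    #define TX_COMP_THRESH 32              /* request a CQE every 32 packets */
    #define WQE_FLAG_CQE_REQUEST (1u << 3) /* placeholder completion flag    */

    struct wqe_sketch { uint32_t flags; };
    struct txq_sketch { uint16_t elts_comp; }; /* packets since last request */

    static inline void
    tx_request_completion(struct txq_sketch *txq, struct wqe_sketch *last_wqe,
                          uint16_t sent)
    {
            txq->elts_comp += sent;
            if (txq->elts_comp >= TX_COMP_THRESH) {
                    /* Ask the NIC for a CQE on exactly this descriptor. */
                    last_wqe->flags |= WQE_FLAG_CQE_REQUEST;
                    txq->elts_comp = 0;
            }
    }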
The Mellanox ConnectX-4LX NIC in configurations with E-Switch disabled
can operate without the minimal required inline data in the Tx
descriptor. There was a hardcoded limit set to 18B in the PMD, fixed
to be no limit (0B).
Fixes: 38b4b397a5 ("net/mlx5: add Tx configuration and setup")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch limits the amount of fetched and processed
completion descriptors in one tx_burst routine call.
The completion processing involves buffer freeing,
which may be time consuming and introduce significant
latency, so limiting the number of processed completions
mitigates the latency issue.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Enabling LRO offload per queue makes sense because the user will
probably want to allocate a different mempool for LRO queues - the LRO
mempool mbuf size may be bigger than that of a non-LRO mempool.
Change the LRO offload to be per queue instead of per port.
If LRO is enabled on any of the queues, all the queues will be
configured via DevX.
If RSS flows direct TCP packets to queues with different LRO settings,
these flows will not be offloaded with LRO.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
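Example usage from the application side, assuming the port reports
DEV_RX_OFFLOAD_TCP_LRO among its per-queue Rx offload capabilities:
queue 0 gets LRO and a mempool with larger mbufs, queue 1 does not.

    #include <rte_ethdev.h>

    static int
    setup_rx_queues(uint16_t port_id, uint16_t nb_rxd, unsigned int socket,
                    struct rte_mempool *lro_pool, struct rte_mempool *std_pool)
    {
            struct rte_eth_dev_info info;
            struct rte_eth_rxconf rxconf;
            int ret;

            rte_eth_dev_info_get(port_id, &info);
            rxconf = info.default_rxconf;

            /* Queue 0: LRO enabled, backed by a mempool with bigger mbufs. */
            rxconf.offloads = DEV_RX_OFFLOAD_TCP_LRO;
            ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket,
                                         &rxconf, lro_pool);
            if (ret < 0)
                    return ret;

            /* Queue 1: no LRO, regular mempool. */
            rxconf.offloads = 0;
            return rte_eth_rx_queue_setup(port_id, 1, nb_rxd, socket,
                                          &rxconf, std_pool);
    }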
When a user configures LRO in the port offloads, he probably wants each
TCP packet to have a chance to open an LRO session.
The PMD didn't configure LRO in the flow TIR if the flow did not
explicitly include a TCP item, even though the flow carried TCP traffic.
For example, the following flows were not LRO offloaded:
pattern eth / end, pattern eth / ip / end, pattern eth / ipv6 / end.
Enable LRO configuration for all the TIRs if LRO is configured in the
port.
No performance impact for non-LRO traffic in these TIRs.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When LRO offload is configured in an Rx queue, the HW may coalesce TCP
packets from the same TCP connection into a single packet.
In this case the SW should fix the relevant packet headers because
the HW doesn't update them according to the newly created packet
characteristics but provides the updated values in the CQE.
Add header update code to the regular Rx burst function to support the
LRO feature.
Make sure the first mbuf has enough space to include each TCP header;
otherwise the header update may cross mbufs, which would complicate the
operation too much.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The alignment requested by the FW for WQ buffer allocation is 512.
Change it from cache line alignment to 512.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
LRO support was only for MPRQ, hence the MPRQ Rx burst was selected when
LRO was configured on the port.
The current support for MPRQ suffers from bad memory utilization
since an external mempool is allocated by the PMD for the packet data
in addition to the user mempool; besides that, the user may get packet
data addresses which were not configured by him.
Even though MPRQ has the best receive performance in most cases,
because of the above facts it is better to remove the automatic MPRQ
selection when LRO is configured.
MPRQ is now selected only when the user forces it by the PMD
arguments, including the LRO case.
Allow LRO offload using the regular RQ with the regular Rx burst
function.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When the Rx queue is not in striding RQ mode it should be configured as
a cyclic RQ.
In this case the type remained 0, which means linked-list type.
Set the RQ type to be cyclic when the queue is not in striding RQ mode.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The WQ size configuration via DevX didn't take into account the maximum
number of segments per packet, which wrongly caused a bigger WQE size
to be configured than the size expected by the PMD in other places.
The scatter mode stride size should be the size of a segment multiplied
by the maximum number of segments per packet.
The number of WQEs per WQ should be the number of descriptors divided by
the maximum number of segments per packet.
Fix the size calculations according to the above rules.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
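A small sketch of the sizing rules stated above, with example numbers
(variable names are illustrative, not the DevX attribute fields):

    #include <stdint.h>

    static void
    wq_size_example(void)
    {
            uint32_t seg_size = 2048; /* size of one data segment (example) */
            uint32_t max_segs = 4;    /* max segments per packet (example)  */
            uint32_t nb_desc = 1024;  /* requested Rx descriptors (example) */

            /* Scatter mode stride covers one full multi-segment packet. */
            uint32_t stride_size = seg_size * max_segs;
            /* One WQE describes one packet worth of segments. */
            uint32_t wqe_n = nb_desc / max_segs;

            (void)stride_size;
            (void)wqe_n;
    }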
Patch [1] zeroes the mbuf headroom when the port is configured with LRO
because, when working with more than one stride per packet, the HW
cannot guarantee a headroom in the start stride of each packet.
Change the solution to support mbuf headroom by adding an empty buffer
as the first packet segment, scatter mode must be enabled to support it.
[1] http://patches.dpdk.org/patch/56912/
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When an mbuf is allocated by rte_pktmbuf_alloc(), the offload flags are
reset by it, so the data-path function should not do it again.
Remove the above offload flag reset from MPRQ data-path.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The field max_rx_pkt_len in the Rx configuration indicates the maximum
size of an Rx packet to be received.
There was no field to indicate the maximum size of an LRO packet to be
received by the application.
Assuming the user configures max_rx_pkt_len as the maximum LRO packet
length when LRO is configured on the port, the PMD limits the maximum
LRO packet size received from HW to max_rx_pkt_len.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
If the mbuf size of the Rx mempool supplied by the user in the Rx setup
is unable to contain the maximum Rx packet length in addition to the
mbuf head-room, the Rx scatter offload must be configured. Otherwise,
there is not enough space in a single mbuf to contain a packet of the
maximum Rx packet length.
The PMD did not return an error in the above mentioned case.
Return an error in this case.
Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Fixes: edad38fcd0 ("net/mlx: enhance Rx scatter mode detection")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
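A sketch of the added validation (not the exact mlx5 code): reject the
configuration when a single mbuf cannot hold the headroom plus the
maximum Rx packet length and the scatter offload is not enabled.

    #include <errno.h>
    #include <rte_mbuf.h>

    static int
    check_rx_scatter(struct rte_mempool *mp, uint32_t max_rx_pkt_len,
                     int scatter_enabled)
    {
            uint32_t room = rte_pktmbuf_data_room_size(mp);

            if (room < RTE_PKTMBUF_HEADROOM)
                    return -EINVAL;
            if (!scatter_enabled &&
                room - RTE_PKTMBUF_HEADROOM < max_rx_pkt_len)
                    return -EINVAL;
            return 0;
    }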
This patch fixes the issue that LLDP packets can't be forwarded to the host.
Fixes: 59d151de66 ("net/ice: stop LLDP by default")
Cc: stable@dpdk.org
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Prior to the fix, RTE_LOGTYPE_INFO messages were displayed by default.
After the fix, only NOTICE level and higher were displayed by default
and INFO level was not. There are INFO level vNIC config related
messages which customers and tech support currently depend on for
debugging and so on and to suddenly hide these messages is not a good
idea.
This patch changes the default log level to RTE_LOG_INFO for enic so
messages are printed as before the fix.
Fixes: bbd8ecc054 ("net/enic: remove PMD log type references")
Signed-off-by: John Daley <johndale@cisco.com>
Add support to parse the GRE KEY for octeontx2 flows.
Matching on the GRE key will only work if the checksum and routing
bits in the GRE header are equal to 0.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch implements the read clock API, whose purpose is to return
raw clock ticks. Using this API, the real time ticks spent in
processing a packet can be computed as:
<read_clock value at any time> - mbuf->timestamp
Calling the mbox to read raw clock ticks in the fast path is very
expensive, so the value is derived from the time stamp counter (tsc)
using a freq multiplier (ratio of raw clock ticks to tsc) and a clock
delta (by how much the tsc is lagging behind the raw clock value).
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
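Example usage of the generic ethdev API this implements, measuring
per-packet processing time in raw NIC clock ticks (assumes the Rx
timestamp offload is enabled so mbuf->timestamp is valid); internally
the value is derived roughly as tsc * freq_mult + clk_delta, as
described above.

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint64_t
    pkt_latency_ticks(uint16_t port_id, const struct rte_mbuf *m)
    {
            uint64_t now = 0;

            /* Raw clock ticks "now", in the same unit as mbuf->timestamp. */
            if (rte_eth_read_clock(port_id, &now) != 0)
                    return 0;
            return now - m->timestamp;
    }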
A huge drop in the per-core MPPS value was observed when the PTP stack
is enabled. The reason behind the bottleneck is that the HW serialises
the transfer of all SQEs which seek timestamp capture on the same
send DMA path. Hence only those packets which require timestamp
capture should set SETTSTAMP in the send mem alg.
With this patch, timestamping is done only for those packets
with PKT_TX_IEEE1588_TMST set.
Fixes: fb3ae0951a ("net/octeontx2: support Tx")
Fixes: 8980a15300 ("event/octeontx2: support PTP for SSO")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
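A sketch of the per-packet decision (the send mem alg bit below is a
placeholder, not the octeontx2 definition):

    #include <stdint.h>
    #include <rte_mbuf.h>

    #define SEND_MEM_ALG_SETTSTAMP (1u << 0) /* placeholder HW flag */

    static inline uint32_t
    tx_tstamp_flag(const struct rte_mbuf *m)
    {
            /* Request a HW timestamp only when the application asked for it. */
            return (m->ol_flags & PKT_TX_IEEE1588_TMST) ?
                    SEND_MEM_ALG_SETTSTAMP : 0;
    }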
The earlier implementation for enabling PTP via the Rx offload flag was
causing a segmentation fault, as it was executed in the
device configuration stage, where Rx and Tx queues are not yet
configured, while in the PTP enable process Rx queues are used for
mbuf setup and Tx queues are used for send descriptor setup.
Move the logic to dev start, where all the resources are
configured.
Fixes: b5dc314044 ("net/octeontx2: support base PTP")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
A multi-segmented packet may be spliced with indirect mbufs as well.
Currently the driver leaks buffers for indirect mbufs, as they
were not being freed to the packet pool.
This patch fixes the handling of indirect mbufs for the following use cases
- packet contains indirect mbufs only.
- packet contains mixed mbufs, i.e. both direct and indirect.
Fixes: cbd5710db4 ("net/octeontx2: add Tx multi segment version")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
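A sketch of the freeing rule (not the exact octeontx2 code): only a
direct mbuf with a single reference may be returned straight to its
pool; indirect mbufs must go through rte_pktmbuf_free_seg() so the
attached buffer is detached and released correctly.

    #include <rte_mbuf.h>

    static inline void
    tx_free_seg(struct rte_mbuf *m)
    {
            if (RTE_MBUF_DIRECT(m) && rte_mbuf_refcnt_read(m) == 1)
                    rte_mempool_put(m->pool, m);
            else
                    rte_pktmbuf_free_seg(m);
    }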
missed_pkts reflects the number of packets that the driver did not manage
to send.
This is a temporary situation: those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: 5f05e95cd5 ("net/vhost: fix Tx error counting")
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
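A small sketch of the accounting rule shared by this and the following
fixes (field names illustrative): packets the driver could not enqueue
are counted as missed, not as errors, because the caller still owns
them and may retry.

    #include <stdint.h>

    struct vq_stats {
            uint64_t pkts;
            uint64_t missed_pkts;
    };

    static void
    account_tx(struct vq_stats *st, uint16_t requested, uint16_t sent)
    {
            st->pkts += sent;
            /* Not an error: the caller still owns these mbufs and may retry. */
            st->missed_pkts += (uint16_t)(requested - sent);
    }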
err_pkts reflects the number of packets that the driver did not manage
to send.
This is a temporary situation: those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: e1e4017751 ("ring: add new driver")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
n_err reflects the number of packets that the driver did not manage to
send.
This is a temporary situation: those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: 09c7e63a71 ("net/memif: introduce memory interface PMD")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
err_pkts reflects the number of packets that the driver did not manage to
send.
This is a temporary situation: those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: 75e2bc54c0 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The delta between what the application asked to receive and what was
indeed received cannot be called an error counter.
This counter is not reported anywhere; remove it.
Fixes: 75e2bc54c0 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This Tx counter has never been used.
Fixes: 9658d17da2 ("virtio: maintain stats per queue")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
This Tx counter has never been used.
Fixes: c743e50c47 ("null: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This Tx counter is now unused.
Fixes: 10edf857fd ("net/af_xdp: make reserve/submit peek/release consistent")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
This Rx counter has never been used.
Fixes: 364e08f2bb ("af_packet: add PMD for AF_PACKET-based virtual devices")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Fill in the correct RETA table size at ixgbevf_dev_info_get,
so that RETA table update will be supported for VF ports.
For X540_vf and 82599_vf, since they don't support
RETA table update, set the RETA size to 0.
Fixes: 2144f6630f ("ixgbe: add redirection table size in device info")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This commit fixes an issue with the error checking in flow
MARK action. Previously, (ANY + MARK) would fail, as the
(mark_spec == 0) condition would cause an early error return,
however really it is (mark_spec != 0) that should cause the
early error return.
Flipping the binary comparison corrects the behaviour, and
(ANY + MARK) now succeeds, while (MARK + MARK) fails.
Fixes: 0bbcfc706a ("net/i40e: support MARK and RSS flow action")
Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Mesut Ali Ergin <mesut.a.ergin@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Change loop index from uint16_t to uint32_t since max
index 65535 could be exceeded when ring size is 2k+.
Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Cc: stable@dpdk.org
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
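An illustration of the overflow, assuming the affected loop walks the
descriptor ring byte by byte (a 32B descriptor size is an example): with
2048 descriptors the byte count is 65536, which does not fit in a
uint16_t, so a 16-bit index wraps and the loop never terminates.

    #include <stdint.h>

    static void
    reset_ring_bytes(volatile char *ring, uint32_t nb_desc, uint32_t desc_size)
    {
            uint32_t len = nb_desc * desc_size; /* e.g. 2048 * 32 = 65536 */
            uint32_t i;                         /* uint16_t i would wrap here */

            for (i = 0; i < len; i++)
                    ring[i] = 0;
    }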
Two cores can send multi segment packets on two different pcap ports.
Because of this, we can't have one single buffer to linearize packets.
Use rte_pktmbuf_read() to copy the packet into a buffer on the stack
when necessary, and remove eth_pcap_gather_data() (if the mbuf is
contiguous, rte_pktmbuf_read() just points at the buffer address).
Fixes: 6db141c91e ("pcap: support jumbo frames")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
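A sketch of the per-call linearization (not the exact pcap PMD code):
each core uses a buffer on its own stack, and rte_pktmbuf_read() only
copies when the mbuf chain is not contiguous, otherwise it returns a
pointer into the mbuf itself.

    #include <stdint.h>
    #include <rte_mbuf.h>

    static const uint8_t *
    get_contiguous_pkt(const struct rte_mbuf *m, uint8_t *tmp, uint32_t tmp_len)
    {
            uint32_t len = rte_pktmbuf_pkt_len(m);

            if (len > tmp_len)
                    return NULL; /* caller drops the oversized packet */
            /* Points into the mbuf if contiguous, otherwise copies into tmp. */
            return rte_pktmbuf_read(m, 0, len, tmp);
    }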
When a packet cannot be transmitted, the driver is supposed to free this
packet and report it as handled.
This is to prevent the application from retrying to send the same packet
and ending up in a liveloop since the driver will never manage to send
it.
Fixes: 49a0a2ffd5 ("net/pcap: fix possible mbuf double freeing")
Fixes: 6db141c91e ("pcap: support jumbo frames")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the pkt pool contains only buffers smaller than the default headroom,
then the driver will compute an invalid buffer size (negative value cast
to an uint16_t).
Rely on the mbuf api to check how much space is available in the mbuf.
Fixes: 6eb0ae218a ("pcap: fix mbuf allocation")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Change the verbosity of a message from ERROR to DEBUG.
This is just a debug message.
Fixes: 3e92fd4e4e ("net/bnxt: use dynamic log type")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Use rte_cpu_to_le_16/32 while parsing the hwrm command response.
Fixes: 11e5e19695 ("net/bnxt: support redirecting tunnel packets to VF")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Modified to send the tunnel redirect commands to Chimp HWRM channel as
Kong does not support these commands.
Fixes: 11e5e19695 ("net/bnxt: support redirecting tunnel packets to VF")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
We were trying to fill in more Rx extended stats than the size allocated
for stats, causing a segfault. Fixed this by adding an explicit check.
Rearranged the code to return statistic values in xstats_get as per the
names returned in xstats_get_names.
Fixes: f55e12f334 ("net/bnxt: support extended port counters")
Cc: stable@dpdk.org
Signed-off-by: Rahul Gupta <rahul.gupta@broadcom.com>
Signed-off-by: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
This commit enables the creation of a dedicated completion
ring for asynchronous event handling instead of handling these
events on a receive completion ring.
For the stingray platform and other platforms needing tighter
control of resource utilization, we retain the ability to
process async events on a receive completion ring.
For Thor-based adapters, we use a dedicated NQ (notification
queue) ring for async events (async events can't currently
be received on a completion ring due to a firmware limitation).
Rename "def_cp_ring" to "async_cp_ring" to better reflect its
purpose (async event notifications) and to avoid confusion with
VNIC default receive completion rings.
Allow rxq 0 to be stopped when not being used for async events.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Substitute driver-defined IS_P2ALIGNED() with EFX_IS_P2ALIGNED()
defined in libefx.
Add type argument and cast value and alignment to one specified type.
Fixes: e1b9445985 ("net/sfc: build libefx")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Substitute driver-defined P2ALIGN() with EFX_P2ALIGN() defined in
libefx.
Cast value and alignment to one specified type to guarantee result
correctness.
Fixes: e1b9445985 ("net/sfc: build libefx")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Substitute driver-defined P2ROUNDUP() with EFX_P2ROUNDUP()
defined in libefx.
Cast value and alignment to one specified type to guarantee result
correctness.
Fixes: e1b9445985 ("net/sfc: build libefx")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
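For reference, simplified forms of these power-of-two helpers look as
follows (valid for power-of-two alignments and unsigned operands; the
libefx EFX_* variants additionally take an explicit type argument and
cast both value and alignment to it, as the commits above describe):

    #include <stdint.h>

    #define IS_P2ALIGNED(v, a)  ((((uintptr_t)(v)) & ((uintptr_t)(a) - 1)) == 0)
    #define P2ALIGN(v, a)       ((v) & ~((a) - 1))             /* round down */
    #define P2ROUNDUP(v, a)     (((v) + (a) - 1) & ~((a) - 1)) /* round up   */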
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Function rxq_release_rq_resources() releases resources of RQ object
created by DevX API.
This patch updates this function to properly clear the released
resources, to avoid repeated release of the same resource.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_rxq_release() calls mlx5_release_dbr() to release the
doorbell allocated for this Rx queue.
This call is relevant only for Rx queue objects created using
DevX API.
This patch adds the required check, to call mlx5_release_dbr()
only when relevant.
It also updates mlx5_release_dbr() to use the input offset correctly.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
For VXLAN/NVGRE packets, the vni/tni should be included in the matching
keys. This patch fixes this issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The memzone's start virtual address pointer (addr) is of type void *,
no need to add type cast.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes the X722 VF problem where received packets don't have
a HASH value.
1) The packet classifier types update should support the X722 VF, not
only the X722 PF;
2) The MAC type is invalid for the X722 VF when setting the packet
classifier type, so move this step to after the MAC type is set correctly.
Fixes: a286ebeb07 ("net/i40e: add dynamic mapping of SW flow types to HW pctypes")
Cc: stable@dpdk.org
Signed-off-by: Peng Huang <peng.huang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the VF configuration is larger than the number of queues reserved
by the PF, the VF sends the request queues command through the admin
queue. When the PF receives this command, it may reset the VF and send
a notification before resetting. If this notification is read by the
timed task alarm, the request queues task will lose the notification.
This patch prevents the two tasks from running simultaneously.
Fixes: ee653bd800 ("net/i40e: determine number of queues per VF at run time")
Cc: stable@dpdk.org
Signed-off-by: Tao Zhu <taox.zhu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
There was an issue with ice_and_bitmap and ice_or_bitmap when
dealing with bit array sizes that are not even multiples of 32,
where some of the relevant bits in the highest 32 bits were being
cleared. This patch fixes those problems.
Fixes: c9e37832c9 ("net/ice/base: rework on bit ops")
Cc: stable@dpdk.org
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
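A sketch of the partial-word handling the fix restores (using 32-bit
words for illustration, not the ice_bitmap_t definitions): when the
bitmap size is not a multiple of 32, the last word must be masked so
that the valid high bits are kept rather than cleared.

    #include <stdint.h>

    static void
    bitmap_and(uint32_t *dst, const uint32_t *a, const uint32_t *b,
               uint32_t nbits)
    {
            uint32_t words = nbits / 32;
            uint32_t rem = nbits % 32;
            uint32_t i;

            for (i = 0; i < words; i++)
                    dst[i] = a[i] & b[i];
            if (rem != 0) {
                    uint32_t mask = (1u << rem) - 1; /* valid bits of last word */

                    dst[words] = (a[words] & b[words]) & mask;
            }
    }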
Clean up hardware register macros in ice_auto_generator.h.
Fixes: 51c7f09f3f ("net/ice/base: add registers for Intel E800 Series NIC")
Cc: stable@dpdk.org
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Use __func__ instead of the function name in ice_debug calls.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Change the ptype variable in the ice_prof_map structure to correctly be
16 bits.
Fixes: 51d04e4933 ("net/ice/base: add flexible pipeline module")
Cc: stable@dpdk.org
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
We don't free s_rule if ice_aq_sw_rules() returns a non-zero status. If
it returned a zero status, s_rule would be freed right after, so this
implies it should be freed within the scope of the function regardless.
Fixes: c7dd159311 ("net/ice/base: add virtual switch code")
Cc: stable@dpdk.org
Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The dummy packets for GRE were set up for IP, but not inner
TCP or UDP. There are some applications that want to be
able to parse on those inner L4 headers so add them to
the dummy packets.
Also, the GRE dummy packet was formatted differently from
the other dummy packets so change the formatting to match
all the other dummy packets.
Fixes: 839c0a4b77 ("net/ice/base: enable additional switch rules")
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
A unit hang may occur if multiple descriptors are available in the rings
during reset or close. This state can be detected by checking the
configure status, bit 8 in the register. If the bit is set and there are
pending descriptors in one of the rings, we must flush them before reset
or close.
Fixes: 805803445a ("e1000: support EM devices (also known as e1000/e1000e)")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Add the missing return after setting the error status in case of an
invalid flush_flag in the operation.
The issue was found by the Coverity scan, as the fin_flush variable,
not initialized in such a case, was used later in the flow.
Coverity issue: 340859
Fixes: c7b436ec95 ("compress/zlib: support burst enqueue/dequeue")
Cc: stable@dpdk.org
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
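A simplified sketch of the fixed flush handling (surrounding variables
omitted; close to, but not necessarily identical with, the PMD's
deflate path):

    switch (op->flush_flag) {
    case RTE_COMP_FLUSH_FULL:
    case RTE_COMP_FLUSH_FINAL:
            fin_flush = Z_FINISH;
            break;
    default:
            op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
            return; /* the missing early return: fin_flush stays uninitialized */
    }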
Some builds with clang report an error because '<>' rather than '""' were
used for including the ioat spec header file.
Target: x86_64-native-bsdapp-clang
error: 'rte_ioat_spec.h' file not found with <angled> include; use "quotes" instead
#include <rte_ioat_spec.h>
^~~~~~~~~~~~~~~~~
"rte_ioat_spec.h"
1 error generated.
Since this file should always be in the same directory as the main header,
we can safely change the include line to fix this error.
Fixes: abff4333ec ("raw/ioat: create device on probe and destroy on release")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
NVGRE has a GRE header with c_rsvd0_ver value 0x2000 and protocol
value 0x6558.
These should be matched when item_nvgre is provided.
This patch adds a validation function for the NVGRE item.
It also updates the translate function of the NVGRE item, to add the
required values if they were not specified.
Original work by Xiaoyu Min <jackmin@mellanox.com>
Fixes: fc2c498ccb ("net/mlx5: add Direct Verbs translate items")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Xiaoyu Min <jackmin@mellanox.com>
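A sketch of filling in the defaults during translation (not the exact
mlx5 code; the field names come from rte_flow's struct
rte_flow_item_nvgre):

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static void
    nvgre_set_defaults(struct rte_flow_item_nvgre *spec)
    {
            if (spec->c_k_s_rsvd0_ver == 0)
                    spec->c_k_s_rsvd0_ver = rte_cpu_to_be_16(0x2000);
            if (spec->protocol == 0)
                    spec->protocol = rte_cpu_to_be_16(0x6558);
    }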
An LRO message is contained in the MPRQ strides.
While the LRO message size cannot be bigger than 65280 according to the
PRM, the strides which contain it may be bigger than the maximum buffer
size allowed in a DPDK mbuf - 0xFFFF.
Adjust the maximum LRO message size to avoid buffer length overflow.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
An LRO packet may consume all the stride memory, hence the PMD cannot
guarantee head-room for the LRO mbuf.
The issue is a lack of HW support to write the packet at an offset from
the stride start.
A new striding RQ feature may be added in CX6 DX to allow head-room and
tail-room for the LRO strides.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When LRO offload is configured in an Rx queue, the HW may coalesce TCP
packets from the same TCP connection into a single packet.
In this case the SW should fix the relevant packet headers because the
HW doesn't update them according to the newly created packet
characteristics.
Add update header code to the mprq Rx burst function to support LRO
feature.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Update the CQE structure to include LRO fields.
Some reserved values were changed, hence the data-path code that used
the reserved values was updated accordingly.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
In preparation for the LRO support, where a packet can consume all the
stride memory, the external mbuf shared information can no longer be
placed at the end of the stride, because the HW may write the packet
data to all of the stride memory.
Move the shared information memory from the stride to the control
memory of the external mbuf.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement LRO support using a single RQ object per DPDK RxQ.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_rxq_obj_new(), previously called mlx5_rxq_ibv_new(),
supports creating Rx queue objects using verbs.
This patch expands the relevant functions, to support creating
verbs or DevX Rx queue objects:
Function mlx5_rxq_obj_new() updated to create RQ object using DevX.
Function mlx5_ind_table_obj_new() updated to create RQT object using DevX.
Function mlx5_hrxq_new() updated to create TIR object using DevX.
New utility functions added to perform specific operations:
mlx5_devx_rq_new(), mlx5_devx_wq_attr_fill(),
mlx5_devx_create_rq_attr_fill().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Verbs WQ for RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the creation of verbs WQ to dedicated function
mlx5_ibv_wq_new().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Verbs CQ for RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the creation of CQ to dedicated function
mlx5_ibv_cq_new().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_alloc_shared_ibctx() allocates Protection Domain using
verbs API, as part of shared IB device context.
This patch adds reading and storing of pdn value from the created PD
object, using DV API.
The pdn value is required when creating WQ using DevX API.
This patch also updates function flow_dv_create_counter_stat_mem_mng()
which uses the pdn value as well.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_queue_state_modify_primary() was implemented to handle
state change for queues created using Verbs API.
This patch updates function mlx5_queue_state_modify_primary() to
support state change of an RQ object created using the DevX API.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for introducing the use of the DevX TIR object.
Hash Rx queue is currently created using verbs QP only.
The next patches will add the option to create it with a TIR object
using DevX.
This patch renames hrxq_ibv to hrxq wherever relevant, and adds
the DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for the introduction of the DevX RQT object.
Rx indirection table object is currently created using verbs only.
The next patches will add the option to create an RQT object using
DevX.
This patch renames ind_table_ibv to ind_table_obj wherever relevant,
and adds the DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for the introduction of the DevX RxQ object.
RxQ object is currently created using verbs only.
The next patches will add the option to create RxQ object using DevX.
This patch renames rxq_ibv to rxq_obj wherever relevant, and adds the
DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>