Use the CRC32 instruction only when it is available, to avoid
build issues like the one below.
{standard input}:16: Error:
selected processor does not support `crc32cx w3,w3,x0'
Fixes: ea7be0a038 ("lib/librte_table: add hash function headers")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
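As a hedged illustration of the guard idea (not the actual header change;
the names here are illustrative): the hardware path is compiled only when
the compiler advertises the ARM CRC32 extension via the ACLE macro
__ARM_FEATURE_CRC32, and a plain bitwise CRC32C is used otherwise.

    #include <stdint.h>

    #if defined(__ARM_FEATURE_CRC32)
    #include <arm_acle.h>

    /* Hardware path: emits the crc32cx instruction. */
    static inline uint32_t
    crc32c_u64(uint32_t crc, uint64_t data)
    {
        return __crc32cd(crc, data);
    }
    #else
    /* Software fallback so the build succeeds on CPUs without CRC32. */
    static inline uint32_t
    crc32c_u64(uint32_t crc, uint64_t data)
    {
        int i, b;

        for (i = 0; i < 8; i++) {
            crc ^= (uint8_t)(data >> (8 * i));
            for (b = 0; b < 8; b++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78 : 0);
        }
        return crc;
    }
    #endif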
Use mempool bulk get ops to allocate a burst of packets and process them.
If the bulk get fails, fall back to rte_mbuf_raw_alloc.
Tested-by: Yingya Han <yingyax.han@intel.com>
Suggested-by: Andrew Rybchenko <arybchenko@solarflare.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
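A rough sketch of that allocation pattern (a standalone helper with
illustrative names, not the testpmd code itself):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Try one bulk mempool get; on failure fall back to per-mbuf
     * allocation. Returns the number of mbufs actually obtained.
     */
    static uint16_t
    alloc_pkt_burst(struct rte_mempool *mbp, struct rte_mbuf **pkts,
                    uint16_t nb_pkt)
    {
        uint16_t i;

        if (rte_mempool_get_bulk(mbp, (void **)pkts, nb_pkt) == 0)
            return nb_pkt;

        for (i = 0; i < nb_pkt; i++) {
            pkts[i] = rte_mbuf_raw_alloc(mbp);
            if (pkts[i] == NULL)
                break;
        }
        return i;
    }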
Move the packet prepare logic into a separate function so that it
can be reused later.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Use bulk ops for allocating segments instead of having an inner loop
for every segment.
This reduces the number of calls to the mempool layer.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Testpmd txonly copies the src/dst MAC addresses of the port being
processed into an Ethernet header structure on the stack for every
packet. Move this outside the loop and reuse it.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Refilling Rx one bulk (just 8 descriptors) at a time by default is too
aggressive and makes too many MMIO writes (Rx doorbells) if the packet
rate is high. Setting the default to 1/8 of the number of Rx descriptors
shows good performance results. In any case this is only a default
value, which may be overridden by the Rx configuration provided by the
application.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
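For reference, the application-side override goes through the standard
Rx queue configuration; a hedged sketch, assuming the refill threshold
is taken from rx_free_thresh in struct rte_eth_rxconf (ring size and
queue id are examples):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Illustrative only: request a refill threshold of 1/8 of the ring. */
    static int
    setup_rx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_rx_desc,
                   struct rte_mempool *mb_pool)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        rte_eth_dev_info_get(port_id, &dev_info);
        rxconf = dev_info.default_rxconf;
        rxconf.rx_free_thresh = nb_rx_desc / 8; /* e.g. 64 for a 512 ring */

        return rte_eth_rx_queue_setup(port_id, queue_id, nb_rx_desc,
                                      rte_socket_id(), &rxconf, mb_pool);
    }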
Clarify the fact that mask bits should be set in rte_eth_reta_query.
Signed-off-by: Tom Barbette <barbette@kth.se>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
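A hedged sketch of what that means for a caller (a 128-entry table split
across two rte_eth_rss_reta_entry64 groups is used as an example):
entries whose mask bit is zero are simply not filled in by the query.

    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Illustrative only: set every mask bit so all 128 entries are
     * returned by the query.
     */
    static int
    query_full_reta(uint16_t port_id)
    {
        struct rte_eth_rss_reta_entry64 reta_conf[2];
        unsigned int i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < 2; i++)
            reta_conf[i].mask = UINT64_MAX;

        return rte_eth_dev_rss_reta_query(port_id, reta_conf, 128);
    }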
Enable DEV_TX_OFFLOAD_VLAN_INSERT along with
DEV_TX_OFFLOAD_VLAN_QINQ in tx_qinq_set(), since it takes
both VLAN IDs as arguments.
Fixes: 597f9fafe1 ("app/testpmd: convert to new Tx offloads API")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Enabling Tx VLAN & QinQ insertion need not depend on the
Rx VLAN offload ETH_VLAN_EXTEND_OFFLOAD. For enabling Tx VLAN
insertion, the error check now verifies whether QinQ was enabled
but only a single VLAN ID is set.
Fixes: 6a34f91690 ("app/testpmd: fix error message when setting Tx VLAN")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
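For context, "both VLAN IDs" translates into two tags carried by the
mbuf on transmit; a hedged per-packet sketch (tag values and the helper
name are illustrative):

    #include <rte_mbuf.h>

    /* Illustrative only: request insertion of both the outer and the
     * inner tag on transmit.
     */
    static void
    request_qinq_insert(struct rte_mbuf *m, uint16_t outer_vid,
                        uint16_t inner_vid)
    {
        m->ol_flags |= PKT_TX_QINQ_PKT | PKT_TX_VLAN_PKT;
        m->vlan_tci_outer = outer_vid; /* outer tag */
        m->vlan_tci = inner_vid;       /* inner tag */
    }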
This patch adds a new item "vxlan-gpe" to tunnel_type to
support the new VXLAN-GPE packet type and its classification.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for the new protocol type VXLAN-GPE for UDP tunnels.
Inner IP/TCP/UDP checksum and RSS configuration share the same
implementation as VXLAN.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds a VXLAN-GPE macro to rte_eth_tunnel_type.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
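With the new tunnel type in place, an application can register the
VXLAN-GPE UDP port through the existing API; a hedged sketch (4790, the
IANA VXLAN-GPE port, is used as an example):

    #include <rte_ethdev.h>

    /* Illustrative only: register UDP port 4790 as VXLAN-GPE. */
    static int
    add_vxlan_gpe_port(uint16_t port_id)
    {
        struct rte_eth_udp_tunnel tunnel_udp = {
            .udp_port = 4790,
            .prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE,
        };

        return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel_udp);
    }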
Currently, if the requested MTU is bigger than the mbuf size and
scattered receive is not enabled, setting the MTU to that value fails.
This patch allows setting such an MTU while the device is stopped,
because scattered_rx will be re-configured during the next port start
and the driver may enable scattered receive according to the new MTU
value. After this patch, the driver may automatically select a
different receive function after the MTU is set, according to the MTU
value selected.
Signed-off-by: David Harton <dharton@cisco.com>
Reviewed-by: Wei Zhao <wei.zhao1@intel.com>
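From the application side, the sequence looks roughly like the sketch
below (names and the flow are illustrative; error handling of the stop
call is omitted):

    #include <rte_ethdev.h>

    /* Illustrative only: apply an MTU larger than the mbuf data size
     * while the port is stopped, so scattered Rx can be reconfigured
     * on the next start.
     */
    static int
    apply_large_mtu(uint16_t port_id, uint16_t new_mtu)
    {
        int ret;

        rte_eth_dev_stop(port_id);
        ret = rte_eth_dev_set_mtu(port_id, new_mtu);
        if (ret != 0)
            return ret;
        return rte_eth_dev_start(port_id);
    }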
The driver must send its version information to the firmware, so the
firmware knows the driver is up. Otherwise, unexpected OS package
downloading will occur when multiple driver instances are running on
the same device.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
This patch enables 10Mb/s links for ixgbe X553.
This new device has its own device IDs, 0x15E4 and 0x15E5, so the
ixgbe PMD needs a special check when setting up the link for these
two types of device.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
We are going to share the Direct Rules and Direct Verbs flow
device data structures between master and representors in
E-Switch configurations over a multiport IB device.
The code for initializing and destroying these data structures is
moved to dedicated routines; this is just a preparation step for
the actual data sharing.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
A PMD might use rte_vlan_insert to implement Tx VLAN offload. Typically
the PMD will insert the VLAN header in the transmit path and then
attempt to send the packets. If this fails, the packets are returned to
the application, which may attempt to send these packets again. If the
PKT_TX_VLAN flag is not cleared, the transmit path may attempt to insert
the VLAN header again.
Fixes: 47aa48b969 ("net: fix stripped VLAN flag for offload emulation")
Cc: stable@dpdk.org
Signed-off-by: Bill Hong <bhong@brocade.com>
Signed-off-by: Chas Williams <chas3@att.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
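A hedged caller-side sketch of the pattern being fixed (this illustrates
the intent, not the library change itself): once the header is
physically inserted, the offload flag must be cleared so a retried send
does not insert it again.

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Illustrative only: software VLAN insertion followed by clearing
     * the Tx offload flag so a retry cannot double-insert the header.
     */
    static int
    soft_vlan_insert(struct rte_mbuf **m)
    {
        int ret;

        ret = rte_vlan_insert(m);
        if (ret == 0)
            (*m)->ol_flags &= ~PKT_TX_VLAN;
        return ret;
    }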
Add a new PMD driver for AF_XDP, which is a proposed faster version of
the AF_PACKET interface in Linux. For more info about AF_XDP, please
refer to [1] and [2].
This is the vanilla version of the PMD, which just uses a raw buffer
registered as the umem.
[1] https://fosdem.org/2018/schedule/event/af_xdp/
[2] https://lwn.net/Articles/745934/
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
When using Direct Rules we can add actions to jump between tables.
This is especially useful since the rule insertion rate is much higher
on other tables compared to table zero.
If no group is selected, the rule is added to group 0.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
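A hedged rte_flow sketch of how jumping between groups is expressed (the
pattern and group numbers are arbitrary examples): a rule in group 0
redirects matching ingress traffic to group 1, where further rules can
be inserted at the higher rate.

    #include <rte_flow.h>

    /* Illustrative only: jump all ingress Ethernet traffic from group 0
     * to group 1.
     */
    static struct rte_flow *
    add_jump_rule(uint16_t port_id, struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_jump jump = { .group = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }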
Add calls to the Direct Rules API inside the glue functions.
Due to differences in parameters between Direct Rules and Direct
Verbs, some of the glue function APIs were updated.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
This is the first patch of a series designed to enable the
Direct Rules API.
The main difference between Direct Verbs and Direct Rules from an API
perspective is that in Direct Rules each action has its own create
function and the object itself is of type void.
In this patch I'm adding functions to generate actions that are
currently created without a create action, and I'm changing the action
type to void *, so in the next patches only the glue functions will
need to change.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Handle VXLAN and GENEVE TSO on EF10 native Tx datapath.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The said message cannot be considered a warning since
the PMD reports available offload capabilities anyway
by means of the device info interface. Make this log
message informational and improve its formatting
by placing the text itself on the same line.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
As a result, code duplication will be avoided in the current
TSO implementations (EFX and EF10 native). A future patch adding
support for tunnel TSO will also reuse the new function.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Make the Tx prepare function able to detect packets with an invalid
header size when header linearization is required.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add a descriptor space check to the Tx prepare function to inform the
caller that a packet which needs more than the maximum number of Tx
descriptors of a queue cannot be sent.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tx offload checks should be done in Tx prepare.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
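For reference, the contract on the application side is the usual
prepare-then-burst sequence; a hedged sketch (names are illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Illustrative only: run the offload checks in tx_prepare before
     * handing the burst to tx_burst.
     */
    static uint16_t
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t nb_prep;

        nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);
        /* On a partial result, pkts[nb_prep] failed the checks and
         * rte_errno holds the reason; only the prepared ones are sent.
         */
        return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
    }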
Implement the tx_prepare callback. The implementation performs checks
only in RTE debug mode. No checks are done otherwise because the EF10
simple datapath ignores Tx offloads.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Implement tx_prepare callback and update Tx burst function accordingly.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Implement generic checks in Tx prepare function and update Tx burst
function accordingly.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The numbers of extra descriptors required for TSO are in fact
EF10-specific. Highlight this in the define names.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header is
not fragmented a bit faster.
Fixes: 6bc985e411 ("net/sfc: support TSO in EF10 Tx datapath")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header is
not fragmented a bit faster.
Fixes: fec33d5bb3 ("net/sfc: support firmware-assisted TSO")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The API that was defined in OFED 4.5 was replaced both in OFED 4.6 and
in upstream.
This commit updates the API to match the upstream one.
Fixes: f5bf91de73 ("net/mlx5: support flow counters using devx")
Cc: stable@dpdk.org
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
When removing a rte_device on a port-based request,
all the sibling ports must be marked as closed.
The iterator loop can be simplified by using the dedicated macro.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Iterating over siblings was done with RTE_ETH_FOREACH_DEV(),
which skips the owned ports.
The new iterators RTE_ETH_FOREACH_DEV_SIBLING()
and RTE_ETH_FOREACH_DEV_OF() are more appropriate and more correct.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
If multiple ports share the same hardware device (rte_device),
they are siblings and can be found thanks to the new functions
and loop macros.
One iterator takes a port ID as a reference,
while the other one refers directly to the parent device.
The ownership is not checked because siblings may have
different owners.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
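A hedged sketch of how the two new loops are meant to be used (the loop
bodies are placeholders):

    #include <rte_dev.h>
    #include <rte_ethdev.h>

    /* Illustrative only: visit every port sharing the device of
     * ref_port, then every port backed by a given rte_device.
     */
    static void
    walk_siblings(uint16_t ref_port, const struct rte_device *parent)
    {
        uint16_t port_id;

        RTE_ETH_FOREACH_DEV_SIBLING(port_id, ref_port) {
            /* port_id shares its rte_device with ref_port */
        }

        RTE_ETH_FOREACH_DEV_OF(port_id, parent) {
            /* port_id is backed by the parent rte_device */
        }
    }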
There are three states for an ethdev port.
Checking that the port is unused looks simpler than
checking it is neither attached nor removed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The Memory Region (MR) for DMA memory can't be created from a secondary
process due to a lib/driver limitation. Whenever it is needed, the
secondary process can make a request to the primary process through the
EAL IPC channel (rte_mp_msg), which is established on initialization.
Once an MR is created by the primary process, it is immediately visible
to the secondary process because the MR list is global per device. Thus,
the secondary process can look up the list after the request returns
successfully.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
A new PMD parameter (mr_ext_memseg_en) is added to control the extension
of a memseg when creating an MR. It is enabled by default.
If enabled, mlx4_mr_create() tries to maximize the range of MR
registration so that the LKey lookup tables on the datapath become
smaller and get the best performance. However, it may worsen memory
utilization because registered memory is pinned by the kernel driver.
Even if a page in the extended chunk is freed, it does not become
reusable until the entire memory is freed and the MR is destroyed.
To make freed pages available immediately, this parameter has to be
turned off, but that could reduce performance.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
The Memory Region (MR) for DMA memory can't be created from a secondary
process due to a lib/driver limitation. Whenever it is needed, the
secondary process can make a request to the primary process through the
EAL IPC channel (rte_mp_msg), which is established on initialization.
Once an MR is created by the primary process, it is immediately visible
to the secondary process because the MR list is global per device. Thus,
the secondary process can look up the list after the request returns
successfully.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
A new PMD parameter (mr_ext_memseg_en) is added to control the extension
of a memseg when creating an MR. It is enabled by default.
If enabled, mlx5_mr_create() tries to maximize the range of MR
registration so that the LKey lookup tables on the datapath become
smaller and get the best performance. However, it may worsen memory
utilization because registered memory is pinned by the kernel driver.
Even if a page in the extended chunk is freed, it does not become
reusable until the entire memory is freed and the MR is destroyed.
To make freed pages available immediately, this parameter has to be
turned off, but that could reduce performance.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
A secondary process is not allowed to register an MR due to a
restriction of the library and kernel driver.
Fixes: 7e43a32ee0 ("net/mlx5: support externally allocated static memory")
Cc: stable@dpdk.org
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
In order to support secondary processes, a few features are required.
a) The rdma-core library should allocate device resources using DPDK's
memory allocator.
b) UAR should be remapped for secondary processes. Currently, in order
not to use different data structures for secondary processes, the PMD
tries to reserve identical virtual address space for both the primary
and secondary processes.
c) An IPC channel is necessary; it can easily be set up with the rte_mp
APIs. Through this channel, the Verbs command FD is delivered to the
secondary process, and the device stop/start event is also broadcast
from the primary process.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
To support secondary processes, the memory allocated by the library,
such as completion rings (CQ) and buffer rings (WQ), must be manageable
by EAL in order to share it with secondary processes. With new changes
in rdma-core and the kernel driver, it is possible to provide an
external allocator to the library layer for this purpose. All such
resources will now be allocated within the DPDK framework.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>