By definition, RSS involves some kind of hash algorithm, usually Toeplitz.
Until now it could not be selected on a per-flow-rule basis, and PMDs
had to always assume RTE_ETH_HASH_FUNCTION_DEFAULT, which remains the
default behavior when the field is left unspecified (0).
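As a minimal sketch, assuming the 18.05 field names, a rule can now
request a specific algorithm through the action itself:

  /* Request the Toeplitz algorithm for this rule only; leaving
   * func at 0 keeps RTE_ETH_HASH_FUNCTION_DEFAULT. */
  struct rte_flow_action_rss rss = {
          .func = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
          /* remaining fields (types, key, queue, ...) as usual */
  };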
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Since its inception, the rte_flow RSS action has been relying in part on
the external struct rte_eth_rss_conf for compatibility with the legacy
RSS API. This structure lacks parameters such as the hash algorithm to
use and, more recently, a method to tell on which layer RSS should be
performed [1].
Given struct rte_eth_rss_conf will never be flexible enough to represent a
complete RSS configuration (e.g. RETA table), this patch supersedes it by
extending the rte_flow RSS action directly.
A subsequent patch will add a field to use a non-default RSS hash
algorithm. To that end, a field named "types" replaces the field
formerly known as "rss_hf", whose name stood for "RSS hash functions"
and was confusing: actual RSS hash function types are defined by enum
rte_eth_hash_function.
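A minimal before/after sketch, assuming the 18.05-era macro names:

  /* Legacy path: hash types conveyed through the external struct. */
  struct rte_eth_rss_conf legacy = {
          .rss_hf = ETH_RSS_IP | ETH_RSS_UDP,
  };

  /* Flattened action: the same bit-mask goes in the new "types"
   * field, with no detour through struct rte_eth_rss_conf. */
  struct rte_flow_action_rss rss = {
          .types = ETH_RSS_IP | ETH_RSS_UDP,
  };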
This patch updates all PMDs and example applications accordingly.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
[1] commit 676b605182a5 ("doc: announce ethdev API change for RSS
configuration")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch replaces C99-style flexible arrays in struct rte_flow_action_rss
and struct rte_flow_item_raw with standard pointers to the same data.
They proved difficult to use in the field (e.g. no possibility of static
initialization) and unsuitable for C++ applications.
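For instance, static initialization, which the flexible arrays ruled
out, becomes straightforward (a sketch assuming the post-patch field
names):

  static const uint16_t rss_queues[] = { 0, 1, 2, 3 };
  static const struct rte_flow_action_rss rss = {
          .types = ETH_RSS_IP,
          .queue_num = RTE_DIM(rss_queues),
          .queue = rss_queues, /* plain pointer, formerly a
                                * flexible array member */
  };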
Affected PMDs and examples are updated accordingly.
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Fixes: b1a4b4cbc0a8 ("ethdev: introduce generic flow API")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This patch makes the following changes to flow rule actions:
- List order now matters: actions are redefined as performed first to
last instead of "all simultaneously".
- Repeated actions are now supported (e.g. specifying QUEUE multiple times
now duplicates traffic among them). Previously only the last action of
any given kind was taken into account.
- No more distinction between terminating/non-terminating/meta actions.
Flow rules themselves are now defined as always terminating unless a
PASSTHRU action is specified.
These changes alter the behavior of flow rules in corner cases in order to
prepare the flow API for actions that modify traffic contents or properties
(e.g. encapsulation, compression) and for which order matters when
combined.
Previously one would have had to do so through multiple flow rules by
combining PASSTHRU with priority levels; however, this proved overly
complex to implement at the PMD level, hence this simpler approach.
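As a sketch of the new semantics (18.05 API), the following list now
duplicates matching traffic to queues 1 and 2 instead of retaining only
the last QUEUE action:

  struct rte_flow_action_queue q1 = { .index = 1 };
  struct rte_flow_action_queue q2 = { .index = 2 };
  struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q1 },
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q2 },
          /* rule terminates here since no PASSTHRU is present */
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };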
This breaks ABI compatibility for the following public functions:
- rte_flow_create()
- rte_flow_validate()
PMDs with rte_flow support are modified accordingly:
- bnxt: no change, implementation already forbids multiple actions and does
not support PASSTHRU.
- e1000: no change, same as bnxt.
- enic: modified to forbid redundant actions, no support for default drop.
- failsafe: no change needed.
- i40e: no change, implementation already forbids multiple actions.
- ixgbe: same as i40e.
- mlx4: modified to forbid multiple fate-deciding actions and drop when
unspecified.
- mlx5: same as mlx4, with other redundant actions also forbidden.
- sfc: same as mlx4.
- tap: implementation already complies with the new behavior, except
for the default pass-through, which becomes a default drop.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add new callbacks for eth_dev_ops of i40e to get the information
and data of the plugin module EEPROM.
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add new callbacks for eth_dev_ops of e1000 to get the information and
data of the plugin module EEPROM.
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add new callbacks for eth_dev_ops of ixgbe to get the information
and data of the plugin module EEPROM.
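All three drivers plug into the same ethdev-level entry points
introduced alongside these callbacks; a minimal sketch of reading a
module EEPROM through them (buffer handling is illustrative):

  static int
  dump_module_eeprom(uint16_t port_id, uint8_t *buf, uint32_t len)
  {
          struct rte_eth_dev_module_info modinfo;
          struct rte_dev_eeprom_info eeprom;
          int ret;

          ret = rte_eth_dev_get_module_info(port_id, &modinfo);
          if (ret != 0)
                  return ret;
          eeprom.offset = 0;
          eeprom.length = RTE_MIN(modinfo.eeprom_len, len);
          eeprom.data = buf;
          return rte_eth_dev_get_module_eeprom(port_id, &eeprom);
  }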
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The basic operations for ports enumeration should not be
considered as experimental in DPDK 18.05.
The iterator RTE_ETH_FOREACH_DEV was introduced in DPDK 17.05.
It uses the function rte_eth_find_next_owned_by() to get
only ownerless ports. Its API can be considered stable.
So the flag experimental is removed from rte_eth_find_next_owned_by().
The flag experimental is removed from rte_eth_dev_count_avail()
which is the new name of the old function rte_eth_dev_count().
The flag experimental is set to rte_eth_dev_count_total()
in the .c file for consistency with the declaration in the .h file.
Several internal applications are fixed so they no longer need to
allow experimental API.
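A minimal sketch of the now-stable enumeration pattern:

  uint16_t port_id;

  /* Iterate over all valid, ownerless ports. */
  RTE_ETH_FOREACH_DEV(port_id) {
          /* per-port configuration goes here */
  }
  printf("%u ports available\n", rte_eth_dev_count_avail());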
Fixes: 8728ccf37615 ("fix ethdev ports enumeration")
Fixes: d9a42a69febf ("ethdev: deprecate port count function")
Fixes: e70e26861eaf ("net/mvpp2: fix build")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: David Marchand <david.marchand@6wind.com>
Change the max inline header length to 192B to allow IPv6 VXLAN TSO
headers and headers with options longer than 128B.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This commit adds support for generic tunnel TSO and checksum offload.
The PMD computes the inner/outer header offsets according to the mbuf
fields; hardware then performs the calculation based on those offsets
and types.
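A sketch of the mbuf fields involved, assuming a VXLAN-over-IPv4
packet and 18.05-era flag/struct names:

  m->outer_l2_len = sizeof(struct ether_hdr);
  m->outer_l3_len = sizeof(struct ipv4_hdr);
  m->l2_len = sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr) +
              sizeof(struct ether_hdr); /* tunnel headers + inner L2 */
  m->l3_len = sizeof(struct ipv4_hdr);
  m->l4_len = sizeof(struct tcp_hdr);
  m->tso_segsz = 1400; /* illustrative MSS */
  m->ol_flags |= PKT_TX_TUNNEL_VXLAN | PKT_TX_OUTER_IP_CKSUM |
                 PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG;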
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Separate the TSO function to make the logic of mlx5_tx_burst clearer.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Move some code from the DPDK callbacks that add/remove MAC addresses
into an internal function. This modification is necessary to implement
the set_mc_addr_list devop.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
These drivers never attempt link speed negotiation. Change the
link_autoneg value to ETH_LINK_FIXED to be more accurate and consistent
across PMDs.
Fixes: 1e3a958f40b3 ("ethdev: fix link autonegotiation value")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
build error:
.../dpdk/drivers/net/tap/rte_eth_tap.c(598):
error #279: controlling expression is constant
RTE_ASSERT(!"unsupported request type: must not happen");
Although RTE_ASSERT helps debugging this issue when asserts are
enabled, a constant expression in an assert means this path can be
taken at runtime, and there is no protection against it when asserts
are disabled.
Add the error log and error return back, replacing RTE_ASSERT.
Fixes: 7748a4b44196 ("net/tap: add debug messages")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Ethernet devices grouped by the bonding PMD, aka slaves, share the
same queues and RSS configuration, and their Rx burst functions must
be managed by the bonding PMD according to the bonding architecture.
So it makes sense to configure the same flow rules for all the bond
slaves, to keep packet flow management consistent.
Add rte_flow support to the bonding PMD to propagate all flow
configuration to the bonded slaves.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the device
has already started.
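A minimal sketch of what this enables, assuming the 18.05 capability
flag (qid, nb_desc and mb_pool are illustrative):

  struct rte_eth_dev_info dev_info;

  rte_eth_dev_info_get(port_id, &dev_info);
  if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP) {
          /* The port is already started, yet a new Rx queue can
           * still be set up and then started. */
          rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                 rte_eth_dev_socket_id(port_id),
                                 NULL, mb_pool);
          rte_eth_dev_rx_queue_start(port_id, qid);
  }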
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
PMDs have the responsibility of releasing mbufs sent through the xmit
burst function. The NFP PMD attaches those sent mbufs to the Tx ring
structure, and a previously linked mbuf, already transmitted at that
point, is released when its ring descriptor is about to be used again.
Mbufs belonging to a chained mbuf each get their own link to a ring
descriptor, and they are released independently of the mbuf head of
that chain.
The problem is how those mbufs are released when the PMD is stopped or
closed. Instead of releasing them as the xmit functions do, that is,
independently of whether they are part of an mbuf chain, the code calls
rte_pktmbuf_free, which releases not just the mbuf head of a chain but
all of its chained mbufs. When chained mbufs exist, the loop then tries
to release mbufs that have already been released.
This patch fixes the problem by using rte_pktmbuf_free_seg instead.
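A sketch of the corrected drain loop (ring and field names are
illustrative, not the PMD's exact ones):

  /* Each stored mbuf is a single-segment link in the ring, so free
   * it with rte_pktmbuf_free_seg(), which releases one segment,
   * not the whole chain. */
  for (i = 0; i < txq->tx_count; i++) {
          if (txq->txbufs[i].mbuf != NULL) {
                  rte_pktmbuf_free_seg(txq->txbufs[i].mbuf);
                  txq->txbufs[i].mbuf = NULL;
          }
  }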
Fixes: b812daadad0d ("nfp: add Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
There is an option to run a non-netvsc device as a netvsc device, but
only when the "force" parameter is set to 1 on the EAL command line.
Consequently, more than one device may be found matching the "mac"
parameter specifying the device.
Prefer netvsc devices to be scanned before any non-netvsc device, even
when the "force" parameter is set.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
There are now two ways to specify a netvsc device on the EAL command
line: either by interface name or by MAC address.
The user should not specify a netvsc device in more than one way; if a
device is specified in more than one way, the driver stops probing it.
Validate this during driver initialization.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
If the netvsc driver starts in blacklist mode, it does not
automatically probe IP-associated netvsc devices. Therefore, the only
way to probe them is to specify them on the EAL command line, using the
"force" parameter to skip the IP check in the driver.
From now on, the user does not need to add the "force" parameter when
specifying an IP-associated netvsc device on the EAL command line; the
responsibility for the IP check is now in the user's hands.
However, in the absence of any specification, the driver still skips
IP-associated netvsc devices.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Prior to this commit the vdev_netvsc PMD was creating tap and failsafe
devices with long names, such as "net_tap_net_vdev_netvsc0" or
"net_failsafe_net_vdev_netvsc0".
This commit creates tap and failsafe devices with short names such as
"net_tap_netvsc0" or "net_failsafe_netvsc0".
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Add DEV_TX_OFFLOAD_TCP_TSO to failsafe Tx offload default capabilities.
The net result of the failsafe Tx capabilities is the logical AND of
the Tx capabilities of all failsafe sub-devices and failsafe's own
default capabilities.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
When allocating a new mbuf for Rx, the value of m->data_off should be
reset to its default value (RTE_PKTMBUF_HEADROOM) instead of reusing
the previous, undefined value, which could leave the packet with a
headroom that is too small or too large.
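A minimal sketch of the fix at allocation time:

  struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

  if (m != NULL)
          /* Do not trust the recycled value; restore the default
           * headroom before handing the buffer to the HW. */
          m->data_off = RTE_PKTMBUF_HEADROOM;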
Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
Acked-by: Harish Patil <harish.patil@cavium.com>
Packets containing empty segments are dropped by the hypervisor;
prevent this case by skipping empty segments in transmission.
Also drop fully empty mbufs, to be sure that at least one segment is
transmitted for each mbuf.
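A sketch of both checks, as a fragment assumed to sit inside the Tx
burst loop:

  struct rte_mbuf *seg;
  uint32_t bytes = 0;

  for (seg = m; seg != NULL; seg = seg->next)
          bytes += seg->data_len;
  if (bytes == 0) {
          rte_pktmbuf_free(m); /* nothing to send for this mbuf */
          continue;
  }
  for (seg = m; seg != NULL; seg = seg->next) {
          if (seg->data_len == 0)
                  continue; /* never post an empty descriptor */
          /* ... fill a Tx descriptor from seg ... */
  }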
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
When several TCP fragments are contained in a packet that is only one
mbuf segment long, vmxnet3 receives an empty segment, following the
first one, that contains offload information. In the current version,
this segment is propagated as-is to the upper application.
Remove those empty segments directly when receiving buffers; they may
generate unneeded extra processing in the upper application.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Not-so-old variants of vmxnet3 do not provide the MSS value along with
an LRO packet. When this case happens, try to guess the MSS value from
the information at hand.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Add support for IPv6 and LRO, and properly set the packet type in all
supported cases.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
When working on a multi-segment buffer, most bits are set in the last
segment of the buffer. Correctly look at those bits in the EOP part of
the rx_offload function.
Fixes: 2fdd835f992c ("vmxnet3: support jumbo frames")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Offloads are split between the first and last segments of a packet.
Call a single vmxnet3_rx_offload function that contains all offload
operations. This patch does not introduce any functional change.
Pass a vmxnet3_hw as a parameter to the function; it is not presently
used in this patch, but will later be used for TSO offloads.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Rather than parsing the IP header to get the proper ptype to return,
just return RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, which tells the application
that we have an IP packet with an unknown header length.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Yong Wang <yongwang@vmware.com>
During the transition to resurrect flow director on top of rte_flow, mask
handling was removed by mistake.
Fixes: 4c3e9bcdd52e ("net/mlx5: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This will help to bring back the mask handler which was removed when this
feature was rewritten on top of rte_flow.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The memory region is [start, end), so if the memseg of 'end' isn't
allocated yet, the returned memseg will have zero entries and this will
make 'end' zero (nil).
Fixes: c2fe5823224a ("net/mlx4: use virt2memseg instead of iteration")
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The memory region is [start, end), so if the memseg of 'end' isn't
allocated yet, the returned memseg will have zero entries and this will
make 'end' zero (nil).
Fixes: 718e35999c96 ("net/mlx5: use virt2memseg instead of iteration")
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Initialize mbuf->data_off to RTE_PKTMBUF_HEADROOM after allocation.
Without this, the DMA address provided to the HW may not be in sync
with what is indicated to the application in bnxt_rx_pkt.
Fixes: 2eb53b134aae ("net/bnxt: add initial Rx code")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In some cases bnxt_hwrm_cfa_l2_set_rx_mask is called before VNICs are
allocated; the FW returns an error in such cases.
Move bnxt_init_nic to bnxt_dev_init so that the ids are initialized to
an invalid id, and send the command to the FW only with a valid vnic
id.
Fixes: 244bc98b0da7 ("net/bnxt: set L2 Rx mask")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
While creating Tx, Rx, and CQ rings, use the cached doorbell (DB)
address instead of getting it from the PCI memory resource.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The vmxnet3 never attempts link speed negotiation. As a virtual device
the link speed is vague at best. However, it is important for certain
applications, like bonding, to see a consistent link_status. 802.3ad
requires that only links of the same cost (link speed) be enslaved.
Keeping the link status consistent in vmxnet3 avoids races with bonding
enslavement.
Fixes: 1e3a958f40b3 ("ethdev: fix link autonegotiation value")
Cc: stable@dpdk.org
Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Update link-status-related feature documentation items, along with
minor updates in some link-status-related functions.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The vmxnet3 driver supports a receive data ring: a set of small
buffers that are always mapped by the emulation. If a packet fits into
the receive data ring buffer, the emulation delivers the packet via
the receive data ring.
Increasing the receive data ring descriptor size from 128 to 256
showed performance gains as high as 5% for packets smaller than 256
bytes.
Signed-off-by: Shraddha Joshi <jshraddha@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Acked-by: Boon Ang <bang@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
This patch provides a fix for PCI function level reset after an
ungraceful exit from an application. The fix is to enable internal
target read as part of device attach, before getting device information
from the device config space, the device itself, and shared memory. In
addition, add a 200ms delay for the recovery flow to complete.
Fixes: 540a211084a7 ("bnx2x: driver core")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Some values are interpreted without endian conversion and/or without
taking the proper mask into account.
Fixes: 5ef3b79fdfe6 ("net/bnxt: support flow filter ops")
Cc: stable@dpdk.org
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The number of queues provided by the application is not checked
against the parser's supported maximum.
Fixes: 3d821d6fea40 ("net/mlx5: support RSS action flow rule")
Cc: stable@dpdk.org
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>