This patch adds loopback functionality used when the chip is a VF in order
to enable packet transmission between VFs and the PF.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds hardware offloading support for IPv4, UDP and TCP checksum
verification, including inner/outer checksums on supported tunnel types.
It also restores packet type recognition support.
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds hardware offloading support for IPv4, UDP and TCP checksum
calculation, including inner/outer checksums on supported tunnel types.
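For reference, a hedged sketch of how an application typically requests these
offloads per mbuf through the generic mbuf API (struct and flag names follow
the DPDK release this series targets; this is not PMD-internal code):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

static void
request_tx_cksum_offload(struct rte_mbuf *m)
{
        /* Header lengths let the hardware locate the fields to fill. */
        m->l2_len = sizeof(struct ether_hdr);
        m->l3_len = sizeof(struct ipv4_hdr);
        /* Ask for IPv4 and TCP checksum calculation in hardware. */
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}

Depending on the PMD, the L4 checksum field usually has to be primed with the
pseudo-header checksum beforehand, e.g. with rte_ipv4_phdr_cksum().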
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds support for accessing the hardware directly when
handling Rx packets, eliminating the need to use Verbs in the Rx data
path.
Rx scatter support: calculate the number of scatters on the fly
according to the maximum expected packet size.
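A minimal sketch of how the on-the-fly scatter count can be derived from the
mempool's buffer size (illustrative helper, not the actual PMD code):

#include <rte_mbuf.h>

static unsigned int
rx_scatter_count(struct rte_mempool *mp, uint32_t max_rx_pkt_len)
{
        uint32_t seg_size = rte_pktmbuf_data_room_size(mp) -
                            RTE_PKTMBUF_HEADROOM;

        /* Round up so a partially filled last segment is still counted. */
        return (max_rx_pkt_len + seg_size - 1) / seg_size;
}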
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Modify PMD to send single-buffer packets directly to the device
bypassing the Verbs Tx post and poll routines.
Tx gather support: add support for transmitting packets spanning
over multiple buffers.
Take into consideration the number of entries a packet occupies
in the TxQ when setting the report-completion flag of the chip.
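A self-contained sketch of the countdown idea behind that consideration
(field and function names are made up for illustration; the PMD's real code
differs):

#include <stdbool.h>
#include <stdint.h>

struct txq_comp_state {
        int32_t cd;      /* countdown until the next completion request */
        int32_t cd_init; /* reload value */
};

static bool
txq_need_completion(struct txq_comp_state *s, unsigned int nr_entries)
{
        s->cd -= nr_entries; /* entries consumed by the current packet */
        if (s->cd > 0)
                return false;
        s->cd = s->cd_init;
        return true; /* caller sets the chip's report-completion flag */
}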
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Bring back support for automatic RSS with the default flow rules when not
in isolated mode. Balancing is done according to unspecified default
settings, as was the case before this entire rework.
Since the number of queues in an RSS context is limited to power-of-two
values, the number of configured queues is rounded down to the previous
power of two; extra queues are silently discarded. This does not prevent
dedicated flow rules from targeting them.
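For illustration, rounding down to the previous power of two can be done as
follows (generic helper, not the PMD's actual code; __builtin_clz is a
GCC/Clang builtin):

#include <stdint.h>

static uint32_t
round_down_pow2(uint32_t n)
{
        if (n == 0)
                return 0;
        /* Keep only the highest set bit, e.g. 6 -> 4, 8 -> 8. */
        return UINT32_C(1) << (31 - __builtin_clz(n));
}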
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
When part of the RSS hash calculation, UDP packets are discarded (not
received on any queue), likely due to an issue with the kernel
implementation.
Temporarily disable UDP RSS support until this issue is resolved.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This patch dissociates single-queue indirection tables and hash QP objects
from Rx queue structures to relinquish their control to users through the
RSS flow rule action, while simultaneously allowing multiple queues to be
associated with RSS contexts.
Flow rules share identical RSS contexts (hashed fields, hash key, target
queues) to save on memory and other resources. The trade-off is some added
complexity due to reference counter management on RSS contexts.
The QUEUE action is re-implemented on top of an automatically-generated
single-queue RSS context.
The following hardware limitations apply to RSS contexts:
- The number of queues in a group must be a power of two.
- Queue indices must be consecutive, for instance the [0 1 2 3] set is
allowed, however [3 2 1 0], [0 2 1 3] and [0 0 1 1 2 3 3 3] are not.
- The first queue of a group must be aligned to a multiple of the context
size, e.g. if queues [0 1 2 3 4] are defined globally, allowed group
combinations are [0 1] and [2 3]; groups [1 2] and [3 4] are not
supported.
- RSS hash key, while configurable per context, must be exactly 40 bytes
long.
- The only supported hash algorithm is Toeplitz.
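A minimal sketch of validating the constraints above (illustrative code, not
the PMD's actual checks):

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define RSS_KEY_LEN 40 /* bytes, per the limitation above */

static int
rss_ctx_check(const uint16_t *queues, unsigned int n, size_t key_len)
{
        unsigned int i;

        if (n == 0 || (n & (n - 1)) != 0)
                return -EINVAL; /* queue count is not a power of two */
        if (queues[0] % n != 0)
                return -EINVAL; /* first queue not aligned to context size */
        for (i = 1; i != n; ++i)
                if (queues[i] != queues[0] + i)
                        return -EINVAL; /* indices not consecutive */
        if (key_len != RSS_KEY_LEN)
                return -EINVAL; /* hash key must be exactly 40 bytes */
        return 0;
}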
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Device operation callbacks are not supposed to handle a missing private
data structure.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Work queues (WQs) are lower-level than standard queue pairs (QPs). They are
dedicated to one traffic direction and have to be used in conjunction with
indirection tables and special "hash" QPs to get the same level of
functionality.
These extra objects, however, are the building blocks for RSS support brought
by subsequent commits, as a single "hash" QP can manage several WQs through
an indirection table according to a hash algorithm and other parameters.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Since live Tx and Rx queues cannot be reused anymore without being
destroyed first, mbuf ring sizes are fixed and known from the start.
This allows a single allocation for queue data structures and mbuf ring
together, saving space and bringing them closer in memory.
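A minimal sketch of such a combined allocation (the structure below is
illustrative, not the PMD's actual queue layout):

#include <rte_malloc.h>
#include <rte_mbuf.h>
#include <rte_memory.h>

struct rxq_example {
        unsigned int elts_n;        /* number of ring entries */
        struct rte_mbuf *(*elts)[]; /* mbuf ring, right after this struct */
};

static struct rxq_example *
rxq_alloc(unsigned int elts_n, int socket)
{
        size_t size = sizeof(struct rxq_example) +
                      elts_n * sizeof(struct rte_mbuf *);
        struct rxq_example *rxq;

        /* One allocation covers both the queue structure and its ring. */
        rxq = rte_zmalloc_socket("rxq", size, RTE_CACHE_LINE_SIZE, socket);
        if (rxq == NULL)
                return NULL;
        rxq->elts_n = elts_n;
        rxq->elts = (struct rte_mbuf *(*)[])(rxq + 1);
        return rxq;
}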
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
DPDK ensures that setup functions are never called on configured queues,
or only if they have previously been released.
PMDs therefore do not need to deal with the unexpected reconfiguration of
live queues which may fail with no easy way to recover. Dropping support
for this scenario greatly simplifies the code as allocation and setup steps
and checks can be merged.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Tx queue elements allocation function sets rte_errno properly and returns
its negative version. Reassigning this value to rte_errno is thus both
invalid and unnecessary.
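A minimal sketch of the intended pattern (helper name is illustrative):

#include <errno.h>
#include <rte_errno.h>

/* Illustrative callee: sets rte_errno and returns its negative value. */
static int
txq_alloc_elts(unsigned int n)
{
        if (n == 0) {
                rte_errno = EINVAL;
                return -rte_errno;
        }
        return 0;
}

static int
txq_setup(unsigned int n)
{
        int ret = txq_alloc_elts(n);

        if (ret)
                return ret; /* rte_errno is already set; do not reassign it */
        return 0;
}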
Fixes: 9d14b27308 ("net/mlx4: standardize on negative errno values")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Although their "removed" version acts as a safety against unexpected bursts
while queues are being modified by the control path, these callbacks are
set per device instead of per queue. It makes sense to update them during
start/stop/close cycles instead of queue setup.
As a side effect, this commit addresses a bug left over from a prior
commit: bringing the link down causes the "removed" Tx callback to be used,
however the normal callback is not restored when bringing it back up,
preventing the application from sending traffic at all.
Updating callbacks for a link change is not necessary as bringing the
netdevice down is normally enough to prevent traffic from flowing in.
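A hedged sketch of the callback updates, shown for Tx only (Rx is handled the
same way; all names below are placeholders rather than the PMD's real
symbols):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Placeholder burst functions; a real PMD supplies its own. */
static uint16_t
example_tx_burst(void *txq, struct rte_mbuf **pkts, uint16_t n)
{
        (void)txq;
        (void)pkts;
        return n;
}

static uint16_t
example_tx_burst_removed(void *txq, struct rte_mbuf **pkts, uint16_t n)
{
        (void)txq;
        (void)pkts;
        (void)n;
        return 0; /* silently discard bursts while the device is down */
}

static int
example_dev_start(struct rte_eth_dev *dev)
{
        /* ... bring the device up ... */
        dev->tx_pkt_burst = example_tx_burst; /* restore normal datapath */
        return 0;
}

static void
example_dev_stop(struct rte_eth_dev *dev)
{
        dev->tx_pkt_burst = example_tx_burst_removed; /* safe stub */
        /* ... bring the device down ... */
}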
Fixes: 3f75a02719 ("net/mlx4: drop scatter/gather support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Implement promiscuous and all multicast through internal flow rules
automatically generated according to the configured mode.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Give users the ability to create flow rules that match all multicast
traffic. Like promiscuous flow rules, they come with restrictions such as
not allowing additional matching criteria.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This commit brings back VLAN filter configuration support without any
artificial limitation on the number of simultaneous VLANs that can be
configured (previously 127).
Also, since it no longer relies on fixed per-queue arrays for potential
Verbs flow handle storage, this version wastes far less memory (previously
128 * 127 * pointer size, i.e. about 130 kB per Rx queue, only one of which
actually had any use for this room: the RSS parent queue).
The number of internal flow rules generated still depends on the number of
configured MAC addresses times that of configured VLAN filters though.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This commit brings back support for configuring up to 128 MAC addresses on
a port through internal flow rules automatically generated on demand.
Unlike its previous incarnation, the necessary extra flow rule for
broadcast traffic does not consume an entry from the MAC array anymore.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Since flow rule validation and creation have been refactored into a common
two-pass function, having separate callback functions to validate and
convert individual items seems redundant.
The purpose of these item validation functions is to reject partial masks
as those are not supported by hardware, before handing over the item to a
separate function that performs basic sanity checks.
The current approach and related code have the following issues:
- Lack of flow handle context in validation code requires kludges such as
the special treatment reserved to spec-less Ethernet pattern items.
- Lack of useful error reporting; users need as much help as possible to
understand what they did wrong, particularly when they hit hardware
limitations that aren't mentioned by the flow API. Preventing them from
going berserk after getting a generic "item not supported" message for no
apparent reason is mandatory.
- Generic checks should be performed by the caller, not by item-specific
validation functions.
- Mask checks either missing or too lax in some cases (Ethernet, VLAN).
This commit addresses all the above by combining validation and conversion
callbacks as "merge" callbacks that take an additional error context
parameter. Also:
- Support for source MAC address matching is removed as it has no effect.
- Providing an empty mask no longer bypasses the Ethernet specification
check that causes a rule to become promiscuous-like.
- VLAN VIDs must be matched exactly, as matching all VLAN traffic while
excluding non-VLAN traffic is not supported.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Since the flow rule synchronization function mlx4_flow_sync() takes into
account the state of the device (whether it is started), trigger functions
mlx4_flow_start() and mlx4_flow_stop() are redundant. Standardize on
mlx4_flow_sync().
Use this opportunity to enhance this function with better error reporting
as the inability to start the device due to a problem with a flow rule
otherwise results in a nondescript error code.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Since both internal and user-defined flow rules are handled by a common
implementation, flow rule priority overlaps are easier to detect. No need
to restrict their use to isolated mode only.
With this patch, only the lowest priority level remains inaccessible to
users outside isolated mode.
Also, the PMD no longer automatically assigns a fixed priority level to
user-defined flow rules, which means collisions between overlapping rules
matching a different number of protocol layers at a given priority level
won't be avoided anymore (e.g. "eth" vs. "eth / ipv4 / udp").
As a reminder, the outcome of overlapping rules for a given priority level
was, and still is, undefined territory according to API documentation.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
When not in isolated mode, a flow rule is automatically configured by the
PMD to receive traffic addressed to the MAC address of the device. This
somewhat duplicates flow API functionality.
Remove legacy support for internal flow rules to instead handle them
through the flow API implementation.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Creating a flow rule targeting a missing (unconfigured) queue is not
possible. However, nothing really prevents the destruction of a queue with
existing flow rules still pointing at it, except currently the port must be
in a stopped state in order to avoid crashing.
Problem is that the port cannot be restarted if flow rules cannot be
re-applied due to missing queues. This flexibility will be needed by
subsequent work on this PMD.
Given that a PMD cannot decide on its own to remove problematic
user-defined flow rules in order to restart a port, work around this
restriction by making the affected ones drop-like, i.e. rules targeting
nonexistent queues drop packets instead.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Verbs QP and CQ resources for drop flow rules do not need to be permanently
allocated, only when at least one rule needs them.
Besides, struct rte_flow_drop is outside the mlx4 PMD name space and should
never have been defined there. struct rte_flow is currently the only
exception to this rule.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
These functions share a significant amount of code and require extra
internal objects to parse and build flow rule handles.
All this can be simplified by relying directly on the internal rte_flow
structure definition, whose QP pointer (destination Verbs queue) is
replaced by a DPDK queue ID and other properties, making it more versatile
without increasing its size (at least on 64-bit platforms).
This commit also gets rid of a few unnecessary debugging messages.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
These wrappers implement the ability to allocate room for several disparate
objects as a single contiguous allocation while complying with their
respective alignment constraints.
This is usually more efficient than allocating and freeing them
individually if they are not expected to be reallocated with rte_realloc().
A typical use case is when several objects that cannot be dissociated must
be allocated together, as shown in the following example:
struct b {
        ...
        struct d *d;
};

struct a {
        ...
        struct b *b;
        struct c *c;
};

struct mlx4_malloc_vec vec[] = {
        { .size = sizeof(struct a), .addr = &ptr_a, },
        { .size = sizeof(struct b), .addr = &ptr_b, },
        { .size = sizeof(struct c), .addr = &ptr_c, },
        { .size = sizeof(struct d), .addr = &ptr_d, },
};

if (!mlx4_mallocv(NULL, vec, RTE_DIM(vec)))
        goto error;

struct a *a = ptr_a;

a->b = ptr_b;
a->c = ptr_c;
a->b->d = ptr_d;
...
rte_free(a);
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Relying on rte_errno is not necessary where the return value of
rte_flow_error_set() can be used directly.
A related minor change is switching from RTE_FLOW_ERROR_TYPE_HANDLE to
RTE_FLOW_ERROR_TYPE_UNSPECIFIED when no rte_flow handle is involved in the
error, specifically when none is allocated yet.
This commit does not cause any functional change.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
- Remove unnecessary casts.
- Replace consecutive if/else blocks with switch statements.
- Use proper big endian definitions for mask values.
- Make end marker checks of item and action lists less verbose since they
are explicitly documented as being equal to 0.
- Remove unnecessary NULL check on action configuration structure.
This commit does not cause any functional change.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
In several instances, "items" refers either to a flow pattern or a single
item, and "actions" either to the entire list of actions or only one of
them.
The fact the target of a rule (struct mlx4_flow_action) is also named
"action" and item-processing objects (struct mlx4_flow_items) as "cur_item"
("token" in one instance) contributes to the confusion.
Use this opportunity to clarify related comments and remove the unused
valid_actions[] global, whose sole purpose is to be referred to by
item-processing objects as "actions".
This commit does not cause any functional change.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This PMD supports up to 4096 flow rule priority levels (0 to 4095).
Applications were not allowed to use them until now due to overlaps with
the default flows (e.g. MAC address, promiscuous mode).
This is not an issue in isolated mode when such flows do not exist.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Add missing comments and fix those not Doxygen-friendly.
Since the private structure definition is modified, use this opportunity to
add one remaining missing include required by one of its fields
(sys/queue.h for LIST_HEAD()).
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
There is no benefit in having this as a separate function.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
rte_flow_error_set() is a convenient helper to initialize error objects.
Since there is no fundamental reason to prevent applications from using it,
expose it through the public interface after modifying its return value
from positive to negative. This is done for consistency with the rest of
the public interface.
Documentation is updated accordingly.
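With the new return value, a driver callback can simply do the following
(illustrative check, using the standard rte_flow API):

#include <errno.h>
#include <rte_flow.h>

static int
example_item_check(const struct rte_flow_item *item,
                   struct rte_flow_error *error)
{
        if (item->mask != NULL)
                /* Fills *error, sets rte_errno and returns -ENOTSUP. */
                return rte_flow_error_set(error, ENOTSUP,
                                          RTE_FLOW_ERROR_TYPE_ITEM_MASK,
                                          item->mask,
                                          "masks are not supported here");
        return 0;
}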
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Correct two minor issues in the GSO programmer's guide:
- a note is rendered incorrectly in the middle of an unordered list;
this results in the remainder of the list appearing inside the note.
Correct indentation of the note to resolve same.
- two minor visual artifacts are present in the 'three-part-output-segment'
diagram. Remove same.
Fixes: f6010c7655 ("doc: add GSO programmer's guide")
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Increase port id range to 16 bits and remove the unnecessary cast.
Fixes: f8244c6399 ("ethdev: increase port id range")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
port_id in struct lio_device should be widened to uint16_t since port_id
in rte_eth_dev_data is already defined as uint16_t.
Fixes: f8244c6399 ("ethdev: increase port id range")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Some previously applied features were still developed against the older
uint8_t port_id, while port_id has since been widened to uint16_t. This
patch fixes the issue.
Fixes: f8244c6399 ("ethdev: increase port id range")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
vPMD Tx does not set sw_ring's mbuf to NULL after freeing it.
Therefore, in cases where the vector transmit function is in
use, we must use the appropriate index and threshold values
for the queue to free only the unreleased mbufs.
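A hedged sketch of the described release logic (the queue structure below is
a stand-in for illustration, not the actual i40e definitions):

#include <rte_mbuf.h>

struct sw_entry { struct rte_mbuf *mbuf; };

struct tx_queue_example {
        struct sw_entry *sw_ring;
        uint16_t nb_tx_desc;
        uint16_t tx_tail;
        uint16_t tx_next_dd;   /* last descriptor known to be done */
        uint16_t tx_rs_thresh;
};

static void
release_unsent_mbufs(struct tx_queue_example *txq)
{
        /* Only entries between the last completed descriptor and the
         * current tail may still own an mbuf. */
        uint16_t i = (uint16_t)(txq->tx_next_dd - txq->tx_rs_thresh + 1);

        while (i != txq->tx_tail) {
                if (txq->sw_ring[i].mbuf != NULL) {
                        rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
                        txq->sw_ring[i].mbuf = NULL;
                }
                if (++i == txq->nb_tx_desc)
                        i = 0;
        }
}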
Fixes: b4669bb950 ("i40e: add vector Tx")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Remove legacy writes to the ORT/PIT registers from the
i40e_GLQF_reg_init(struct i40e_hw *hw) function.
Latest NVM versions contain all relevant values
and these values should not be overwritten by SW to
maintain driver/firmware compatibility and to avoid
conflicts with dynamic device personalization profiles.
Fixes: f05ec7d77e ("i40e: initialize flow director flexible payload setting")
Cc: stable@dpdk.org
Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This patch adds a test for verifying the bitmap operations.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
librte_sched uses rte_bitmap to manage large arrays of bits in an
optimized way, so moving it to eal/common would allow other libraries
and applications to use it.
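A minimal usage sketch of the relocated API (standard rte_bitmap calls; the
sizing constant and error handling are illustrative):

#include <rte_bitmap.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static int
bitmap_example(void)
{
        uint32_t n_bits = 1024;
        uint32_t pos = 0;
        uint64_t slab = 0;
        uint32_t size = rte_bitmap_get_memory_footprint(n_bits);
        uint8_t *mem = rte_zmalloc("bmp", size, RTE_CACHE_LINE_SIZE);
        struct rte_bitmap *bmp;

        if (mem == NULL)
                return -1;
        bmp = rte_bitmap_init(n_bits, mem, size);
        if (bmp == NULL) {
                rte_free(mem);
                return -1;
        }
        rte_bitmap_set(bmp, 42);
        if (rte_bitmap_get(bmp, 42) != 0) {
                /* bit 42 is set */
        }
        if (rte_bitmap_scan(bmp, &pos, &slab)) {
                /* slab is the 64-bit word holding bit 42, pos its offset */
        }
        rte_bitmap_free(bmp);
        rte_free(mem);
        return 0;
}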
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
A race condition can happen during parallel builds, where a header
might be installed in RTE_OUT/include before CFLAGS is recursively
expanded. This causes GCC to sometimes pick the header path as
SRCDIR/... and sometimes as RTE_OUT/include/... making the build
unreproducible, as the full path is used for the expansion of
__FILE__ and in the DWARF directory listing.
Installing all symlinks before all builds solves the problem. It is
still suboptimal, as the (fixed) path recorded in the DWARF dir
listing will include the user-configurable build output directory,
and thus will result in a different binary between different users
despite all other conditions being equal, but it is a simpler
approach that will anyway be obsolete once the build system is
switched to Meson.
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
In order to achieve reproducible builds, always use the same
order when listing object files to build dependencies lists.
Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
In order to achieve reproducible builds, always use the same
order when listing files for compilation.
Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
In order to achieve fully reproducible builds, always use the same
inclusion order for headers in the Makefiles.
Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>