This patch separates the Tx burst function implementations into
different source files, thus allowing them to be compiled in parallel.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch separates the Tx function implementations into a different
source file as an optional preparation step for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch moves the Tx burst and inline function declarations to a
header file to allow their use from several separate source files, and
as a possible preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch separates the Tx function declarations into a different
header file in preparation for removing their implementations from the
source file, and as an optional preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch separates the Rx function implementations into a different
source file as an optional preparation step for further consolidation
of the Rx burst functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5_rxtx.c file contains many Tx burst functions, each of which is
performance-optimized for a specific set of requested offloads. These
functions are generated from a template function, and they take
significant time to compile simply because a large number of huge
functions are generated in the same file and this compilation cannot be
done in parallel with multiple threads.
Therefore we can split the mlx5_rxtx.c file into several separate files
to allow different functions to be compiled simultaneously.
In this patch, we separate the Rx function declarations into a
different header file in preparation for removing them from the source
file, and as an optional preparation step for further consolidation of
the Rx burst functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The Rx metadata may use metadata register C0 to keep its values. The
same register C0 may also be used by the kernel for source vport value
handling; the kernel uses the upper half of the register, leaving the
lower half for application usage.
In extended metadata mode 1 (the dv_xmeta_en devarg is assigned the
value 1) the metadata width is only 16 bits, but the Rx datapath code
fetched the entire 32-bit value of the metadata register and presented
it to the application. This patch masks the data depending on the
chosen metadata mode.
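A minimal sketch of the idea (not the actual PMD code; the helper name
and parameters are assumptions based on the description above):

    #include <stdint.h>

    /* Hypothetical illustration: mask the 32-bit register C0 value down
     * to the metadata width of the selected dv_xmeta_en mode. */
    static inline uint32_t
    rx_metadata_mask(uint32_t reg_c0, int dv_xmeta_en)
    {
        /* In extended metadata mode 1 only 16 bits belong to the app. */
        uint32_t mask = (dv_xmeta_en == 1) ?
                        UINT32_C(0xffff) : UINT32_C(0xffffffff);

        return reg_c0 & mask;
    }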
Fixes: 6c55b622a9 ("net/mlx5: set dynamic flow metadata in Rx queues")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The issue occurred if mbuf starvation happened in the middle of
segmented packet reception. In such a situation, after releasing the
segments of the packet being received, the code did not advance the
consumer index to the next stride. This caused the wrong segmented
packet data to be received.
The possible error scenario:
- we assume segs_n is 4 and we are receiving 4 segments of a
  multi-segment packet.
- we fail to allocate an mbuf while receiving the 3rd segment, and this
  frees the mbufs of the packet chain we have built. The chain contains
  the 1st and 2nd segments.
- the 1st and 2nd segments of this stride of the Rx queue are refilled
  (in the elts array) with newly allocated mbufs whose data are random
  (the 3rd and 4th segments still contain the valid data of the packet,
  though).
- on the next iteration of stride processing we get the wrong two
  segments of the multi-segment packet.
Hence, we should skip these mbufs in the stride and advance the
consumer index on loop exit.
Fixes: 15a756b637 ("net/mlx5: fix possible NULL dereference in Rx path")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Zhu <zhujiawei12@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds support for the mbuf fast free offload to the
transmit datapath. This offload allows freeing the mbufs on
transmit completion in the most efficient way. It requires that
all mbufs be allocated from the same pool, have a reference
counter value of 1, and have no externally attached buffers.
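As a usage illustration (not part of this patch), an application could
request the offload roughly as follows; the helper name is hypothetical
and the flag spelling follows the DPDK API of that time:

    #include <string.h>
    #include <rte_ethdev.h>

    /* Sketch: request the mbuf fast free Tx offload when configuring a
     * port. Only valid if the application guarantees single-pool,
     * refcnt == 1, non-external mbufs on this port's Tx queues. */
    static int
    configure_with_fast_free(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }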
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The mlx5 PMD supports packet data inlining by pushing data
to the transmit descriptor. If a packet is short enough and all of
its data is inlined, the mbuf is no longer needed for the send
and can be freed.
The mbuf free was performed in the innermost loop building
the transmit descriptors. This patch postpones the mbuf free
transaction to the tx_burst routine exit, optimizing the loop
and allowing bulk freeing of multiple mbufs in a single
pool API call.
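A hedged sketch of the approach (names are illustrative; this is not
the tx_burst code itself):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Collect mbufs whose data were fully inlined and free them in one
     * mempool call on burst exit instead of one by one in the inner
     * loop. All mbufs are assumed to come from the same pool. */
    static void
    free_inlined_mbufs(struct rte_mbuf **inlined, unsigned int n)
    {
        if (n == 0)
            return;
        rte_mempool_put_bulk(inlined[0]->pool, (void **)inlined, n);
    }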
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Use a common function for CQ creation for the rearm queue and the clock queue.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Some Windows compilers consider static_assert() a call to another
function rather than a compiler directive which allows checking type
information at compile time. This only occurs if the static_assert call
appears inside another function's scope. To solve this, move the
static_assert calls to global scope in the files where they are used.
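For illustration, a compile-time check kept at file scope looks like
this (the structure and message below are made up for the example):

    #include <assert.h>
    #include <stdint.h>

    struct example_entry {
        uint32_t lkey;
        uint32_t rsvd;
    };

    /* At global scope every supported compiler treats static_assert()
     * as a compile-time directive, not a function call. */
    static_assert(sizeof(struct example_entry) == 8,
                  "unexpected example_entry size");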
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The following assertion fails in case RTE_ENABLE_ASSERT is enabled:
PANIC in mlx5_tx_handle_completion():
assert "(txq->fcqs[txq->cq_ci & txq->cqe_m] >> 16)
== cqe->wqe_counter" failed
The free completion queue only contains an expected WQE counter if
RTE_LIBRTE_MLX5_DEBUG is enabled as well. Thus enabling
RTE_ENABLE_ASSERT alone causes the assert to fail.
Compile the assert conditionally only if RTE_ENABLE_ASSERT is enabled.
Fixes: 0afacb04f5 ("common/mlx5: remove NDEBUG")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
There are three bugs in the rx_queue_count function:
- One entry may contain several segments, so 'used' must be multiplied
  by the number of segments per entry to properly reflect the queue usage.
- The number of CQEs is equal to (1U << rxq->elts_n) - 1 in SPRQ mode.
  The range returned by rx_queue_count should be the number of entries
  used in the queue, so it ranges from 0 to the maximum number of
  entries in the queue, not that number minus one.
- For MPRQ mode, we need to take into account the number of strides.
Fixes: 8788fec1f2 ("net/mlx5: implement descriptor status API")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The commit d2d5760552 ("net/mlx5: fix Rx queue count calculation") is
incorrect because the count calculation is wrong for the next cqe:
Example:
Compressed Set of packets 1 | Compressed Set of packets 2
C | a | e0 | e1 | e2 | e3 | e4 | e5 | C | a | e0
There are two compressed sets of packets in the queue. For the first
set, n is computed correctly.
But for the second set, n is not computed properly, because the zip
context belongs to the first set. The second set is not yet
decompressed, so there is no context for it.
To fix the issue, we should only use the zip context for the first
series of CQEs.
Fixes: d2d5760552 ("net/mlx5: fix Rx queue count calculation")
Cc: stable@dpdk.org
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The number of descriptors configured is returned to the user
via the rxq_info_get API. This number is incorrect for MPRQ.
For SPRQ this number matches the number of mbufs allocated.
For MPRQ there are fewer external MPRQ buffers, each of which can
hold multiple packets in the strides of this big buffer. Take that
into account and return the number of MPRQ buffers multiplied
by the number of strides in this case.
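The resulting accounting can be sketched as below; the parameter names
are illustrative, not the PMD's structure fields:

    #include <stdint.h>

    /* Report the usable descriptor count: with MPRQ every buffer holds
     * many strides, and each stride can receive a packet. */
    static uint32_t
    rx_desc_count(uint32_t nb_bufs, uint32_t strides_per_buf, int mprq_enabled)
    {
        return mprq_enabled ? nb_bufs * strides_per_buf : nb_bufs;
    }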
Fixes: 26f1bae837 ("net/mlx5: add Rx/Tx burst mode info")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The Rx queue completion consumer index could temporarily get a
wrong value pointing into the middle of a compressed CQE
session. If the application crashed at that moment, the next
queue restart handled the wrong CQEs pointed to by the index and
lost consumer index synchronization, which made a reliable queue
restart impossible.
Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
CQE compression allows us to save PCI bandwidth and improve
performance by compressing several CQEs together into a miniCQE.
But the miniCQE size is only 8 bytes and this limits the ability
to successfully keep the compression session in case of various
traffic patterns.
The current miniCQE format only keeps the compression session alive
in case of uniform traffic with the Hash RSS as the only difference.
There are requests to keep the compression session in case of tagged
traffic by RTE Flow Mark Id and mixed UDP/TCP and IPv4/IPv6 traffic.
Add 2 new miniCQE formats in order to achieve the best performance
for these traffic patterns: Flow Tag and Packet Header miniCQEs.
The existing rxq_cqe_comp_en devarg is modified to specify the
desired miniCQE format. Specifying 2 selects Flow Tag format
for better compression rate in case of RTE Flow Mark traffic.
Specifying 3 selects Checksum format (existing format for MPRQ).
Specifying 4 selects L3/L4 Header format for better compression
rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
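For example, a device could be attached with the Flow Tag miniCQE
format requested via the devarg as sketched below; the PCI address is
illustrative, and the same devargs string can be passed on the EAL
command line instead:

    #include <rte_dev.h>

    /* Hedged example: request rxq_cqe_comp_en=2 (Flow Tag miniCQE
     * format) for a hot-plugged device. */
    static int
    probe_with_flow_tag_minicqe(void)
    {
        return rte_dev_probe("0000:03:00.0,rxq_cqe_comp_en=2");
    }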
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Only the regular rx_burst routine is updated to support split,
because the vectorized ones do not support scatter and MPRQ
does not support split at all.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
MPRQ (Multi-Packet Rx Queue) processes one packet at a time using
simple scalar instructions. MPRQ works by posting a single large buffer
(consisting of multiple fixed-size strides) in order to receive
multiple packets at once into this buffer. An Rx packet is then copied
to a user-provided mbuf, or the PMD attaches the Rx packet to the mbuf
via a pointer to an external buffer.
There is an opportunity to speed up the packet receiving by processing
4 packets simultaneously using SIMD (single instruction, multiple data)
extensions. Allocate mbufs in batches for every MPRQ buffer and process
the packets in groups of 4 until all the strides are exhausted. Then
switch to another MPRQ buffer and repeat the process over again.
The vectorized MPRQ burst routine is engaged automatically when
the mprq_en=1 devarg is specified and vectorization is not disabled
explicitly with the rx_vec_en=0 devarg. There is a limitation:
LRO is not supported, and the scalar MPRQ routine is selected if it is on.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.
The dynamic offset and flag are stored in struct mlx5_rxq_data
to favor cache locality.
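A minimal sketch of reading the timestamp through the dynamic field;
the offset and flag are assumed to have been resolved at setup time
(the PMD keeps them in mlx5_rxq_data, here they are plain parameters):

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Return the Rx timestamp of a packet, or 0 if the dynamic flag
     * indicates no timestamp was set. */
    static uint64_t
    read_rx_timestamp(const struct rte_mbuf *m, int ts_offset, uint64_t ts_flag)
    {
        if (!(m->ol_flags & ts_flag))
            return 0;
        return *RTE_MBUF_DYNFIELD(m, ts_offset, rte_mbuf_timestamp_t *);
    }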
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The functions rte_mbuf_dynfield_lookup() and rte_mbuf_dynflag_lookup()
can return an offset starting with 0 or a negative error code.
In reality the first offsets are probably reserved forever,
but for the sake of strict API compliance,
the checks which considered 0 as an error are fixed.
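The corrected check can be illustrated as follows (a sketch, not the
patched code):

    #include <stdbool.h>
    #include <rte_mbuf_dyn.h>

    /* An offset of 0 returned by the lookup is valid; only a negative
     * return value indicates an error. */
    static bool
    timestamp_field_available(int *offset)
    {
        *offset = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME,
                                           NULL);
        return *offset >= 0;
    }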
Fixes: efa79e68c8 ("net/mlx5: support fine grain dynamic flag")
Fixes: 3172c471b8 ("net/mlx5: prepare Tx queue structures to support timestamp")
Fixes: 0febfcce36 ("net/mlx5: prepare Tx to support scheduling")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Separate Rx state modification to the Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Tx object modification to the Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
There are a few discrepancies in the Rx queue count calculation.
The wrong index is used to calculate the number of used descriptors
in an Rx queue in case of the compressed CQE processing. The global
CQ index is used while we really need an internal index in a single
compressed session to get the right number of elements processed.
The total number of CQEs should be used instead of the number of mbufs
to determine the maximum number of Rx descriptors. These numbers
are not equal for the Multi-Packet Rx queue.
Allow the Rx queue count calculation for all possible Rx bursts since
CQ handling is the same for regular, vectorized, and multi-packet Rx
queues.
Fixes: 26f0488344 ("net/mlx5: support Rx queue count API")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Use C11 atomics with RELAXED ordering instead of the rte_atomic ops
which enforce unnecessary barriers on aarch64.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The 20.08 release deprecated the rte_cio_*mb APIs because these APIs
provide the same functionality as the rte_io_*mb APIs on all platforms,
so remove them and use rte_io_*mb instead.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The ConnectX NICs can transfer data from host memory with two
approaches: provide a pointer to the data buffer, or inline the data,
i.e. copy the data into the transmit descriptor (WQE), either entirely
or only a part of it. In some configurations the NIC hardware requires
a minimal amount of data to be inlined in the descriptor to operate
correctly. And there is a special dynamic flag to hint the PMD not to
inline the data on a per-packet basis (for example, if the buffer is
located on some other device, such as storage or a GPU).
If there was a packet shorter than the minimal inline data length
requested by the NIC hardware and the no-inline hint was set, the PMD
tried to inline the packet with the minimal required length instead of
the actual packet length. This patch adds the missing length check to
the no-inline hint handling branch.
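One plausible reading of the added check, as a hedged sketch with
made-up names (not the PMD's code):

    #include <stdint.h>

    /* When the no-inline hint is set, the hardware still needs its
     * minimal inline portion, but never inline more data than the
     * packet actually contains. */
    static inline uint32_t
    hint_inline_length(uint32_t pkt_len, uint32_t nic_min_inline)
    {
        return pkt_len < nic_min_inline ? pkt_len : nic_min_inline;
    }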
Fixes: cacb44a099 ("net/mlx5: add no-inline Tx flag")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Several source files include Verbs header files as in (1). These source
files will not compile under non-Linux operating systems. This commit
removes this inclusion in two cases:
Case 1: There is no usage of ibv_* or mlx5dv_* symbols in the source
file so the inclusion in (1) can be safely removed.
Case 2: Verbs symbols are used. Please note that the inclusion in (1)
already appears in file linux/mlx5_glue.h (which represents the
interface to the rdma-core library). Therefore, replace (1) in the
source file with (2). Under non-Linux operating systems, file
mlx5_glue.h will not include (1).
(1)
#include <infiniband/verbs.h>
#include <infiniband/mlx5dv.h>
(2)
#include <mlx5_glue.h>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The ConnectX-6 Dx supports timestamps in various formats; the new
realtime format is introduced, where the upper 32-bit word of the
timestamp contains the UTC seconds and the lower 32-bit word contains
the nanoseconds. This patch detects which format is configured in the
NIC and performs the conversion accordingly.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch adds send scheduling on timestamps to the tx_burst
routine template. The feature is controlled by a static configuration
flag; the actual routines supporting the new feature are generated
from this updated template.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
A new static control flag is introduced to control routine
generation from the template, enabling the scheduling on timestamps.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
To provide packet send scheduling on the mbuf timestamp, the Tx
queue must be attached to the same UAR as the Clock Queue.
The UAR is a special hardware-related resource mapped to host
memory that provides the doorbell registers; assigning a UAR
to the queue being created is possible via the DevX API only.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The 'restrict' keyword is recognized in C99, while the type qualifier
'__restrict' compiles fine in C at all language levels. This patch
replaces the existing 'restrict' with '__rte_restrict', which
is a common wrapper supported by all compilers.
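An example of the replacement in a function signature; __rte_restrict
comes from rte_common.h and expands to the proper qualifier for the
compiler in use (the function itself is made up for illustration):

    #include <stddef.h>
    #include <stdint.h>
    #include <rte_common.h>

    /* The qualifier tells the compiler dst and src never alias,
     * enabling the same optimizations as the C99 'restrict' keyword. */
    static void
    copy_words(uint64_t *__rte_restrict dst,
               const uint64_t *__rte_restrict src, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++)
            dst[i] = src[i];
    }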
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The TCP checksum includes the IPv4 pseudo-header checksum and the L3
payload checksum, which covers the TCP header and TCP payload.
When mlx5 LRO is enabled, HW will calculate the TCP payload
checksum; the PMD needs to complete the IPv4 pseudo-header checksum
and the TCP header checksum.
The mlx5_lro_update_tcp_hdr function completes the TCP header
checksum, but it uses the lower 4 bits of the data-offset
field in the TCP header to get the whole TCP header length, which
leads to a wrong TCP header checksum calculation.
Update the code to use the upper 4 bits of the data-offset field
instead of the lower 4 bits.
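The corrected length computation can be illustrated like this (a
sketch, not the patched function):

    #include <rte_tcp.h>

    /* The upper 4 bits of data_off hold the TCP header length in
     * 32-bit words; the lower 4 bits are reserved and must not be used. */
    static inline uint8_t
    tcp_header_len(const struct rte_tcp_hdr *tcp)
    {
        return (tcp->data_off & 0xf0) >> 2; /* words to bytes */
    }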
Fixes: e4c2a16eb1 ("net/mlx5: handle LRO packets in Rx queue")
Cc: stable@dpdk.org
Signed-off-by: Dong Zhou <dongz@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The Legacy MPW (multi-packet write) should not be engaged implicitly.
We should exclude this function from a Tx burst routine selection
process unless it is requested specifically by setting the txq_mpw_en
devarg. Exclude this function from the selection process the same way
it is done for the Enhanced MPW in the mlx5_select_tx_function()
routine.
Fixes: eb8121ab9d ("net/mlx5: introduce Tx burst routine template")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The assert that checks whether there is enough room for the
whole packet minus the headroom data is written incorrectly.
The check should be negated in order to work properly.
Fixes: bd0d5930bf ("net/mlx5: enable MPRQ multi-stride operations")
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Using a global mbuf dynamic field for metadata incurs some
performance penalty on the datapath. Store this information in
the Rx queue descriptor for better cache locality.
Fixes: a18ac61133 ("net/mlx5: add metadata support to Rx datapath")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Refactor the common memory btree and cache management into the common
driver. Replace some input parameters of the MR APIs with more common
data structures like PD, port_id, share_cache, ... so that multiple
PMDs can use those MR APIs.
Modify the mlx5 net PMD to use the MR management APIs from the common driver.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Refactor the common multi-process handling code from the net PMD into
the common driver. Use the tuple mp_id{name, port_id} as the standard
input parameter for all multi-process IPC APIs instead of rte_eth_dev.
Modify the net PMD to use the multi-process APIs from the common driver.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The multi-stride operations now allow reducing the stride size
while supporting Jumbo frames. That means it is possible
to have mbufs configured with a size smaller than the whole
received packet. It is not an issue during normal MPRQ operations
since we attach external buffers instead of copying the data
into the mbuf itself, but it is not the case in "emergency mode"
when we have to copy every packet because no more external
mbufs are available. Assemble a multi-segment packet to overcome
this issue when scatter mode is enabled; drop the packet if it is not.
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
MPRQ feature should be updated to allow a packet to be received
into multiple strides in order to support the MTU exceeding 8KB.
Special care is needed to prevent the headroom corruption in the
multi-stride mode since the headroom space is borrowed by the PMD
from the tail of the preceding stride. Copy the whole packet into
a separate mbuf in this case or just the overlapping data if the
Rx scattering is supported by an application.
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
There is a non-optimal check of whether a doorbell is needed in the
mlx5_tx_handle_completion() function. Advancing a copy of the txq
consumer index and comparing this copy with the initial value causes
unnecessary memory loads and hurts performance. It is better to
have a simple small boolean variable for this purpose. That allows
eliminating all the excessive memory operations on the txq consumer
index and restores the performance of the Tx completions.
Fixes: 1fd9af05e4 ("net/mlx5: update Tx error handling routine")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The devices of the ConnectX family may have a two-letter suffix.
Such a suffix is preceded by a space and the second letter is lowercase:
- ConnectX-4 Lx
- ConnectX-5 Ex
- ConnectX-6 Dx
The uppercase spelling of the device family name BlueField is also fixed.
The lists of supported devices are fixed accordingly.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The routine sending packets with the Multi-Packet Write method assigns
the wqe_last variable with the transmit descriptor (WQE, work queue
entry) being built. If the send queue is close to full, the WQE has no
data yet (we are trying to put the first packet) and there is not
enough space in the descriptor for the next packet, the WQE is
discarded and the stored wqe_last value becomes invalid, pointing to
the discarded WQE.
The mlx5_tx_burst_request_completion() routine might set the completion
request flags in the WQE pointed to by wqe_last. This is safe by
itself, but the next mlx5_tx_burst call uses that WQE as the first free
one, so the completion request flags might be overwritten and the
completion request lost, causing a transmit datapath malfunction.
Fixes: 8b581c690a ("net/mlx5: move Tx complete request routine")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
To provide better PCIe bandwidth utilization, the ConnectX-4 Lx
NIC supports multi-packet write (MPW) sessions, allowing multiple
packets to be packed into one descriptor (WQE). This is a legacy
feature and it has some limitations on the packets and data
description segments. To provide the best performance, all inlined
packets must be put into a shared data segment and the total length
of the MPW session must be limited. The limit is controlled with the
txq_inline_mpw devarg.
Fixes: 82e75f8323 ("net/mlx5: fix legacy multi-packet Tx descriptors")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Get burst mode information for Rx/Tx queues in mlx5.
Provide callback functions to show this information in
the "show rxq info" and "show txq info" output.
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>