33763 Commits

Gerry Gribbon
383049f179 regex/mlx5: forbid changing maximum match number
Added a check so the user gets an error if they try to change the
nb_max_matches value when using rte_regexdev_configure().

Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-10-09 16:33:32 +02:00
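
As an illustration, a minimal configuration sketch (field names taken
from rte_regexdev.h; treat this as a sketch rather than the driver's
exact usage):

    #include <rte_regexdev.h>

    /* Keep nb_max_matches at the value reported by the device; with
     * this patch, mlx5 rejects any attempt to change it. */
    static int
    configure_regex_dev(uint8_t dev_id)
    {
        struct rte_regexdev_info info;
        struct rte_regexdev_config cfg = { 0 };

        if (rte_regexdev_info_get(dev_id, &info) != 0)
            return -1;

        cfg.nb_queue_pairs = 1;
        cfg.nb_groups = 1;
        cfg.nb_max_matches = info.max_matches; /* device default */

        return rte_regexdev_configure(dev_id, &cfg);
    }
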
Gerry Gribbon
ab74680160 regex/mlx5: support combined ROF file
Added support for parsing a combined ROF file to locate binary ROF
data compatible with the BlueField hardware being run on.

Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-10-09 16:27:19 +02:00
Gerry Gribbon
60ffb0d72f regex/mlx5: support stop on first match
Handle flag RTE_REGEX_OPS_REQ_STOP_ON_MATCH_F.

Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-10-09 15:16:34 +02:00
Gerry Gribbon
b6aceada08 app/regex: add match mode option
Allows specifying the match mode to be used.

Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-10-09 15:11:58 +02:00
Gerry Gribbon
392ac62b73 app/regex: display response flags value
Allows the application user to see the response flags.

Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-10-09 15:00:22 +02:00
Gerry Gribbon
70f1ea713f regexdev: add maximum number of mbuf segments
Allows the application to query the maximum number of mbuf segments
that can be chained together.

Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-10-09 14:54:30 +02:00
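
A short sketch of the application-side query; the max_segs field name
is an assumption about what this patch adds to rte_regexdev_info:

    #include <rte_regexdev.h>

    /* Query the segment-chaining limit before building multi-segment
     * scan requests. max_segs is assumed to be the new info field. */
    static uint16_t
    regex_max_segs(uint8_t dev_id)
    {
        struct rte_regexdev_info info;

        if (rte_regexdev_info_get(dev_id, &info) != 0)
            return 0;
        return info.max_segs;
    }
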
Shijith Thotton
b1ae367ab8 drivers: mark SW PMDs to support disabling IOVA as PA
Enabled software PMDs in the IOVA-as-PA disabled build, as they work
with IOVA as VA.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-09 13:14:57 +02:00
Shijith Thotton
6771216c2f drivers: mark cnxk to support disabling IOVA as PA
Enabled the flag pmd_supports_disable_iova_as_pa in the cnxk driver
build files, as these drivers work with IOVA as VA. Updated the cn9k
and cn10k SoC build configurations to disable the IOVA-as-PA build by
default.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-09 13:14:57 +02:00
Shijith Thotton
5812b32773 mbuf: move next pointer to first cache line if PA disabled
Swapped the positions of the mbuf next pointer and the second dynamic
field (dynfield2) if the build is configured to disable IOVA as PA.
This moves the mbuf next pointer to the first cache line.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-09 13:14:57 +02:00
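
A condensed sketch of the resulting conditional layout (the struct here
is a stand-in, not the real rte_mbuf, and RTE_IOVA_IN_MBUF is assumed
to be the macro the build system sets to 0 or 1 for this option; see
rte_mbuf_core.h for the authoritative definition):

    #include <stdint.h>

    typedef uint64_t rte_iova_t; /* stand-in for the DPDK typedef */

    struct mbuf_layout_sketch {
        void *buf_addr;                  /* first cache line */
    #if RTE_IOVA_IN_MBUF
        rte_iova_t buf_iova;             /* PA build: IOVA kept in mbuf */
    #else
        struct mbuf_layout_sketch *next; /* VA-only build: next pointer
                                          * moves to the first cache line */
    #endif
        /* ... remaining first-cache-line fields ... */
    #if RTE_IOVA_IN_MBUF
        struct mbuf_layout_sketch *next; /* second cache line, as before */
    #else
        uint64_t dynfield2;              /* takes the old next slot */
    #endif
    };
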
Shijith Thotton
03b57eb7ab mbuf: add second dynamic field member
If IOVA as PA is disabled during the build, the mbuf physical address
field is undefined. This space is used to add the second dynamic field.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-09 13:14:57 +02:00
Shijith Thotton
a986c2b797 build: add option to configure IOVA mode as PA
IOVA mode in DPDK is either PA or VA.
The new build option enable_iova_as_pa configures the mode to PA
at compile time.
By default, this option is enabled.
If the option is disabled, only drivers which support it are enabled.
A supporting driver can set the flag pmd_supports_disable_iova_as_pa
in its build file.

The mbuf structure holds the physical (PA) and virtual (VA) addresses.
If IOVA as PA is disabled at compile time, the PA field (buf_iova)
of the mbuf is redundant, as it is the same as the VA, and is replaced
by a dummy field.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-09 13:14:52 +02:00
Shijith Thotton
396839128b test/dma: use function to get mbuf data IOVA address
Used the rte_mbuf_data_iova API to get the physical address of mbuf data.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-09 11:37:41 +02:00
Shijith Thotton
e811e2d76f mbuf: add helper to get/set IOVA address
Added the APIs rte_mbuf_iova_set and rte_mbuf_iova_get to set and get
the physical address of an mbuf, respectively. Updated applications
and libraries to use them.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-08 23:58:26 +02:00
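
For example, a minimal usage sketch of the new accessors:

    #include <rte_mbuf.h>

    /* Use the accessors instead of touching buf_iova directly, so the
     * code still builds when IOVA as PA is disabled. */
    static void
    copy_iova(struct rte_mbuf *dst, const struct rte_mbuf *src)
    {
        rte_mbuf_iova_set(dst, rte_mbuf_iova_get(src));
    }
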
Morten Brørup
a2833ecc5e mempool: fix get objects from mempool with cache
A flush threshold for the mempool cache was introduced in DPDK version
1.3, but rte_mempool_do_generic_get() was not completely updated back
then, and some inefficiencies were introduced.

Fix the following in rte_mempool_do_generic_get():

1. The code that initially screens the cache request was not updated
with the change in DPDK version 1.3.
The initial screening compared the request length to the cache size,
which was correct before, but became irrelevant with the introduction of
the flush threshold. E.g. the cache can hold up to flushthresh objects,
which is more than its size, so some requests were not served from the
cache, even though they could be.
The initial screening has now been corrected to match the initial
screening in rte_mempool_do_generic_put(), which verifies that a cache
is present, and that the length of the request does not overflow the
memory allocated for the cache.

This bug caused a major performance degradation in scenarios where the
application burst length is the same as the cache size. In such cases,
the objects were never fetched from the mempool cache, regardless of
whether they could have been.
This scenario occurs e.g. if an application has configured a mempool
with a size matching the application's burst size.

2. The function is a helper for rte_mempool_generic_get(), so it must
behave according to the description of that function.
Specifically, objects must first be returned from the cache,
subsequently from the backend.
After the change in DPDK version 1.3, this was not the behavior when
the request was partially satisfied from the cache; instead, the objects
from the backend were returned ahead of the objects from the cache.
This bug degraded application performance on CPUs with a small L1 cache,
which benefit from having the hot objects first in the returned array.
(This is probably also the reason why the function returns the objects
in reverse order, which it still does.)
Now, all code paths first return objects from the cache, subsequently
from the backend.

The function was not behaving as described (by the function using it)
nor as expected by applications using it. This in itself is also a bug.

3. If the cache could not be backfilled, the function would attempt
to get all the requested objects from the backend (instead of only the
number of requested objects minus the objects available in the cache),
and the function would fail if that failed.
Now, the first part of the request is always satisfied from the cache,
and if the subsequent backfilling of the cache from the backend fails,
only the remaining requested objects are retrieved from the backend.

The function would fail despite there being enough objects in the
cache plus the common pool.

4. The code flow for satisfying the request from the cache was slightly
inefficient:
The likely code path where the objects are simply served from the cache
was treated as unlikely. Now it is treated as likely.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
2022-10-08 22:52:51 +02:00
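
A condensed outline of the corrected flow (illustrative only, not the
exact DPDK source; the bypass threshold, statistics and the
reverse-order copy from the cache are simplified away):

    #include <string.h>
    #include <rte_mempool.h>

    /* Outline of the fixed rte_mempool_do_generic_get() order:
     * serve from the cache first, then the backend for the rest. */
    static int
    generic_get_outline(struct rte_mempool *mp, void **objs,
                        unsigned int n, struct rte_mempool_cache *cache)
    {
        unsigned int from_cache, remaining;
        int ret;

        /* Illustrative bypass; the real check guards the bounds of
         * the memory allocated for the cache. */
        if (cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE)
            return rte_mempool_ops_dequeue_bulk(mp, objs, n);

        /* 1. Hot objects from the cache first (the likely path). */
        from_cache = RTE_MIN(n, (unsigned int)cache->len);
        memcpy(objs, &cache->objs[cache->len - from_cache],
               from_cache * sizeof(void *));
        cache->len -= from_cache;

        remaining = n - from_cache;
        if (remaining == 0)
            return 0;

        /* 2. Only the remainder from the backend, so the request no
         * longer fails when cache + backend together can serve it. */
        ret = rte_mempool_ops_dequeue_bulk(mp, &objs[from_cache],
                                           remaining);
        if (ret < 0)
            cache->len += from_cache; /* roll back the cache part */
        return ret;
    }
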
Hanumanth Pothula
4daf12f442 net/cnxk: support multiple mbuf pools per Rx queue
Presently, the HW is programmed to receive packets only from the LPB
pool, making all packets arrive from the LPB pool.

But CNXK HW supports two pools:
 - SPB -> packets with smaller size (less than 4K)
 - LPB -> packets with bigger size (greater than 4K)

This patch enables the multiple mempool capability: the pool is
selected based on the packet's length. So, basically, the PMD programs
the HW to receive packets from both the SPB and LPB pools based on the
packet's length.

This is achieved by enabling the Rx multiple mempool offload,
RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL. This allows the application to pass
more than one pool (in our case two) to the driver, with different
segment (packet) lengths, which helps the driver configure both
pools based on the segment lengths.

This is often useful for saving memory, as the application can create
a separate pool to steer packets of a specific size, thus enabling
effective use of memory.

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
2022-10-08 22:37:45 +02:00
Hanumanth Pothula
458485eb21 ethdev: support multiple mbuf pools per Rx queue
Some HW has support for choosing memory pools based on the packet's
size.

This is often useful for saving memory, as the application can create
a separate pool to steer packets of a specific size, thus enabling
more efficient usage of memory.

For example, let's say the HW has a capability of three pools:
 - pool-1 size is 2K
 - pool-2 size is > 2K and < 4K
 - pool-3 size is > 4K
Here,
 - pool-1 can accommodate packets with sizes < 2K
 - pool-2 can accommodate packets with sizes > 2K and < 4K
 - pool-3 can accommodate packets with sizes > 4K

With the multiple mempool capability enabled in SW, an application may
create three pools of different sizes and pass them to the PMD,
allowing the PMD to program the HW based on the packet lengths:
packets smaller than 2K are received on pool-1, packets with lengths
between 2K and 4K are received on pool-2, and packets greater than 4K
are received on pool-3.

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-10-08 18:39:02 +02:00
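
A sketch of the application-side usage, assuming the new rte_eth_rxconf
fields are named rx_mempools and rx_nmempool as described above:

    #include <rte_ethdev.h>

    /* Attach two pools of different element sizes to one Rx queue;
     * the PMD picks a pool per packet based on its length. */
    static int
    setup_multi_pool_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
                         struct rte_mempool *small_pool,
                         struct rte_mempool *large_pool)
    {
        struct rte_mempool *pools[2] = { small_pool, large_pool };
        struct rte_eth_rxconf rxconf = { 0 };

        rxconf.rx_mempools = pools;
        rxconf.rx_nmempool = 2;

        /* The mb_pool argument is NULL: buffers come from the
         * rx_mempools array instead. */
        return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, NULL);
    }
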
Andrew Rybchenko
b7fc7c5366 ethdev: factor out helper function to check Rx mempool
Avoid duplicating the Rx mempool check logic.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-10-08 18:34:55 +02:00
Dariusz Sosnowski
3730502219 app/flow-perf: add hairpin queue memory config
This patch adds the hairpin-conf command line parameter to the
flow-perf application. The hairpin-conf parameter takes a hexadecimal
bitmask with bits having the following meaning:

- Bit 0 - Force memory settings of hairpin RX queue.
- Bit 1 - Force memory settings of hairpin TX queue.
- Bit 4 - Use locked device memory for hairpin RX queue.
- Bit 5 - Use RTE memory for hairpin RX queue.
- Bit 8 - Use locked device memory for hairpin TX queue.
- Bit 9 - Use RTE memory for hairpin TX queue.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
2022-10-08 18:30:50 +02:00
Dariusz Sosnowski
230951555f app/testpmd: add hairpin queue memory modes
This patch extends the hairpin-mode command line option of the testpmd
application with the ability to configure whether an Rx/Tx hairpin
queue should use locked device memory or RTE memory.

For the purposes of this configuration, the following bits of the
32-bit hairpin-mode value are reserved:

- Bit 8 - If set, then force_memory flag will be set for hairpin RX
  queue.
- Bit 9 - If set, then force_memory flag will be set for hairpin TX
  queue.
- Bits 12-15 - Memory options for hairpin Rx queue:
    - Bit 12 - If set, then use_locked_device_memory will be set.
    - Bit 13 - If set, then use_rte_memory will be set.
    - Bit 14 - Reserved for future use.
    - Bit 15 - Reserved for future use.
- Bits 16-19 - Memory options for hairpin Tx queue:
    - Bit 16 - If set, then use_locked_device_memory will be set.
    - Bit 17 - If set, then use_rte_memory will be set.
    - Bit 18 - Reserved for future use.
    - Bit 19 - Reserved for future use.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
2022-10-08 18:30:50 +02:00
Dariusz Sosnowski
f2d43ff54d net/mlx5: allow hairpin Rx queue in locked memory
This patch adds a capability to place a hairpin Rx queue in locked
device memory. This capability is equivalent to storing the hairpin
RQ's data buffers in locked internal device memory.

Hairpin Rx queue creation is extended with a request that the RQ be
allocated in locked internal device memory. If the allocation fails
and the force_memory hairpin configuration is set, then hairpin queue
creation (and, as a result, device start) fails. If force_memory is
unset, the PMD will fall back to allocating memory for the hairpin RQ
in unlocked internal device memory.

To allow such an allocation, the user must set the
HAIRPIN_DATA_BUFFER_LOCK flag in FW using the mlxconfig tool.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-08 18:30:50 +02:00
Dariusz Sosnowski
7274b41756 net/mlx5: allow hairpin Tx queue in host memory
This patch adds a capability to place a hairpin Tx queue in host
memory managed by DPDK. This capability is equivalent to storing the
hairpin SQ's WQ buffer in host memory.

Hairpin Tx queue creation is extended with allocating a memory buffer
of the proper size (calculated from the required number of packets and
the WQE BB size advertised in the HCA capabilities).

The force_memory flag of the hairpin queue configuration is also
supported. If it is set and:

- allocation of the memory buffer fails,
- or hairpin SQ creation fails,

then device start will fail. If it is unset, the PMD will fall back to
creating the hairpin SQ with the WQ buffer located in unlocked device
memory.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-08 18:30:50 +02:00
Dariusz Sosnowski
f9fe5a5bb1 common/mlx5: add hairpin RQ buffer type capabilities
This patch adds a new HCA capability related to hairpin RQs. This new
capability, hairpin_data_buffer_locked, indicates whether the HCA
supports locking the data buffer of a hairpin RQ in the ICMC
(Interconnect Context Memory Cache).

The struct used to define the RQ configuration (RQ context) is
extended with the hairpin_data_buffer_type field, which configures the
data buffer for a hairpin RQ. It can take the following values:

- MLX5_RQC_HAIRPIN_DATA_BUFFER_TYPE_UNLOCKED_INTERNAL_BUFFER - hairpin
  RQ's data buffer is stored in unlocked memory in ICMC.
- MLX5_RQC_HAIRPIN_DATA_BUFFER_TYPE_LOCKED_INTERNAL_BUFFER - hairpin
  RQ's data buffer is stored in locked memory in ICMC.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-08 18:22:18 +02:00
Dariusz Sosnowski
e58c372d76 common/mlx5: add hairpin SQ buffer type capabilities
This patch extends the HCA_CAP and SQ Context structs available in the
PRM. These fields allow checking whether the NIC supports storing a
hairpin SQ's WQ buffer in host memory, and configuring such memory
placement.

HCA capabilities are extended with the following fields:

- hairpin_sq_wq_in_host_mem - If set, then the NIC supports using host
memory as backing storage for a hairpin SQ's WQ buffer.
- hairpin_sq_wqe_bb_size - Indicates the required size of SQ WQE basic
block.

The SQ Context is extended with hairpin_wq_buffer_type, which informs
the NIC where the SQ's WQ buffer will be stored. This field can take the
following values:

- MLX5_SQC_HAIRPIN_WQ_BUFFER_TYPE_INTERNAL_BUFFER - WQ buffer will be
  stored in unlocked device memory.
- MLX5_SQC_HAIRPIN_WQ_BUFFER_TYPE_HOST_MEMORY - WQ buffer will be stored
  in host memory. Buffer is provided by PMD.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-08 18:22:08 +02:00
Dariusz Sosnowski
bc705061cb ethdev: introduce hairpin memory capabilities
Before this patch, implementation details and configuration of hairpin
queues were decided internally by the PMD. Applications had no control
over the configuration of Rx and Tx hairpin queues, apart from the
number of descriptors, explicit Tx flow mode and disabling automatic
binding.
This patch addresses that by adding:

- Hairpin queue capabilities reported by PMDs.
- New configuration options for Rx and Tx hairpin queues.

The main goal of this patch is to allow applications to provide
configuration hints regarding the placement of hairpin queues.
These hints specify whether buffers of hairpin queues should be placed
in host memory or in dedicated device memory. Different memory options
may have different performance characteristics, and the hairpin
configuration should be fine-tuned to the specific application and
use case.

This patch introduces new hairpin queue configuration options through
the rte_eth_hairpin_conf struct, allowing the memory configuration of
Rx and Tx hairpin queues to be tuned. The hairpin configuration is
extended with the
following fields:

- use_locked_device_memory - If set, PMD will use specialized on-device
  memory to store RX or TX hairpin queue data.
- use_rte_memory - If set, PMD will use DPDK-managed memory to store RX
  or TX hairpin queue data.
- force_memory - If set, the PMD will be forced to use the provided
  memory settings. If no appropriate resources are available, then
  device start will fail. If unset and no resources are available, the
  PMD will fall back to using the default type of resource for the
  given queue.

If the application chooses to use the PMD default memory
configuration, all of these flags should remain unset.

Hairpin capabilities are also extended to allow verifying support for
a given hairpin memory configuration. Struct rte_eth_hairpin_cap is
extended with two additional fields of type rte_eth_hairpin_queue_cap:

- rx_cap - memory capabilities of hairpin RX queues.
- tx_cap - memory capabilities of hairpin TX queues.

Struct rte_eth_hairpin_queue_cap exposes whether a given queue type
supports the use_locked_device_memory and use_rte_memory flags.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
2022-10-08 18:22:01 +02:00
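
A sketch of how an application could use the new fields; the names
follow the description above, so treat this as a sketch under those
assumptions rather than verified API usage:

    #include <rte_ethdev.h>

    /* Request locked device memory for a hairpin Rx queue, and insist
     * on it: device start fails rather than silently falling back. */
    static int
    setup_locked_hairpin_rxq(uint16_t port_id, uint16_t qid,
                             uint16_t nb_desc, uint16_t peer_port,
                             uint16_t peer_queue)
    {
        struct rte_eth_hairpin_cap cap;
        struct rte_eth_hairpin_conf conf = { 0 };

        if (rte_eth_dev_hairpin_capability_get(port_id, &cap) != 0 ||
            !cap.rx_cap.locked_device_memory)
            return -1;

        conf.peer_count = 1;
        conf.peers[0].port = peer_port;
        conf.peers[0].queue = peer_queue;
        conf.use_locked_device_memory = 1;
        conf.force_memory = 1;

        return rte_eth_rx_hairpin_queue_setup(port_id, qid, nb_desc,
                                              &conf);
    }
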
Zhichao Zeng
70888c61d9 net/iavf: fix tunnel TSO offload
This patch fixes the issue of tunnel TSO not being enabled, simplifies
the logic of calculating the 'Tx Buffer Size' of the data descriptor
with IPSec, and fixes the handling of mbuf sizes exceeding the Tx
descriptor hardware limit (1B-16KB), which causes malicious behavior
on the NIC.

Fixes: 1e728b01120c ("net/iavf: rework Tx path")

Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2022-10-08 18:04:46 +02:00
Qi Zhang
27b7bdae1d net/ice: fix DDP package init
ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED and
ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED should not be treated as
a DDP package init failure. Use ice_is_init_pkg_successful
to check the return value of ice_copy_and_init_pkg.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Zhimin Huang <zhiminx.huang@intel.com>
2022-10-08 16:23:14 +02:00
Ciara Loftus
1eb1846b1a net/af_xdp: make compatible with libbpf 0.8.0
libbpf v0.8.0 deprecates the bpf_get_link_xdp_id() and
bpf_set_link_xdp_fd() functions. Use meson to detect if
bpf_xdp_attach() is available and if so, use the recommended
replacement functions bpf_xdp_query_id(), bpf_xdp_attach()
and bpf_xdp_detach().

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-07 19:37:24 +02:00
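
The shape of the switch in C (a sketch; RTE_NET_AF_XDP_LIBBPF_XDP_ATTACH
stands for whatever macro the meson probe defines when bpf_xdp_attach()
is found in libbpf):

    #include <bpf/libbpf.h>
    #include <linux/if_link.h>

    /* Pick the libbpf >= 0.8.0 attach API when the build probe found
     * it, otherwise fall back to the deprecated call. */
    #ifdef RTE_NET_AF_XDP_LIBBPF_XDP_ATTACH
    static int
    link_xdp_prog(int ifindex, int prog_fd)
    {
        return bpf_xdp_attach(ifindex, prog_fd,
                              XDP_FLAGS_UPDATE_IF_NOEXIST, NULL);
    }
    #else
    static int
    link_xdp_prog(int ifindex, int prog_fd)
    {
        return bpf_set_link_xdp_fd(ifindex, prog_fd,
                                   XDP_FLAGS_UPDATE_IF_NOEXIST);
    }
    #endif
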
Andrew Rybchenko
5ff3dbe6ce net/af_xdp: add log on XDP program removal failures
Make it visible in the logs if something goes wrong during XDP
program removal.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-07 19:37:24 +02:00
Andrew Rybchenko
0ed0bc3834 net/af_xdp: avoid version-based check for program load
Version-based checks are bad; it is better to check for the required
functions. Check for bpf_object__next_program() in this case, since
it appears last in libbpf among the functions used to load a program
without bpf_prog_load(), which is deprecated in libbpf v0.7.0.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-07 19:37:24 +02:00
Andrew Rybchenko
e024c7e838 net/af_xdp: avoid version-based check for shared UMEM
Check for the xsk_socket__create_shared() function instead.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-07 19:37:24 +02:00
Andrew Rybchenko
f76dc44ded net/af_xdp: make clear which libxdp version is required
Include the checked libxdp version in the driver build skip reason.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-07 19:37:24 +02:00
Andrew Rybchenko
50b855fc47 net/af_xdp: move XDP library presence flag setting
RTE_NET_AF_XDP_LIBXDP is a conditional to include xdp/xsk.h and should
be set as soon as we know that the header is present.
RTE_NET_AF_XDP_SHARED_UMEM is one of the conditions to use
xsk_socket__create_shared().
Neither depends on the presence of libbpf or bpf/bpf.h.

Since the else branch below returns an error, there are no functional
changes, just a style change which will help further rework.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-07 19:37:24 +02:00
Gagandeep Singh
0bf99a02cc net/dpaa: fix buffer freeing in slow path
If there is any error in a packet, or the taildrop feature is enabled,
the HW can reject those packets and put them in the error queue. The
driver polls this error queue to free the buffers.
The DPAA driver has an issue while freeing these rejected buffers. In
the case of scatter-gather packets, it prepares the mbuf SG list by
scanning the HW descriptors, but once the mbuf SG list is prepared, it
frees only the first segment of the mbuf SG list by calling the API
rte_pktmbuf_free_seg(). This leaks the memory of the other segments
and the mempool can become empty.

There is also one more issue: an external buffer's memory may not
belong to a mempool, so the driver itself frees the external buffer
after successfully sending the packet to the HW for transmit, instead
of letting the HW free it. The transmit function therefore frees all
the external buffers. But the driver has no check for external buffers
while freeing the rejected buffers, and this can double free the
memory, which can corrupt the user pool and lead to crashes and
undefined system behaviour.

This patch fixes the above mentioned issues by checking each
and every segment and freeing all the segments except external ones.

Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
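
The fix boils down to a per-segment walk; an illustrative loop (not
the driver's exact code):

    #include <rte_mbuf.h>

    /* Free every segment of a rejected SG chain except external
     * buffers, which the driver already freed at transmit time. */
    static void
    free_rejected_sg(struct rte_mbuf *m)
    {
        struct rte_mbuf *next;

        while (m != NULL) {
            next = m->next;
            if (!RTE_MBUF_HAS_EXTBUF(m))
                rte_pktmbuf_free_seg(m);
            m = next;
        }
    }
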
Gagandeep Singh
8716c0ec06 net/dpaa: fix buffer freeing on SG Tx
When transmitting an SG list with external and direct buffers, the HW
frees the direct buffers and the driver frees the external buffers.

The software scans the complete SG mbuf list to find the external
buffers to free, but this is wrong: the hardware can free any direct
buffers present in the list, and they can be re-allocated for other
purposes, with new data in them, in a multi-threaded or high-speed
traffic environment. So when the software scans the SG mbuf list, if
that list contains a direct buffer that was already freed by the
hardware, that buffer's next pointer can hold a wrong value, which
can cause mempool corruption or a memory leak.

In this patch, instead of relying on the user-given SG mbuf list,
we store the buffers in an internal list, which the driver scans
after transmit to free the non-direct buffers.

This patch also fixes the issues below.

The driver frees the complete SG list after checking the external
buffer flag in the first segment only, but an external buffer can be
attached to any segment. Because of this, the driver can either
double free buffers or leak memory.

In the case of indirect buffers, the driver modifies the original
buffer list to free the indirect buffers, but this original buffer
list is used by the driver even after the packets are transmitted,
for non-direct buffer cleanup. This can cause a buffer leak.

Fixes: f191d5abda54 ("net/dpaa: support external buffers in Tx")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Gagandeep Singh
c82b17b780 bus/dpaa: move mempool registration before probing
Move the mempool ops registration before the DPAA device probe so
that the device probe functions can also use mempool operations.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Gagandeep Singh
533c31cc83 net/dpaa: use internal mempool for SG table
Create and use the driver's own mempool for allocating the SG table
memory required for FD creation.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Rohit Raj
b585ecb54a bus/dpaa: change interface name passing in IOCTL
Due to a change in the latest kernel, pass the interface name to the
kernel through the IOCTL as a string instead of a character pointer.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Gagandeep Singh
afda343226 doc: add kernel version information in DPAA guide
The DPAA driver depends on the kernel to perform various
functionalities, so the kernel and DPDK versions should be compatible
for proper working.

This patch updates the DPAA guide with information the user can refer
to in order to find the compatible kernel version.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
2022-10-07 17:19:03 +02:00
Rohit Raj
65afdda04b net/dpaa: fix jumbo packet Rx in case of VSP
For packets larger than 2K bytes, segmented packets were being
received in DPDK even if the mbuf size was greater than the packet
length. This is due to the VSP configuration.

This patch fixes the issue by configuring the VSP according to the
mbuf size configured during mempool configuration.

Fixes: e4abd4ff183c ("net/dpaa: support virtual storage profile")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Rohit Raj
79711846f6 bus/fslmc: add timeout in MC send command
Add a one-second timeout in the MC send command API to ensure it does
not get stuck in case of failure.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Gagandeep Singh
b0074a7ba1 net/dpaa2: fix buffer freeing on SG Tx
When transmitting an SG list with external and direct buffers, the HW
frees the direct buffers and the driver frees the external buffers.

The software scans the complete SG mbuf list to find the external
buffers to free, but this is wrong: the hardware can free any direct
buffers present in the list, and they can be re-allocated for other
purposes, with new data in them, in a multi-threaded or high-speed
traffic environment. So when the software scans the SG mbuf list, if
that list contains a direct buffer that was already freed by the
hardware, that buffer's next pointer can hold a wrong value, which
can cause mempool corruption or a memory leak.

In this patch, instead of relying on the user-given SG mbuf list,
we store the buffers in an internal list, which the driver scans
after transmit to free the non-direct buffers.

This patch also fixes two more memory leak issues.

The driver frees the complete SG list after checking the external
buffer flag in the first segment only, but an external buffer can be
attached to any segment. Because of this, the driver can either
double free buffers or leak memory.

In the case of indirect buffers, the driver modifies the original
buffer list to free the indirect buffers, but this original buffer
list is used even after the packets are transmitted, for software
buffer cleanup. This can cause a buffer leak.

Fixes: 6bfbafe18d15 ("net/dpaa2: support external buffers in Tx")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Gagandeep Singh
75e2a1d473 net/dpaa2: use internal mempool for SG table
Create and use the driver's own mempool for allocating the SG table
memory required for FD creation, instead of relying on the user
mempool.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Gagandeep Singh
e7524271c3 net/dpaa: support ESP type in packet parsing
Add support for the ESP packet type in the packet receive path.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Brick Yang
fb2790a535 net/dpaa2: check free enqueue descriptors before Tx
Check whether free enqueue descriptors exist before enqueuing a Tx
packet, and try to reclaim enqueue descriptors in case none are
available.

Fixes: ed1cdbed6a15 ("net/dpaa2: support multiple Tx queues enqueue for ordered")
Cc: stable@dpdk.org

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Vanshika Shukla
e45956ce18 net/dpaa2: fix DPDMUX error behaviour
The driver was giving the wrong interface ID while setting the error
behaviour.

This patch fixes the issue by passing the correct MAC interface
index value to the API.

Fixes: 3d43972b1b42 ("net/dpaa2: do not drop parse error packets by dpdmux")
Cc: stable@dpdk.org

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Apeksha Gupta
fe10f6cc1c net/enetfec: fix buffer leak
The driver had no proper handling to free unused allocated mbufs in
case of error, or when the Rx processing completed, because of which
the mempool could become empty after some time.

This patch fixes the issue by moving the buffer allocation code to
the right place in the driver.

Fixes: ecae71571b0d ("net/enetfec: support Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:03 +02:00
Apeksha Gupta
d64e9cfe97 net/enetfec: fix restart
Queue reset was missing in restart, because of which IO could not
work after a device restart.

This patch fixes the issue by resetting the queues on device
restart.

Fixes: b84fdd39638b ("net/enetfec: support UIO")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:02 +02:00
Vanshika Shukla
05500852af bus/dpaa: open QMAN interrupt file as non-blocking
This patch sets the QMAN portal file descriptors used for interrupt
IO processing to non-blocking mode, to avoid any unwanted blocking
during IO operations over the FD.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2022-10-07 17:19:02 +02:00
Jerin Jacob
6b81dddbb9 ethdev: support congestion management
NIC HW controllers often come with congestion management support on
various HW objects such as Rx queue depth or mempool queue depth.

Also, they can support various modes of operation, such as RED
(Random Early Discard) and WRED, on those HW objects.

Add a framework to express such modes (enum rte_cman_mode) and
introduce enum rte_eth_cman_obj to enumerate the different
objects the modes can operate on.

Add RTE_CMAN_RED mode of operation and RTE_ETH_CMAN_OBJ_RX_QUEUE,
RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL objects.

Introduce reserved fields in the configuration structure, backed by
rte_eth_cman_config_init(), to allow adding new configuration
parameters without ABI breakage.

Add the rte_eth_cman_info_get() API to get information such as the
supported modes and objects.

Add the rte_eth_cman_config_init() and rte_eth_cman_config_set() APIs
to configure congestion management on those objects with the
associated mode.

Finally, add the rte_eth_cman_config_get() API to retrieve the
applied configuration.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
2022-10-07 11:50:28 +02:00
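
A sketch of the resulting API flow; the struct field names used here
(modes_supported, objs_supported, obj_param.rx_queue) are assumptions
about the new structures, so verify against rte_ethdev.h:

    #include <errno.h>
    #include <rte_bitops.h>
    #include <rte_ethdev.h>

    /* Enable RED on one Rx queue if the port reports support for it. */
    static int
    enable_red_on_rxq(uint16_t port_id, uint16_t qid)
    {
        struct rte_eth_cman_info info;
        struct rte_eth_cman_config cfg;

        if (rte_eth_cman_info_get(port_id, &info) != 0 ||
            !(info.modes_supported & RTE_BIT64(RTE_CMAN_RED)) ||
            !(info.objs_supported & RTE_BIT64(RTE_ETH_CMAN_OBJ_RX_QUEUE)))
            return -ENOTSUP;

        /* Fill defaults (including reserved fields) first. */
        rte_eth_cman_config_init(port_id, &cfg);
        cfg.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
        cfg.obj_param.rx_queue = qid;
        cfg.mode = RTE_CMAN_RED;

        return rte_eth_cman_config_set(port_id, &cfg);
    }
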
Dongdong Liu
fbb7a43a36 net/hns3: support Rx/Tx descriptor dump
This patch supports querying HW descriptors from the hns3 device. A HW
descriptor is also called a BD (buffer descriptor), which is memory
shared between software and hardware.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
2022-10-06 18:38:48 +02:00
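
For example, through the generic ethdev descriptor dump API that this
driver hooks into (a minimal sketch, assuming the generic API added
alongside this support):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the first four Rx and Tx BDs of queue 0 to stdout. */
    static void
    dump_bds(uint16_t port_id)
    {
        rte_eth_rx_descriptor_dump(port_id, 0, 0, 4, stdout);
        rte_eth_tx_descriptor_dump(port_id, 0, 0, 4, stdout);
    }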