Commit Graph

29007 Commits

Author SHA1 Message Date
Kalesh AP
89e6a0c0da net/bnxt: update HSI structure
- HWRM version updated to 1.10.2.44
- Added corresponding driver changes for the Admin MTU field name change.

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-07-14 20:29:05 +02:00
Weifeng Li
8117f5f61a net/bnxt: fix nested lock during bonding
The bnxt PMD registers an LSC callback (bond_ethdev_lsc_event_callback)
when working in bonding mode. This callback deadlocks when an LSC
interrupt is triggered:

lsc interrupt ->
bnxt_handle_async_event ->
bnxt_link_update_op ->
bond_ethdev_lsc_event_callback (lsc_lock) ->
bnxt_link_update_op ->
bond_ethdev_lsc_event_callback (lsc_lock dead lock)
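
A minimal sketch of one way to break such recursion (the guard flag
and all names below are hypothetical, not necessarily the upstream
fix): skip the nested link update while the callback chain is already
running.

#include <stdbool.h>

struct port_ctx {
    bool in_lsc_callback; /* set while the LSC callback chain runs */
};

static void
handle_lsc_event(struct port_ctx *p)
{
    if (p->in_lsc_callback)
        return; /* nested invocation: avoid re-taking lsc_lock */
    p->in_lsc_callback = true;
    /* ... update link and invoke bond_ethdev_lsc_event_callback ... */
    p->in_lsc_callback = false;
}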

Fixes: c2faa1d196 ("net/bnxt: add support for LSC interrupt event")
Cc: stable@dpdk.org

Signed-off-by: Weifeng Li <liweifeng96@126.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-07-13 06:19:11 +02:00
Lance Richardson
5ed30db87f net/bnxt: fix missing barriers in completion handling
Ensure that Rx/Tx/Async completion entry fields are accessed
only after the completion's valid flag has been loaded and
verified. This is needed for correct operation on systems that
use relaxed memory consistency models.
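
A minimal sketch of the required ordering (the completion layout and
names are hypothetical): the valid flag is loaded and checked first,
and a read barrier such as rte_io_rmb() then orders the subsequent
loads of the completion payload.

#include <stdbool.h>
#include <stdint.h>
#include <rte_io.h>

struct cmpl {
    uint32_t flags;   /* carries the valid bit */
    uint32_t payload; /* must be read only after the flag check */
};

#define CMPL_VALID (1u << 0)

static inline bool
cmpl_valid(const volatile struct cmpl *c, bool phase)
{
    bool valid = ((c->flags & CMPL_VALID) != 0) == phase;

    /* Order payload loads after the valid-bit load; without this, a
     * weakly ordered CPU may load fields from a stale entry. */
    if (valid)
        rte_io_rmb();
    return valid;
}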

Fixes: 2eb53b134a ("net/bnxt: add initial Rx code")
Fixes: 6eb3cc2294 ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
2021-07-12 20:38:12 +02:00
Satheesh Paul
1c3b657a6a net/cnxk: support raw flow pattern
Add support for rte_flow_item_raw to parse custom L2 and L3
protocols.
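
Illustrative use of the raw item through the generic rte_flow API (a
sketch only; the offset and bytes are made up, and validation/error
handling is elided):

#include <stdint.h>
#include <rte_flow.h>

/* Match two custom protocol bytes at an absolute packet offset. */
static const uint8_t raw_bytes[] = { 0x88, 0xb5 };
static const struct rte_flow_item_raw raw_spec = {
    .relative = 0,               /* offset from the packet start */
    .offset = 12,                /* e.g. the L2 type field */
    .length = sizeof(raw_bytes),
    .pattern = raw_bytes,
};
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_RAW, .spec = &raw_spec },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};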

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-07-13 12:19:22 +02:00
Satheesh Paul
612ce5cf7d common/cnxk: support custom L2/L3 protocols parsing
Add roc API for parsing custom L2 and L3 protocols.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Reviewed-by: Kiran Kumar K <kirankumark@marvell.com>
2021-07-13 12:17:51 +02:00
Satha Rao
e7bbbcb26f net/cnxk: update link status when device stopped
Set the link status to down and don't fetch the link status from the
kernel while the device is in the stopped state.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-07-13 11:29:49 +02:00
Satha Rao
d9dda782ac net/octeontx2: fix TM node statistics query
TM hardware resources are not allocated for a node until the
hierarchy is committed.
This patch checks the status of the HW resources before reading
statistics.

Fixes: 1e25d57fae ("net/octeontx2: add TM stats and shaper profile")
Cc: stable@dpdk.org

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-07-13 11:29:11 +02:00
Satha Rao
12e491a6b6 net/octeontx2: handle link status when device stopped
Set the link status to down and don't fetch the link status from the
kernel while the device is in the stopped state.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-07-13 11:29:10 +02:00
Satheesh Paul
d81cea5280 net/cnxk: fix default MCAM allocation size
Preallocation of MCAM entries is not valid anymore since the
AF side MCAM allocation scheme has changed. This patch disables
preallocation by changing the default MCAM preallocation size
from 8 to 1.

Fixes: 168c59cfe4 ("net/octeontx2: add flow MCAM utility functions")
Cc: stable@dpdk.org

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-07-12 14:44:20 +02:00
Anoob Joseph
ec8f303c65 net/octeontx2: support non-ethernet L2 header
In the inline inbound path, a custom header carrying the sequence
number and SPI is present at L3. L2 needs to be adjusted so that the
eventual packet has L3 right after L2. Remove the assumption about
the L2 type in this handling.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-07-12 14:04:42 +02:00
Meir Levi
71c5085bfb net/mvpp2: fix not supported VLAN operations status
The vlan_strip and vlan_extend operations need to return an
"unsupported" error value.

Fixes: ff0b8b10dc ("net/mvpp2: support VLAN offload")
Cc: stable@dpdk.org

Signed-off-by: Meir Levi <mlevi4@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
2021-07-12 11:07:08 +02:00
Dana Vardi
e622c1a88e net/mvpp2: fix configured state dependency
The configure flag needs to be set to allow creating and committing
the mrvl TM hierarchy tree. The TM configuration depends on
parameters that are set at the port configure stage, e.g.
nb_tx_queues. This also aligns with the TM API description.

Fixes: 429c394417 ("net/mvpp2: support traffic manager")
Cc: stable@dpdk.org

Signed-off-by: Dana Vardi <danat@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
2021-07-12 10:31:21 +02:00
Dana Vardi
8fa07a68a6 net/mvpp2: fix port speed overflow
ethtool_cmd_speed() returns a uint32 value, and after the arithmetic
in the mrvl_get_max_rate() function the result is out of range.
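
The shape of the bug and of the fix, as a minimal sketch (the helper
below is illustrative, not the actual mrvl_get_max_rate() code):

#include <stdint.h>

/* speed_mbps * 1000 * 1000 overflows a 32-bit intermediate already
 * for multi-gigabit speeds; widen to 64 bits before multiplying. */
static inline uint64_t
max_rate_bytes_per_sec(uint32_t speed_mbps)
{
    return (uint64_t)speed_mbps * 1000 * 1000 / 8;
}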

Fixes: 429c394417 ("net/mvpp2: support traffic manager")
Cc: stable@dpdk.org

Signed-off-by: Dana Vardi <danat@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
2021-07-12 09:59:52 +02:00
Sarosh Arif
6e695b0cda net/mlx5: fix typo in vectorized Rx comments
Change "returing" to "returning".

Fixes: 2e542da709 ("net/mlx5: add Altivec Rx")
Fixes: 570acdb1da ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Fixes: 3c2ddbd413 ("net/mlx5: separate shareable vector functions")
Cc: stable@dpdk.org

Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
2021-07-15 16:32:09 +02:00
Alexander Kozyrev
acc8747953 net/mlx5: fix threshold for mbuf replenishment in MPRQ
The replenishment scheme for the vectorized MPRQ Rx burst aims
to improve the cache locality by allocating new mbufs only when
there are almost no mbufs left: one burst gap between allocated
and consumed indexes.

This gap is not big enough to accommodate a corner case when we
have a very aggressive CQE compression with multiple regular CQEs
at the beginning and 64 zipped CQEs at the end.

Take this case into account and extend the replenishment threshold
by MLX5_VPMD_RX_MAX_BURST (64) to avoid mbuf overflow.

Fixes: 5fc2e5c27d ("net/mlx5: fix mbuf overflow in vectorized MPRQ")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-15 16:22:27 +02:00
Xiaoyu Min
0ed93c1344 net/mlx5: fix missing RSS expansion of IPv6 frag
The IPV6_FRAG_EXT item is missing from the RSS expansion, which
causes wrongly expanded flows:
flow create 0 ingress pattern eth / ipv6 / udp dst is 250 / vxlan-gpe /
ipv6 / ipv6_frag_ext / end actions rss level 2 types ip end / end

Unlike other items, IPV6_FRAG_EXT has no "next" field because the HW
only supports hashing UDP/TCP headers of non-fragmented packets.

The MLX5_EXPANSION_IPV6_FRAG_EXT node in the RSS expansion graph only
helps the RSS expansion function locate the right node in the graph
from which to start expanding.

Fixes: 0e5a0d8f75 ("net/mlx5: support match on IPv6 fragment extension")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:22:26 +02:00
Xiaoyu Min
1c4f7044c6 net/mlx5: fix missing RSS expandable items
Some RSS expandable items are missing, which leads to expanded
rte_flow rules with wrong patterns.

Fix by adding the missing items.

Fixes: d91093b9a2 ("net/mlx5: fix RSS pattern expansion")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:22:25 +02:00
Gregory Etelson
c410e1d562 net/mlx5: support flow matching on IPv4 IHL
Query the MLX5 port hardware for whether it is capable of offloading
the IPv4 IHL field.

Provide flow rules the capability to match on the IPv4 IHL field.
The minimal HCA firmware version required to offload IPv4 IHL
matching is xx_30_2000.
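
Expressed through the generic rte_flow API, such a match could look
like the following sketch (attributes, actions and error handling
elided; version_ihl packs the version in the high nibble and the IHL
in the low nibble):

#include <rte_flow.h>

static const struct rte_flow_item_ipv4 ihl_spec = {
    .hdr.version_ihl = 0x46, /* version 4, IHL 6 (24-byte header) */
};
static const struct rte_flow_item_ipv4 ihl_mask = {
    .hdr.version_ihl = 0x0f, /* match the IHL nibble only */
};
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4,
      .spec = &ihl_spec, .mask = &ihl_mask },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};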

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-15 16:22:20 +02:00
Suanming Mou
9e22b859cd doc: add multi-thread flow rate optimizations for mlx5
This commit adds a description of the multiple-thread flow insertion
optimizations.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:16:18 +02:00
Suanming Mou
a5835d530f net/mlx5: optimize Rx queue match
Since the hrxq struct holds the indirection table pointer, it is
better, when matching an hrxq, to compare the indirection table
directly instead of searching through the list.

This commit optimizes the hrxq indirection table matching.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:23 +02:00
Suanming Mou
cde19e8634 net/mlx5: change memory release configuration
This commit changes the index pool memory release configuration
to 0 when memory reclaim mode is not required.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:22 +02:00
Suanming Mou
f3020a331d net/mlx5: optimize hash list table allocate on demand
Currently, all the hash list tables are allocated during startup.
Since different applications may use only a limited set of actions,
allocating the hash list tables on demand saves initial memory.

This commit makes the hash list table allocation on-demand.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:22 +02:00
Suanming Mou
07b51bb9fe net/mlx5: enable indexed pool per-core cache
This commit enables the tag and header modify action indexed
pool per-core cache in non-reclaim memory mode.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:21 +02:00
Suanming Mou
f7c3f3c290 net/mlx5: adjust hash bucket size
With the new per core optimization to the list, the hash bucket size
can be tuned to a more accurate number.

This commit adjusts the hash bucket size.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:21 +02:00
Matan Azrad
4f3d8d0ea3 net/mlx5: move header modify allocator to ipool
Modify header actions are allocated by mlx5_malloc(), which has a
big memory and allocation-time overhead.

One of the action types under the modify header object is SET_TAG.
The SET_TAG action is commonly not reused by the flows, and each
flow has its own value.

Hence, mlx5_malloc() becomes a bottleneck in the flow insertion rate
in the common SET_TAG cases.

Use the ipool allocator for the SET_TAG action.

The ipool allocator has lower memory overhead, a higher insertion
rate, and a better synchronization mechanism in multithreaded cases.

A different ipool is created for each possible size of modify header
handler.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 16:09:20 +02:00
Suanming Mou
7e1cf89271 common/mlx5: support list non-lcore operations
This commit supports list operations from non-lcore threads, using
an extra sub-list and lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:20 +02:00
Suanming Mou
9a4c368807 common/mlx5: optimize cache list object memory
Currently, the hash list uses the cache list as its bucket list. The
lists in the buckets have the same name, ctx and callbacks, which
wastes memory.

This commit abstracts the name, ctx and callback members of the list
into a constant struct and the rest into a non-constant struct, and
uses wrapper functions so that both the hash list and the cache list
can set the constant and non-constant structs individually.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:19 +02:00
Suanming Mou
25481e5025 common/mlx5: allocate cache list memory individually
Currently, the list's local cache instance memory is allocated
together with the list. The local cache instance array size is
RTE_MAX_LCORE, while in most cases the system has only a limited
number of cores, so allocating the instance memory individually per
core is more memory-efficient.

This commit changes the instance array to a pointer array and
allocates the local cache memory only when the core is to be used.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:19 +02:00
Matan Azrad
961b6774c4 common/mlx5: add per-lcore cache to hash list utility
Use the mlx5 list utility object in the hlist buckets.

This patch moves the list utility object to the common utility and
creates the clone operations for all the hlist instances in the
driver.

Also adjust all the utility callbacks to be generic for both list
and hlist.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 16:09:18 +02:00
Suanming Mou
6507c9f51d common/mlx5: call list callbacks with context
This commit optimizes the list callback functions to be called
directly with the global context.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 16:09:17 +02:00
Suanming Mou
d03b786005 common/mlx5: add per-lcore sharing flag in object list
Without the lcores_share flag, the mlx5 PMD shares the rdma-core
objects between all lcores.

With the lcores_share flag disabled, each lcore has its own objects,
which eventually leads to increased insertion/deletion rates.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:50:31 +02:00
Suanming Mou
9c373c524b common/mlx5: move list utility from net driver
The hash list is planned to be implemented on top of the cache list
code.

This commit moves the list utility to the common directory.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:19:13 +02:00
Matan Azrad
679f46c775 net/mlx5: allocate list memory in create function
Currently, the list memory is allocated by the list API caller.

Move the allocation into the create API in order to keep consistency
with the hlist utility.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:13 +02:00
Matan Azrad
84fbba5b9e net/mlx5: relax list utility atomic operations
The atomic operations in the list utility do not need barriers
because the critical sections are managed by an RW lock.

Relax them.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:12 +02:00
Matan Azrad
a603b55ad9 net/mlx5: manage list cache entries release
When a cache entry is allocated by lcore A and is released by lcore B,
the driver should synchronize the cache list access of lcore A.

The design decision is to manage a counter per lcore cache that is
increased atomically when a non-original lcore decreases the
reference counter of a cache entry to 0.

In the list register operation, before the running lcore starts a
lookup in its cache, it checks the counter in order to free invalid
entries from its cache.
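
An abstract sketch of this scheme (all names hypothetical; the real
implementation differs): the releasing lcore marks the entry invalid
and bumps the owner's counter, and the owner sweeps its cache when
it sees the counter move.

#include <stdatomic.h>
#include <stdbool.h>

struct entry {
    atomic_uint ref_cnt;
    bool invalid; /* read and cleared only by the owner lcore */
};

struct lcore_cache {
    atomic_uint inv_cnt; /* bumped by non-owner lcores */
    /* ... cached entries ... */
};

static void
entry_release(struct lcore_cache *owner, struct entry *e)
{
    if (atomic_fetch_sub(&e->ref_cnt, 1) == 1) {
        /* Reference dropped to 0 on a non-owner lcore: flag the
         * entry and tell the owner its cache needs a sweep. */
        e->invalid = true;
        atomic_fetch_add(&owner->inv_cnt, 1);
    }
}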

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:11 +02:00
Matan Azrad
0b4ce17a11 net/mlx5: minimize list critical sections
The mlx5 internal list utility is thread safe.

In order to synchronize list access between the threads, a RW lock is
taken for the critical sections.

The create/remove/clone/clone_free operations are in the critical
sections.

These operations are heavy and make the critical sections heavy
because they perform memory and other resource
allocations/deallocations.

Move these operations out of the critical sections and use a
generation counter in order to detect parallel allocations.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:11 +02:00
Matan Azrad
491b7137ff net/mlx5: add per-lcore cache to the list utility
When an mlx5 list object is accessed by multiple cores, the list
lock counter is constantly written by all the cores, which increases
cache misses in the memory caches.

In addition, when one thread accesses the list for an
add/remove/lookup operation, all the other threads trying to operate
on the list are stuck on the lock.

Add a per-lcore cache to allow lockless thread manipulations when
the list objects are mostly reused.

Synchronization with atomic operations is needed in order to allow
threads to unregister an entry from another thread's cache.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:10 +02:00
Matan Azrad
e78e5408da net/mlx5: remove cache term from the list utility
The internal mlx5 list tool is used mainly when the list objects
need to be synchronized between multiple threads.

The "cache" term is used in the internal mlx5 list API.

Upcoming enhancements of this tool will use the "cache" term for
per-thread cache management.

To prevent confusion, remove the current "cache" term from the API
names.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:10 +02:00
Matan Azrad
e681eb0515 net/mlx5: optimize header modify action memory
Define the types of the modify header action fields with the minimum
size needed for their value ranges.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
2021-07-15 15:19:09 +02:00
Suanming Mou
b4edeaf3ef net/mlx5: replace flow list with indexed pool
The flow list is used to save the created flows; it is needed only
when the port closes and all the flows must be flushed.

This commit takes advantage of the index pool foreach operation to
flush all the allocated flows.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:19:09 +02:00
Suanming Mou
42f463395f net/mlx5: support indexed pool non-lcore operations
This commit supports index pool operations from non-lcore threads,
using an extra cache and an lcore lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:19:08 +02:00
Suanming Mou
64a80f1c48 net/mlx5: add indexed pool iterator
In some cases, an application may want to know all the allocated
indexes in order to apply some operation to them.

This commit adds indexed pool functions to support a foreach
operation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:19:08 +02:00
Suanming Mou
d15c0946be net/mlx5: add indexed pool local cache
For objects that need efficient index allocation and freeing, a
local cache is very helpful.

A two-level cache is introduced to allocate and free indexes more
efficiently: one level is local (per lcore) and the other is global.
The global cache can hold all the allocated indexes, which means
allocated indexes are not freed back to the trunks. Once the local
cache is full, the extra indexes are flushed to the global cache.
Once the local cache is empty, it first tries to fetch more indexes
from the global cache; if the global cache is empty too, a new trunk
with more indexes is allocated.
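
An abstract, self-contained sketch of the two-level scheme described
above (all names and sizes are hypothetical; the real ipool code
also handles trunks, locking and overflow):

#include <stdint.h>

#define LOCAL_SZ 16

struct idx_cache {            /* zero-initialized */
    uint32_t local[LOCAL_SZ]; /* per-lcore cache */
    uint32_t local_len;
    uint32_t global[1024];    /* shared cache; overflow elided */
    uint32_t global_len;
    uint32_t next_new;        /* next never-allocated index */
};

static uint32_t
idx_alloc(struct idx_cache *c)
{
    if (c->local_len == 0) {
        /* Local cache empty: refill from global, else grow. */
        while (c->local_len < LOCAL_SZ && c->global_len > 0)
            c->local[c->local_len++] = c->global[--c->global_len];
        if (c->local_len == 0)
            return ++c->next_new; /* new trunk: fresh index */
    }
    return c->local[--c->local_len];
}

static void
idx_free(struct idx_cache *c, uint32_t idx)
{
    if (c->local_len == LOCAL_SZ) /* local full: spill to global */
        c->global[c->global_len++] = c->local[--c->local_len];
    c->local[c->local_len++] = idx;
}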

This commit adds the new local cache mechanism for the indexed pool.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:19:07 +02:00
Suanming Mou
58ecd3ad0b net/mlx5: allow limiting the indexed pool maximum index
Some ipool instances in the driver are used as ID/index allocators
and needed extra logic in order to work with limited index values.

Add a new ipool configuration to specify the maximum index value.
The ipool ensures that no index bigger than the maximum value is
provided.

Use this configuration in the ID allocator cases instead of the
current logic. This patch adds a configurable maximum ID for the
index pool.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-15 15:19:01 +02:00
Ruifeng Wang
1db288f941 net/mlx5: reduce unnecessary memory access in Rx
The MR btree length is constant during Rx replenish. Move the
retrieval of the value out of the loop to reduce data loads. A
slight performance uplift was measured on both N1SDP and x86.
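
The generic shape of the change, as a sketch (the types and lookup
stub are hypothetical):

#include <stdint.h>

struct btree { uint16_t len; /* ... */ };

static uint64_t
lookup(const struct btree *bt, uint16_t len, uint64_t addr)
{
    (void)bt; (void)len;
    return addr; /* stand-in for the real B-tree search */
}

static void
replenish(const struct btree *bt, const uint64_t *addr,
          uint64_t *lkey, int n)
{
    /* Hoisted loop-invariant load: bt->len cannot change during
     * replenish, so read it from memory only once. */
    const uint16_t len = bt->len;

    for (int i = 0; i < n; i++)
        lkey[i] = lookup(bt, len, addr[i]);
}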

Suggested-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-15 15:17:22 +02:00
Ruifeng Wang
ff6fcd415f net/mlx5: remove redundant operations in NEON Rx
The mask of entries after the compressed CQE is covered by the
invalid mask of the non-compressed valid CQEs. Hence, remove the
redundant mask calculation. The change showed a slight performance
uplift on N1SDP.

Fixes: 570acdb1da ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-15 15:16:26 +02:00
Rongwei Liu
dbd8e4102d app/testpmd: support matching on VXLAN reserved field
Add a new testpmd pattern field 'last_rsvd' that supports matching
the last 8 bits of the VXLAN header.

The examples for the "last_rsvd" pattern field are as below:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow matches only packets whose last 8 bits equal 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow only matches packets where the MSB of the last 8 bits is 1.
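
The same match expressed through the rte_flow C API, as a sketch
(attributes, actions and error handling elided); rsvd1 is the last
byte of the VXLAN header:

#include <rte_flow.h>

static const struct rte_flow_item_vxlan vxlan_spec = { .rsvd1 = 0x80 };
static const struct rte_flow_item_vxlan vxlan_mask = { .rsvd1 = 0x80 };
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
    { .type = RTE_FLOW_ITEM_TYPE_UDP },
    { .type = RTE_FLOW_ITEM_TYPE_VXLAN,
      .spec = &vxlan_spec, .mask = &vxlan_mask },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};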

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
2021-07-13 15:06:43 +02:00
Rongwei Liu
630a587bfb net/mlx5: support matching on VXLAN reserved field
This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability from rdma-core
is detected by creating a dummy matcher using misc5
when the device is probed.

For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group zero
it relies on the HCA_CAP from FW.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
2021-07-13 15:06:43 +02:00
Gregory Etelson
730bf06652 app/testpmd: add flow matching on IPv4 version and IHL
The new flow item allows the PMD to offload matching on the IPv4 IHL
field, if the hardware supports that operation.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-13 13:15:14 +02:00
Viacheslav Ovsiienko
b6b8a1ebd4 app/testpmd: fix offloads for newly attached port
For newly attached ports (with the "port attach" command), the
default offload settings configured from the application command
line were not applied, causing port start failure following the
attach.

For example, if the scatter offload was configured on the command
line and rxpkts was configured for multiple segments, starting the
newly attached port failed because the scatter offload was not
enabled in the new port settings. The missing code to apply the
offloads to the new device and its queues is added.

The new local routine init_config_port_offloads() is introduced,
encapsulating the shared part of the port offloads initialization
code.

Fixes: c9cce42876 ("ethdev: remove deprecated attach/detach functions")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-07-13 11:52:05 +02:00