16364 Commits

Hemant Agrawal
915cdc075d baseband/la12xx: support multiple modems
This patch adds support for multiple modems by assigning
a modem id as a devarg in vdev creation.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
2021-10-18 20:12:00 +02:00
Hemant Agrawal
ee36ba0f30 baseband/la12xx: add devargs option for max queues
This patch adds a devargs option to take the maximum number of queues as input.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
2021-10-18 20:11:56 +02:00
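A minimal sketch of how these two la12xx devargs might be passed when the
vdev is created; the devarg key names "modem" and "max_nb_queues" and the
values are assumptions for illustration, not taken from the commit messages:

    #include <rte_bus_vdev.h>

    /* Sketch: create a la12xx bbdev vdev bound to modem 0, capped at 4 queues.
     * The devarg key names here are illustrative assumptions. */
    static int
    create_la12xx_bbdev(void)
    {
        return rte_vdev_init("baseband_la12xx", "modem=0,max_nb_queues=4");
    }
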
Nipun Gupta
f218a1f920 baseband/la12xx: introduce NXP LA12xx driver
This patch introduces the baseband device driver for NXP's
LA1200 series software-defined baseband modem.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
2021-10-18 20:11:23 +02:00
Nicolas Chautru
ab4e19097b bbdev: add device info for data endianness
Added device information to explicitly capture the assumed
byte endianness of the input/output data being processed.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-18 20:11:16 +02:00
Gagandeep Singh
fe6a8ee275 crypto/dpaa_sec: support AEAD and proto with raw API
This adds support for AEAD and protocol offload with raw APIs
in the dpaa_sec driver.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
78156d38e1 crypto/dpaa_sec: support authonly and chain with raw API
This patch improves the raw vector support in the dpaa_sec driver
for the auth-only and chain use cases.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
9d5f73c2d2 crypto/dpaa_sec: support raw datapath API
This patch adds a raw vector API framework for the dpaa_sec driver.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
3da64325d6 crypto/dpaa2_sec: enhance error checks with raw buffer API
This patch improves the error checks and the support of
wireless algorithms with raw buffers.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
6bb2ce7344 crypto/dpaa2_sec: support OOP with raw buffer API
Add support for out-of-place (OOP) processing with raw vector APIs.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
f393cd8e85 crypto/dpaa2_sec: support AEAD with raw buffer API
Add raw vector API support for AEAD algorithms.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
aa6ec1fd84 crypto/dpaa2_sec: support authenc with raw buffer API
This patch adds AUTHENC support with raw buffer APIs.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
d9d36d6ff5 crypto/dpaa2_sec: support auth only with raw buffer API
This patch adds auth-only support with raw buffer APIs.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Gagandeep Singh
4a81d34a03 crypto/dpaa2_sec: support raw datapath API
This patch adds the framework for raw API support.
The initial patch only covers the cipher-only case.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-17 19:32:13 +02:00
Hemant Agrawal
10488d59ae cryptodev: rename field in vector struct
This patch renames sgl to src_sgl in struct rte_crypto_sym_vec
to help differentiate between the source and destination SGLs.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-10-17 19:31:15 +02:00
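A minimal hedged sketch of what the rename means for callers of the raw
datapath API (the helper name is hypothetical; only the renamed field and
num are shown):

    #include <rte_crypto_sym.h>

    /* Sketch: code that previously filled vec->sgl now fills vec->src_sgl. */
    static void
    fill_sym_vec(struct rte_crypto_sym_vec *vec,
                 struct rte_crypto_sgl *src_sgls, uint32_t n)
    {
        vec->num = n;
        vec->src_sgl = src_sgls; /* was vec->sgl before this rename */
    }
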
Ankur Dwivedi
135e0b71fa crypto/cnxk: add max queue pairs limit option
Adds a max queue pairs limit devargs option for the cnxk crypto driver.
It can be used to set a limit on the maximum number of queue pairs
supported by the device. The default value is 63.

Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Reviewed-by: Anoob Joseph <anoobj@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2021-10-16 14:57:14 +02:00
Andrew Rybchenko
cb77b060eb mempool: add namespace to driver register macro
Add RTE_ prefix to macro used to register mempool driver.
The old one is still available but deprecated.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2021-10-20 10:00:18 +02:00
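A hedged sketch of the renamed registration macro as a mempool driver might
use it; the ops table below is deliberately incomplete (a real driver must
also provide alloc/free/enqueue/dequeue/get_count callbacks):

    #include <rte_mempool.h>

    /* Sketch only: callbacks omitted, so this ops table is not functional. */
    static const struct rte_mempool_ops my_pool_ops = {
        .name = "my_pool_ops",
    };

    /* Previously: MEMPOOL_REGISTER_OPS(my_pool_ops); (still works, deprecated) */
    RTE_MEMPOOL_REGISTER_OPS(my_pool_ops);
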
Andrew Rybchenko
ad276d5c7e mempool: add namespace to internal helpers
Add RTE_ prefix to internal API defined in public header.
Use the prefix instead of double underscore.
Use uppercase for macros in the case of name conflict.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2021-10-20 10:00:18 +02:00
Andrew Rybchenko
c47d7b90a1 mempool: add namespace to flags
Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.

Flag MEMPOOL_F_NON_IO, added in this release, is simply renamed to have
the RTE_ prefix.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2021-10-20 10:00:16 +02:00
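A hedged example of creating a mempool with the renamed flags; the pool
name, sizes and cache size are arbitrary for the sketch:

    #include <rte_mempool.h>

    /* Sketch: single-producer/single-consumer pool using the RTE_-prefixed
     * flags; the old MEMPOOL_F_SP_PUT/MEMPOOL_F_SC_GET names remain usable
     * but are deprecated. */
    static struct rte_mempool *
    create_sp_sc_pool(void)
    {
        return rte_mempool_create("sp_sc_pool", 4096, 128, 32, 0,
                                  NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
                                  RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
    }
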
Dmitry Kozlyuk
fec28ca0e3 net/mlx5: support mempool registration
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a new devarg: mr_mempool_reg_en.

On TX slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Supported are direct and indirect
mbufs, as well as externally-attached ones from MLX5 MPRQ feature.
Lookup in the database of non-mempool memory is used as the last resort.

RX mempools are registered regardless of the devarg value.
On the RX data path only the local cache and the mempool database are used.
If implicit mempool registration is disabled, these mempools
are unregistered at port stop, releasing the MRs.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-19 16:35:16 +02:00
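The devarg name mr_mempool_reg_en comes from the commit message above; the
PCI address and the hot-plug style of passing it are only an illustration:

    #include <rte_dev.h>

    /* Sketch: probe an mlx5 port with implicit mempool registration disabled.
     * The PCI address is a placeholder. */
    static int
    probe_mlx5_no_mempool_reg(void)
    {
        return rte_dev_probe("0000:08:00.0,mr_mempool_reg_en=0");
    }
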
Dmitry Kozlyuk
690b2a88c2 common/mlx5: add mempool registration facilities
Add internal API to register mempools, that is, to create memory
regions (MR) for their memory and store them in a separate database.
Implementation deals with multi-process, so that class drivers don't
need to. Each protection domain has its own database. Memory regions
can be shared within a database if they represent a single hugepage
covering one or more mempools entirely.

Add internal API to lookup an MR key for an address that belongs
to a known mempool. It is a responsibility of a class driver
to extract the mempool from an mbuf.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-19 16:35:16 +02:00
Radha Mohan Chintakuntla
821f60c7f4 raw/octeontx2_ep: remove driver
Removing the rawdev based octeontx2-ep driver as the dependent
common/octeontx2 will soon be going away. Moreover this driver is no
longer required as the net/octeontx_ep driver is sufficient.

Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
2021-10-18 18:34:04 +02:00
Radha Mohan Chintakuntla
a59745ebcc raw/octeontx2_dma: remove driver
Removing the rawdev based octeontx2-dma driver as the dependent
common/octeontx2 will soon be going away. Also, a new DMA driver will
take its place once the rte_dmadev library is in.

Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
2021-10-18 18:33:45 +02:00
Kevin Laatz
ea8cf0f853 dmadev: add burst capacity API
Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to know how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-18 11:17:30 +02:00
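A hedged sketch of the usage pattern described above, assuming the 21.11
dmadev API; the function and variable names are illustrative:

    #include <rte_dmadev.h>

    /* Sketch: only start copying a multi-segment packet if all segments fit
     * in the current batch, to avoid half-offloading the packet. */
    static int
    copy_all_segs(int16_t dev_id, uint16_t vchan, const rte_iova_t *src,
                  const rte_iova_t *dst, const uint32_t *len, uint16_t nb_segs)
    {
        uint16_t i;

        if (rte_dma_burst_capacity(dev_id, vchan) < nb_segs)
            return -1; /* not enough room, retry later */

        for (i = 0; i < nb_segs; i++)
            if (rte_dma_copy(dev_id, vchan, src[i], dst[i], len[i],
                             i == nb_segs - 1 ? RTE_DMA_OP_FLAG_SUBMIT : 0) < 0)
                return -1;
        return 0;
    }
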
Bruce Richardson
5e0f859127 dmadev: add channel status check for testing use
Add in a function to check if a device or vchan has completed all jobs
assigned to it, without gathering in the results. This is primarily for
use in testing, to allow the hardware to be in a known-state prior to
gathering completions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2021-10-18 11:17:21 +02:00
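A hedged sketch of how a test might use this to reach a known state before
gathering completions (names assumed from the 21.11 dmadev API):

    #include <rte_dmadev.h>

    /* Sketch: spin until the vchan reports idle, i.e. the hardware has
     * processed every job submitted so far. */
    static int
    wait_until_idle(int16_t dev_id, uint16_t vchan)
    {
        enum rte_dma_vchan_status st;

        do {
            if (rte_dma_vchan_status(dev_id, vchan, &st) < 0)
                return -1;
        } while (st != RTE_DMA_VCHAN_IDLE);

        return 0;
    }
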
Chengwen Feng
05d5fc66a2 dma/skeleton: introduce skeleton driver
The skeleton dmadev driver, along the lines of the rawdev skeleton, is for
showcasing the dmadev library.

The design of the skeleton involves a virtual device which is plugged into
the VDEV bus on initialization.

Also, enable compilation of the dmadev skeleton driver.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-17 20:49:58 +02:00
Chengwen Feng
b36970f2e1 dmadev: introduce DMA device library
The 'dmadev' is a generic type of DMA device.

This patch introduces the 'dmadev' device allocation functions.

The infrastructure is prepared to welcome drivers in drivers/dma/

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
2021-10-17 20:49:57 +02:00
Long Li
70cdd92e04 bus/vmbus: fix ring buffer mapping in secondary process
The driver code wrongly assumed that the addresses of the ring buffers
in the secondary process are the same as those in the primary process. This
is not always correct, as the channels could be mapped to different
addresses in the secondary process.

Fix this by keeping track of all the mapped addresses from the primary
process in the shared uio_res, and having the secondary process map to the
same addresses.

Fixes: 831dba47bd36 ("bus/vmbus: add Hyper-V virtual bus support")
Cc: stable@dpdk.org

Reported-by: Jonathan Erb <jonathan.erb@banduracyber.com>
Signed-off-by: Long Li <longli@microsoft.com>
Acked-by: Stephen Hemminger <sthemmin@microsoft.com>
2021-10-13 13:55:09 +02:00
Archana Muniganti
5824580017 crypto/cnxk: support inner checksum
Add inner checksum support for cn10k.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-08 21:40:16 +02:00
Archana Muniganti
e705d30552 crypto/cnxk: use IE engine group for IPsec
Use IE engine group for cn9k IPsec.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-10-08 21:31:07 +02:00
Tejasree Kondoj
7bf4413d5e crypto/octeontx2: fix lookaside IPsec capabilities
Adding cbc, sha1-hmac and sha256-hmac to lookaside IPsec capabilities.

Fixes: 8f685ec2d545 ("crypto/octeontx2: support AES-CBC SHA1-HMAC")
Fixes: 61baeec4682c ("crypto/octeontx2: support AES-CBC SHA256-128-HMAC")
Cc: stable@dpdk.org

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-10-08 21:31:07 +02:00
Tejasree Kondoj
7658d035ac common/cnxk: support 98XX CPT dual block
The CN98xx SoC comes with two CPT blocks, unlike CN96xx and CN93xx,
to achieve higher performance.

Adding support to allocate all LFs of a VF with an even BDF from CPT0
and all LFs of a VF with an odd BDF from CPT1.
If LFs are not available in one block, they will be allocated
from the alternate block.

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-08 21:31:07 +02:00
Nicolas Chautru
b0df6b29e7 baseband/acc100: fix 4GUL outbound size
This patch fixes the issue by adjusting the outbound size after
turbodecoding when the appended CRC is meant to be dropped.

Fixes: f404dfe35cc3 ("baseband/acc100: support 4G processing")
Cc: stable@dpdk.org

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
2021-10-08 21:31:07 +02:00
Nicolas Chautru
765f42f1bd baseband/acc100: support 4G CRC drop
This implements in PMD the option to drop the CB CRC
after 4G decoding to help transport block concatenation.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
2021-10-08 21:31:07 +02:00
Nicolas Chautru
a88352c366 baseband/turbo_sw: support CRC16
This adds support for operations where CRC16 is to be
appended or checked.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
2021-10-08 21:31:07 +02:00
Akhil Goyal
7bfca3894a crypto/octeontx2: update minimum headroom and tailroom
The driver consumes 208B of tailroom but the minimum requirement
is set as 8B, and the headroom needed is 48B but the minimum requirement
is set as 24B. This patch corrects the minimum requirements
which applications should honour.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2021-10-08 21:31:07 +02:00
Vidya Sagar Velumuri
29742632ac crypto/cnxk: support ZUC with 256-bit key
Added support for 256 bit key length for ZUC in
crypto_cn10k PMD.
Added support for digest length of 8 and 16 bytes
for ZUC with 256 bit key length.

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-08 21:31:07 +02:00
Vidya Sagar Velumuri
a90db80d7d common/cnxk: set key length for PDCP algos
Set proper bits in the context based on key length for PDCP
algorithms. This is required to support ZUC 256bit key cases.

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-08 21:31:07 +02:00
Tejasree Kondoj
2d5ca27281 common/cnxk: support UDP port verification
Adding support to verify UDP encapsulation ports
in IPsec inbound.

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2021-10-08 21:31:07 +02:00
Andrew Rybchenko
34f92e82dd net/sfc: relax SW packets/bytes atomic ops memory ordering
No barriers are required when stats are incremented or read.

Fixes: 96fd2bd69b58 ("net/sfc: support flow action count in transfer rules")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-07 16:57:42 +02:00
Huisong Li
60a6b4a574 net/hns3: fix input parameters of MAC functions
When adding multicast and unicast MAC addresses, three descriptors and
one descriptor, respectively, are required for querying or adding an entry
in the MAC VLAN table. This patch uses the number of descriptors as an input
parameter for this task to make the function more robust.

Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-10-07 13:56:16 +02:00
Huisong Li
19e67d8ebc net/hns3: fix residual MAC after setting default MAC
This problem occurs in the following scenario:
1) a reset is encountered while the adapter is running.
2) a new default MAC address is set.

After the above two steps, the old default MAC address should no longer
take effect, but the current behavior is the opposite. This is due
to the "default_addr_setted" flag in hw->mac changing from 'true' to
'false' after the reset. As a result, the old MAC address is not removed
when the new default MAC address is set. This variable controls whether
to delete the old default MAC address when setting the default MAC
address, and it is only used when the mac_addr_set API is called for the
first time. In fact, when a unicast MAC address is deleted, the driver
doesn't return failure if the address isn't in the MAC address table.
So this patch removes the redundant and troublesome variable to
resolve this problem.

Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-10-07 13:56:16 +02:00
Yunjian Wang
a4ae7f51d2 net/ixgbe: fix memzone leak on queue re-configure
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device setup ops run like:

rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ixgbe_dev_rx_queue_release
rte_eth_dev_close
-->ixgbe_dev_close
---->ixgbe_dev_free_queues
------>ixgbe_dev_rx_queue_release
      (not been called due to nb_rx_queues and nb_tx_queues are 0)

And when the number of queues is reduced, the memzones of the higher
queue indexes will be lost. This will lead to a memory leak. So we
should release the memzone when releasing queues.

Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-07 13:38:16 +02:00
Yunjian Wang
e3188d5f99 net/i40e: fix memzone leak on queue re-configure
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device setup ops run like:

rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>i40e_dev_rx_queue_release
rte_eth_dev_close
-->i40e_dev_close
---->i40e_dev_free_queues
------>i40e_dev_rx_queue_release
      (not been called due to nb_rx_queues and nb_tx_queues are 0)

And when the number of queues is reduced, the memzones of the higher
queue indexes will be lost. This will lead to a memory leak. So we
should release the memzone when releasing queues.

Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-07 13:38:16 +02:00
Yunjian Wang
d3778bf39a net/ice: fix memzone leak on queue re-configure
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device setup ops run like:

rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ice_rx_queue_release
rte_eth_dev_close
-->ice_dev_close
---->ice_free_queues
------>ice_rx_queue_release
      (not been called due to nb_rx_queues and nb_tx_queues are 0)

And when the number of queues is reduced, the memzones of the higher
queue indexes will be lost. This will lead to a memory leak. So we
should release the memzone when releasing queues.

Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-07 13:38:16 +02:00
Yunjian Wang
09cbfa2da4 net/e1000: fix memzone leak on queue re-configure
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device setup ops run like:

rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>em_rx_queue_release
rte_eth_dev_close
-->eth_em_close
---->em_dev_free_queues
------>em_rx_queue_release
      (not been called due to nb_rx_queues and nb_tx_queues are 0)

And when the number of queues is reduced, the memzones of the higher
queue indexes will be lost. This will lead to a memory leak. So we
should release the memzone when releasing queues.

Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-07 13:38:16 +02:00
Andrew Rybchenko
b225783dda ethdev: remove legacy mirroring API
The more fine-grained flow API action RTE_FLOW_ACTION_TYPE_SAMPLE should
be used instead.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-07 13:02:26 +02:00
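A hedged sketch of mirroring ingress traffic with the sample action in
place of the removed mirror API; queue numbers and the 1:1 sampling ratio
are arbitrary, and error handling is trimmed:

    #include <rte_flow.h>

    /* Sketch: duplicate every ingress packet to queue 1 (ratio 1 = all
     * packets) while the original copy keeps going to queue 0. */
    static struct rte_flow *
    mirror_all_to_queue(uint16_t port_id, struct rte_flow_error *err)
    {
        static const struct rte_flow_attr attr = { .ingress = 1 };
        static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        static const struct rte_flow_action_queue mirror_q = { .index = 1 };
        static const struct rte_flow_action sample_sub[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &mirror_q },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        static const struct rte_flow_action_sample sample = {
            .ratio = 1, .actions = sample_sub,
        };
        static const struct rte_flow_action_queue orig_q = { .index = 0 };
        static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &orig_q },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }
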
Heinrich Kuhn
851f03e1ea net/nfp: cancel delayed LSC work in port close
The link state change interrupt handler of the NFP PMD will delay the
actual LSC work for a short period to ensure the link is stable. If the
link of the port changes state and the port is closed immediately after
the link event then a segmentation fault will occur. This happens
because the delayed LSC work eventually triggers and this logic will try
to access private port data that had been released when the port was
closed.

Fixes: 6c53f87b3497 ("nfp: add link status interrupt")
Cc: stable@dpdk.org

Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
2021-10-07 12:19:53 +02:00
Xueming Li
7483341ae5 ethdev: change queue release callback
Currently, most ethdev callback APIs use a queue ID as parameter, but the Rx
and Tx queue release callbacks use the queue object, which is used by the Rx
and Tx burst data plane callbacks.

To align with other eth device queue configuration callbacks:
- queue release callbacks are changed to use queue ID
- all drivers are adapted

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-06 19:16:03 +02:00
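A hedged sketch of what an adapted PMD release callback might look like
after this change; the queue struct and function names are hypothetical, and
previously the callback received the queue pointer itself as a void *:

    #include <ethdev_driver.h>
    #include <rte_malloc.h>

    struct my_rx_queue { void *sw_ring; }; /* hypothetical queue structure */

    /* Sketch: the callback now receives the device and queue ID and looks
     * up the queue object itself. */
    static void
    my_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
    {
        struct my_rx_queue *rxq = dev->data->rx_queues[qid];

        if (rxq == NULL)
            return;
        rte_free(rxq->sw_ring);
        rte_free(rxq);
        dev->data->rx_queues[qid] = NULL;
    }
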
Xueming Li
49ed322469 ethdev: make queue release callback optional
Some drivers don't need Rx and Tx queue release callback, make them
optional. Clean up empty queue release callbacks for some drivers.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2021-10-06 19:16:03 +02:00
Satheesh Paul
d74d3744da common/cnxk: fix freeing MCAM counter
Upon MCAM allocation failure, free counters only if counters
were allocated earlier for the flow rule.

Fixes: f9af90807466 ("common/cnxk: add mcam utility API")
Cc: stable@dpdk.org

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-04 17:43:07 +02:00