This patch adds support for multiple modems by assigning
a modem ID as a devarg in vdev creation.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
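As an illustration of the patch above, one vdev per modem might be created as below; the driver name "baseband_modem" and the devarg key "modem" are hypothetical placeholders for whatever this driver actually registers:

```c
#include <rte_bus_vdev.h>

/* Create one vdev instance per modem, selecting the modem via a devarg.
 * "baseband_modem" and the "modem" key are illustrative names only. */
static int
create_modem_vdevs(void)
{
	if (rte_vdev_init("baseband_modem0", "modem=0") != 0)
		return -1;
	return rte_vdev_init("baseband_modem1", "modem=1");
}
```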
This patch adds a devarg to take the maximum number of queues as input.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
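A sketch of how a PMD typically consumes such a devarg via rte_kvargs; the key name "max_nb_queues" is an assumption for illustration:

```c
#include <stdint.h>
#include <stdlib.h>

#include <rte_common.h>
#include <rte_kvargs.h>

/* Handler invoked by rte_kvargs_process() for each matching key. */
static int
parse_u32_arg(const char *key __rte_unused, const char *value, void *opaque)
{
	*(uint32_t *)opaque = (uint32_t)strtoul(value, NULL, 0);
	return 0;
}

/* Parse "max_nb_queues=<n>" from a devargs string. */
static int
parse_max_queues(const char *args, uint32_t *max_nb_queues)
{
	static const char * const keys[] = { "max_nb_queues", NULL };
	struct rte_kvargs *kvlist = rte_kvargs_parse(args, keys);

	if (kvlist == NULL)
		return -1;
	rte_kvargs_process(kvlist, "max_nb_queues", parse_u32_arg,
			   max_nb_queues);
	rte_kvargs_free(kvlist);
	return 0;
}
```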
Added device information to explicitly capture the assumed
byte endianness of the input/output data being processed.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This adds support for AEAD and protocol offload with raw APIs
in the dpaa_sec driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch improves the raw vector support in the dpaa_sec driver
for the auth-only and chain use cases.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds a raw vector API framework for the dpaa_sec driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch improves the error conditions and the support of
wireless algorithms with raw buffers.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add support for out-of-order processing with raw vector APIs.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds support for auth-only operations with raw buffer APIs.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds a framework for raw API support.
The initial patch tests only the cipher-only part.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch renames sgl to src_sgl in struct rte_crypto_sym_vec
to help differentiate between the source and destination SGLs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
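A minimal before/after sketch of the renamed field, assuming only the fields shown here:

```c
#include <rte_crypto_sym.h>

/* The burst descriptor now names its source SGL explicitly: code that
 * previously set vec->sgl sets vec->src_sgl instead, leaving room for
 * a destination SGL to be added alongside without ambiguity. */
static void
fill_sym_vec(struct rte_crypto_sym_vec *vec,
	     struct rte_crypto_sgl *sgls, uint32_t num)
{
	vec->num = num;
	vec->src_sgl = sgls;	/* was: vec->sgl = sgls; */
}
```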
Adds a max queue pairs limit devargs for the crypto cnxk driver. This
can be used to set a limit on the maximum number of queue pairs
supported by the device. The default value is 63.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Reviewed-by: Anoob Joseph <anoobj@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Add the RTE_ prefix to the macro used to register a mempool driver.
The old one is still available but deprecated.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
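A sketch of registering mempool ops with the renamed macro; the "stub" handler names and bodies are hypothetical placeholders:

```c
#include <rte_common.h>
#include <rte_mempool.h>

/* Stub handlers for a hypothetical mempool driver. */
static int
stub_alloc(struct rte_mempool *mp __rte_unused) { return 0; }
static void
stub_free(struct rte_mempool *mp __rte_unused) { }
static int
stub_enqueue(struct rte_mempool *mp __rte_unused,
	     void * const *obj_table __rte_unused,
	     unsigned int n __rte_unused) { return 0; }
static int
stub_dequeue(struct rte_mempool *mp __rte_unused,
	     void **obj_table __rte_unused,
	     unsigned int n __rte_unused) { return 0; }
static unsigned int
stub_get_count(const struct rte_mempool *mp __rte_unused) { return 0; }

static const struct rte_mempool_ops stub_ops = {
	.name = "stub",
	.alloc = stub_alloc,
	.free = stub_free,
	.enqueue = stub_enqueue,
	.dequeue = stub_dequeue,
	.get_count = stub_get_count,
};

/* Formerly MEMPOOL_REGISTER_OPS(stub_ops); the old macro still works
 * but is deprecated. */
RTE_MEMPOOL_REGISTER_OPS(stub_ops);
```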
Add the RTE_ prefix to internal APIs defined in the public header.
Use the prefix instead of a double underscore.
Use uppercase for macros in the case of a name conflict.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.
The MEMPOOL_F_NON_IO flag, added in this release, is simply renamed
to have the RTE_ prefix.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
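A sketch of a creation call using the RTE_-prefixed flag names; the pool parameters here are arbitrary:

```c
#include <rte_mempool.h>

/* Same creation call as before the rename, with the RTE_-prefixed
 * flags; the unprefixed MEMPOOL_F_* aliases still compile but are
 * deprecated. */
static struct rte_mempool *
create_sp_sc_pool(void)
{
	return rte_mempool_create("sp_sc_pool", 4096, 128, 64, 0,
				  NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
				  RTE_MEMPOOL_F_SP_PUT |
				  RTE_MEMPOOL_F_SC_GET);
}
```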
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a new devarg: mr_mempool_reg_en.
On the TX slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Direct and indirect mbufs are
supported, as well as externally-attached ones from the MLX5 MPRQ
feature. Lookup in the database of non-mempool memory is used as the
last resort.
RX mempools are registered regardless of the devarg value.
On the RX data path, only the local cache and the mempool database are
used. If implicit mempool registration is disabled, these mempools
are unregistered at port stop, releasing the MRs.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add an internal API to register mempools, that is, to create memory
regions (MR) for their memory and store them in a separate database.
The implementation deals with multi-process, so that class drivers
don't need to. Each protection domain has its own database. Memory
regions can be shared within a database if they represent a single
hugepage covering one or more mempools entirely.
Add an internal API to look up an MR key for an address that belongs
to a known mempool. It is the responsibility of a class driver
to extract the mempool from an mbuf.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Removing the rawdev-based octeontx2-ep driver as the dependent
common/octeontx2 will soon be going away. Moreover, this driver is no
longer required as the net/octeontx_ep driver is sufficient.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Removing the rawdev-based octeontx2-dma driver as the dependent
common/octeontx2 will soon be going away. Also, a new DMA driver will
come in its place once the rte_dmadev library is in.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Add a burst capacity check API to the dmadev library. This API is useful
to applications which need to know how many descriptors can be enqueued
in the current batch. For example, it could be used to determine whether
all segments of a multi-segment packet can be enqueued in the same batch
(to avoid half-offload of the packet).
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
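A sketch of the intended usage, enqueueing a multi-segment packet only when the whole packet fits in the current batch; the helper name is hypothetical:

```c
#include <rte_dmadev.h>

/* Enqueue all segments of one packet only if the current batch has
 * room for every one of them, avoiding half-offload of the packet. */
static int
enqueue_pkt_segs(int16_t dev_id, uint16_t vchan, const rte_iova_t *src,
		 const rte_iova_t *dst, const uint32_t *len,
		 uint16_t nb_segs)
{
	uint16_t i;

	if (rte_dma_burst_capacity(dev_id, vchan) < nb_segs)
		return -1; /* caller can submit the batch and retry */

	for (i = 0; i < nb_segs; i++) {
		int ret = rte_dma_copy(dev_id, vchan, src[i], dst[i],
				len[i],
				i == nb_segs - 1 ?
					RTE_DMA_OP_FLAG_SUBMIT : 0);
		if (ret < 0)
			return ret;
	}
	return 0;
}
```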
Add a function to check if a device or vchan has completed all jobs
assigned to it, without gathering the results. This is primarily for
use in testing, to allow the hardware to be in a known state prior to
gathering completions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
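A test-helper sketch, assuming the API added here is rte_dma_vchan_status() with an RTE_DMA_VCHAN_IDLE state as declared in rte_dmadev.h:

```c
#include <rte_dmadev.h>

/* Busy-wait until the vchan reports no jobs in flight, so completions
 * can then be gathered from a known hardware state. */
static void
wait_vchan_idle(int16_t dev_id, uint16_t vchan)
{
	enum rte_dma_vchan_status st;

	do {
		if (rte_dma_vchan_status(dev_id, vchan, &st) != 0)
			return; /* op not implemented by this driver */
	} while (st != RTE_DMA_VCHAN_IDLE);
}
```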
The skeleton dmadev driver, along the lines of the rawdev skeleton, is
for showcasing the dmadev library.
The design of the skeleton involves a virtual device which is plugged
into the VDEV bus on initialization.
Also, enable compilation of the dmadev skeleton driver.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
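A minimal sketch of plugging the skeleton into the vdev bus at runtime; "dma_skeleton" is assumed to be the registered vdev name:

```c
#include <rte_bus_vdev.h>
#include <rte_dmadev.h>

/* Hot-plug the skeleton driver on the vdev bus; the new device then
 * becomes enumerable through the dmadev API. */
static int
probe_skeleton_dma(void)
{
	if (rte_vdev_init("dma_skeleton", NULL) != 0)
		return -1;
	return rte_dma_count_avail() > 0 ? 0 : -1;
}
```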
'dmadev' is a generic type of DMA device.
This patch introduces the 'dmadev' device allocation functions.
The infrastructure is prepared to welcome drivers in drivers/dma/
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
The driver code wrongly assumed that all the addresses of ring buffers
in the secondary process are the same as those in the primary process.
This is not always correct, as the channels could be mapped to
different addresses in the secondary process.
Fix this by keeping track of all the mapped addresses from the primary
process in the shared uio_res, and having the secondary process map to
the same addresses.
Fixes: 831dba47bd36 ("bus/vmbus: add Hyper-V virtual bus support")
Cc: stable@dpdk.org
Reported-by: Jonathan Erb <jonathan.erb@banduracyber.com>
Signed-off-by: Long Li <longli@microsoft.com>
Acked-by: Stephen Hemminger <sthemmin@microsoft.com>
The CN98xx SoC comes with two CPT blocks, compared to CN96xx and
CN93xx, to achieve higher performance.
Add support to allocate all LFs of a VF with an even BDF from CPT0
and all LFs of a VF with an odd BDF from CPT1.
If LFs are not available in one block, they will be allocated
from the alternate block.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch fixes the issue by adjusting the outbound size after
turbo decoding when the appended CRC is meant to be dropped.
Fixes: f404dfe35cc3 ("baseband/acc100: support 4G processing")
Cc: stable@dpdk.org
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
This implements in the PMD the option to drop the CB CRC
after 4G decoding, to help with transport block concatenation.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
This adds support for operations
where a CRC16 is to be appended or checked.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
The driver consumes 208B of tailroom but its minimum requirement
is set as 8B, and it needs 48B of headroom but the minimum requirement
is set as 24B. This patch corrects the minimum requirements
which applications should honour.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Added support for a 256-bit key length for ZUC in the
crypto_cn10k PMD.
Added support for digest lengths of 8 and 16 bytes
for ZUC with a 256-bit key length.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
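An application-side sketch of requesting this combination through a standard auth xform; whether a given PMD (here crypto_cn10k) accepts it is advertised via its capabilities:

```c
#include <string.h>

#include <rte_crypto_sym.h>

/* Request ZUC-EIA3 authentication with a 256-bit key and a 16-byte
 * digest (an 8-byte digest is also supported per the patch above). */
static void
fill_zuc256_auth_xform(struct rte_crypto_sym_xform *xform,
		       const uint8_t key[32])
{
	memset(xform, 0, sizeof(*xform));
	xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
	xform->auth.algo = RTE_CRYPTO_AUTH_ZUC_EIA3;
	xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
	xform->auth.key.data = key;
	xform->auth.key.length = 32;	/* 256-bit key */
	xform->auth.digest_length = 16;
	/* IV offset/length setup omitted; it follows session config. */
}
```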
Set the proper bits in the context based on the key length for PDCP
algorithms. This is required to support the ZUC 256-bit key cases.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
No barriers are required when stats are incremented or read.
Fixes: 96fd2bd69b58 ("net/sfc: support flow action count in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
When adding a multicast or a unicast MAC address, three descriptors or
one descriptor, respectively, are required for querying or adding an
entry in the MAC VLAN table. This patch uses the number of descriptors
as an input parameter for this task to make the function more robust.
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This problem occurs in the following scenario:
1) a reset is encountered while the adapter is running;
2) a new default MAC address is set.
After the above two steps, the old default MAC address should no
longer take effect, but the current behavior is contrary to that. This
is due to "default_addr_setted" in hw->mac changing from 'true' to
'false' after the reset. As a result, the old MAC address is not
removed when the new default MAC address is set. This variable
controls whether the old default MAC address is deleted when setting
the default MAC address, and it is only used when the mac_addr_set API
is called for the first time. In fact, when a unicast MAC address is
deleted, if the address isn't in the MAC address table, the driver
doesn't return failure. So this patch removes the redundant and
troublesome variable to resolve this problem.
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Normally, when closing the device, the queue memzone should be
freed. But the memzone is not freed when the device setup ops run
as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ixgbe_dev_rx_queue_release
rte_eth_dev_close
-->ixgbe_dev_close
---->ixgbe_dev_free_queues
------>ixgbe_dev_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is reduced, the memzones of the
higher-indexed queues are lost. This leads to a memory leak. So we
should release the memzone when releasing queues.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
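A sketch of the fix pattern common to this series of memzone-leak fixes: keep the memzone pointer in the queue structure and free it on release. The structure and names are illustrative, not the actual driver code:

```c
#include <rte_malloc.h>
#include <rte_memzone.h>

/* Illustrative queue structure: the driver remembers the memzone that
 * backs the HW ring so release can free it directly, instead of a
 * later lookup by queue index that breaks when queue counts shrink. */
struct hw_rxq {
	const struct rte_memzone *mz;	/* HW ring backing store */
	/* descriptor pointers, counters, etc. */
};

static void
hw_rx_queue_release(struct hw_rxq *rxq)
{
	if (rxq == NULL)
		return;
	rte_memzone_free(rxq->mz);
	rte_free(rxq);
}
```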
Normally, when closing the device, the queue memzone should be
freed. But the memzone is not freed when the device setup ops run
as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>i40e_dev_rx_queue_release
rte_eth_dev_close
-->i40e_dev_close
---->i40e_dev_free_queues
------>i40e_dev_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is reduced, the memzones of the
higher-indexed queues are lost. This leads to a memory leak. So we
should release the memzone when releasing queues.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally, when closing the device, the queue memzone should be
freed. But the memzone is not freed when the device setup ops run
as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ice_rx_queue_release
rte_eth_dev_close
-->ice_dev_close
---->ice_free_queues
------>ice_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is reduced, the memzones of the
higher-indexed queues are lost. This leads to a memory leak. So we
should release the memzone when releasing queues.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally, when closing the device, the queue memzone should be
freed. But the memzone is not freed when the device setup ops run
as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>em_rx_queue_release
rte_eth_dev_close
-->eth_em_close
---->em_dev_free_queues
------>em_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is reduced, the memzones of the
higher-indexed queues are lost. This leads to a memory leak. So we
should release the memzone when releasing queues.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
The more fine-grained flow API action RTE_FLOW_ACTION_TYPE_SAMPLE
should be used instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The link state change interrupt handler of the NFP PMD will delay the
actual LSC work for a short period to ensure the link is stable. If the
link of the port changes state and the port is closed immediately after
the link event, a segmentation fault will occur. This happens
because the delayed LSC work eventually triggers and this logic will try
to access private port data that had been released when the port was
closed.
Fixes: 6c53f87b3497 ("nfp: add link status interrupt")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Currently, most ethdev callback APIs use a queue ID as parameter, but
the Rx and Tx queue release callbacks use the queue object, which is
used by the Rx and Tx burst data plane callbacks.
To align with the other ethdev queue configuration callbacks:
- queue release callbacks are changed to use queue ID
- all drivers are adapted
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
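A driver-side sketch of the new callback shape under this change; the rxq handling shown is illustrative rather than taken from any one driver:

```c
#include <ethdev_driver.h>
#include <rte_malloc.h>

/* The release callback now receives the port and a queue ID rather
 * than the queue object itself. */
static void
my_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
	void *rxq = dev->data->rx_queues[qid];

	if (rxq == NULL)
		return;
	rte_free(rxq);
	dev->data->rx_queues[qid] = NULL;
}
```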
Some drivers don't need Rx and Tx queue release callbacks, so make
them optional. Clean up the empty queue release callbacks in some
drivers.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Upon MCAM allocation failure, free counters only if counters
were allocated earlier for the flow rule.
Fixes: f9af90807466 ("common/cnxk: add mcam utility API")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>