Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device goes through setup ops
like:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ixgbe_dev_rx_queue_release
rte_eth_dev_close
-->ixgbe_dev_close
---->ixgbe_dev_free_queues
------>ixgbe_dev_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is later reduced, the memzones of the
higher-indexed queues are lost, which leads to a memory leak. So we
should release the memzone when releasing the queue.
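A minimal sketch of the idea (the queue struct and field names are
illustrative, not the exact ixgbe definitions): remember the HW ring
memzone in the queue structure at setup time and free it in the queue
release callback, so the close path no longer depends on nb_rx_queues
and nb_tx_queues being non-zero.

  #include <rte_malloc.h>
  #include <rte_memzone.h>

  struct example_rx_queue {
          const struct rte_memzone *mz; /* HW ring memzone saved at setup */
          /* ... other queue state ... */
  };

  static void
  example_rx_queue_release(struct example_rx_queue *rxq)
  {
          if (rxq == NULL)
                  return;
          rte_memzone_free(rxq->mz); /* free the ring memzone with the queue */
          rte_free(rxq);
  }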
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device goes through setup ops
like:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>i40e_dev_rx_queue_release
rte_eth_dev_close
-->i40e_dev_close
---->i40e_dev_free_queues
------>i40e_dev_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is later reduced, the memzones of the
higher-indexed queues are lost, which leads to a memory leak. So we
should release the memzone when releasing the queue.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device goes through setup ops
like:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ice_rx_queue_release
rte_eth_dev_close
-->ice_dev_close
---->ice_free_queues
------>ice_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is later reduced, the memzones of the
higher-indexed queues are lost, which leads to a memory leak. So we
should release the memzone when releasing the queue.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally, the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device goes through setup ops
like:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>em_rx_queue_release
rte_eth_dev_close
-->eth_em_close
---->em_dev_free_queues
------>em_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is later reduced, the memzones of the
higher-indexed queues are lost, which leads to a memory leak. So we
should release the memzone when releasing the queue.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
The more fine-grained flow API action RTE_FLOW_ACTION_TYPE_SAMPLE
should be used instead.
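As a hedged illustration only (the port numbers, attributes and the
choice of a port-id target are assumptions, and which combinations a
given PMD accepts varies), mirroring all ingress traffic with the
sample action could look like:

  #include <rte_flow.h>

  /* ratio 1 means every packet is sampled, i.e. full mirroring. */
  static struct rte_flow *
  mirror_all_rx(uint16_t port_id, uint16_t mirror_port_id,
                struct rte_flow_error *err)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_port_id target = { .id = mirror_port_id };
          struct rte_flow_action sample_actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &target },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_action_sample sample = {
                  .ratio = 1,
                  .actions = sample_actions,
          };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, err);
  }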
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The link state change interrupt handler of the NFP PMD will delay the
actual LSC work for a short period to ensure the link is stable. If the
link of the port changes state and the port is closed immediately after
the link event then a segmentation fault will occur. This happens
because the delayed LSC work eventually triggers and this logic will try
to access private port data that had been released when the port was
closed.
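Sketch of the general shape of such a fix (the callback and close
function here are placeholders, not the NFP PMD symbols): if the LSC
handler defers its work through an EAL alarm, the close path has to
cancel any pending instance of that work before the private port data
it dereferences is freed.

  #include <rte_alarm.h>

  static void
  delayed_lsc_work(void *arg)
  {
          /* ...would re-read link state and update private port data... */
          (void)arg;
  }

  static void
  example_close_port(void *port_priv)
  {
          /* Drop delayed LSC work that has not fired yet for this port. */
          rte_eal_alarm_cancel(delayed_lsc_work, port_priv);

          /* ...now it is safe to release port_priv... */
  }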
Fixes: 6c53f87b3497 ("nfp: add link status interrupt")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Currently, most ethdev callback APIs use a queue ID as parameter, but
the Rx and Tx queue release callbacks take the queue object that is
also used by the Rx and Tx burst data plane callbacks.
To align with the other ethdev queue configuration callbacks:
- queue release callbacks are changed to use queue ID
- all drivers are adapted (see the sketch below)
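A rough sketch of the new driver-side callback shape (treat the exact
prototypes and header name as assumptions derived from this change, not
a definitive reference):

  #include <ethdev_driver.h>

  /* before: void (*rx_queue_release)(void *rxq);
   * after:  void (*rx_queue_release)(struct rte_eth_dev *dev,
   *                                  uint16_t queue_id);
   * so a driver now looks up its own queue object from the ID. */
  static void
  example_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
  {
          void *rxq = dev->data->rx_queues[qid];

          if (rxq == NULL)
                  return;
          /* ... free driver queue resources ... */
          dev->data->rx_queues[qid] = NULL;
  }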
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Some drivers don't need the Rx and Tx queue release callbacks, so make
them optional. Clean up the empty queue release callbacks in some
drivers.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Upon MCAM allocation failure, free counters only if counters
were allocated earlier for the flow rule.
Fixes: f9af90807466 ("common/cnxk: add mcam utility API")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reflect globally enabled Rx and Tx offloads in the queue configuration.
Also fix an issue with LMT data preparation for multi-segment packets.
Fixes: a24af6361e37 ("net/cnxk: add Tx queue setup and release")
Fixes: a86144cd9ded ("net/cnxk: add Rx queue setup and release")
Fixes: 305ca2c4c382 ("net/cnxk: support multi-segment vector Tx")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch adds support to configure the channel mask which will
be used by rte_flow when adding flow rules with the inline IPsec
action.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add capabilities for AES_CBC and HMAC_SHA1 for cn9k
security offload.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Set IP6_UDP_OPT in the NIX Rx config to allow an optional
UDP checksum for IPv6 in case of security offload.
Also disable drop_re when inline inbound is enabled.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to update the ethertype for mixed IPsec tunnel
versions, and also set et_overwr for inbound IPsec.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add anti-replay support for the cn9k platform using
a SW anti-replay check.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to create and submit CPT instructions on Tx
on CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to receive CPT processed packets on Rx via
second pass on CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to create and submit CPT instructions on Tx
on CN9K SoC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to receive CPT processed packets on Rx for
CN9K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.
This patch also changes dpdk-devbind.py to list the new inline
device as a misc device.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to configure flow rules with inline IPsec action.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Currently, only the NIX0 configuration is set up in the AURA for
backpressure. This patch adds support for NIX1 as well.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch adds CQ support in the Tx path for applications.
This enables packet completion events on the CQ for
requested packets.
Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Restore SQB AURA/POOL limit before destroying SQB to be
able to drain all the buffers from the aura.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
For CPT LF IQ enable, set CPT_LF_CTL[ENA] before setting
CPT_LF_INPROG[EENA] to true.
For CPT LF IQ disable, align the sequence with that of the HRM.
This patch also aligns the space for instructions in the CPT LF
to ROC_ALIGN so the whole memory is cache aligned, and includes
other minor fixes/additions.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Disable CQ drop when inline inbound is enabled. CQ drop
is not supported for second pass IPsec decrypted packets.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add API to support setting up NIX inline inbound and
NIX inline outbound. For inbound, the SA base is set up
on the NIX PFFUNC; for outbound, the required number of
CPT LFs is attached to the NIX PFFUNC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to init and fini inline device with NIX LF,
SSO LF and SSOW LF for inline inbound IPSec in CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add API to set up NIX inline device IRQs. This registers
IRQs for errors in case of NIX, CPT LF and SSOW, and the
get-work interrupt in case of SSO.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Change the NIX debug API and queue API interfaces for use by
the internal NIX inline device initialization.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Rework the interface of the SSO internal functions so they can be used
for the NIX inline device's SSO LFs.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add security support to init cn9k fast path SA data
for AES GCM and AES CBC + HMAC SHA1.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reported by "gcc (GCC) 12.0.0 20211003 (experimental)":
../drivers/net/cxgbe/cxgbe_ethdev.c:
In function ‘cxgbe_dev_rx_queue_setup’:
../drivers/net/cxgbe/cxgbe_ethdev.c:682:24:
error: the comparison will always evaluate as ‘true’ for the
address of ‘fl’ will never be NULL [-Werror=address]
682 | if ((&rxq->fl) != NULL)
| ^~
Fix it by removing the useless check.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Adjust the parameters order to the eth_xstats_get_by_id_t prototype:
make ids the second parameter, matching eth_xstats_get_by_id_t.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update xstats by IDs callbacks documentation in accordance with
ethdev usage of these callbacks. Document valid combinations of
input arguments to make driver implementation simpler.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Relax requirements on getting xstats names by IDs. After the patch,
the corresponding driver operation is called with non-NULL ids
and xstats_names parameters only.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Document valid combinations of input arguments in accordance with
current implementation in ethdev.
Fixes: 79c913a42f0e ("ethdev: retrieve xstats by ID")
Cc: stable@dpdk.org
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The af_packet pmd driver binds to a raw socket and allows sending and
receiving of packets through the kernel.
Since commit [1], the kernel strips the vlan tags early in
__netif_receive_skb_core(), so we receive untagged packets while running
with the af_packet pmd.
Luckily for us, the skb vlan-related fields are still populated from the
stripped vlan tags, so we end up having all the information that we need
in the mbuf.
Having the pmd driver support DEV_RX_OFFLOAD_VLAN_STRIP allows the
application to control the desired vlan stripping behavior, until we
have a way to describe offloads that can't be disabled by pmd drivers.
This patch will cause a change in the default way that the af_packet pmd
treats received vlan-tagged frames. While previously, the application
was required to check the PKT_RX_VLAN_STRIPPED flag, after this patch,
the pmd will re-insert the vlan tag transparently to the user, unless
the DEV_RX_OFFLOAD_VLAN_STRIP is enabled in rxmode.offloads.
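As a small usage sketch (function and parameter names are placeholders,
not from the patch), an application that wants to keep the stripped-tag
behaviour would request the offload explicitly and read the tag from
mbuf metadata:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  static int
  configure_with_vlan_strip(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
  {
          struct rte_eth_conf conf = { 0 };

          conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
          return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
  }

  /* On Rx, a stripped tag is reported through mbuf metadata. */
  static inline uint16_t
  rx_vlan_tci(const struct rte_mbuf *m)
  {
          if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
                  return m->vlan_tci; /* tag removed from packet data */
          return 0; /* tag, if any, is still inline in the frame */
  }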
I've attempted a preliminary benchmark to understand if the change could
cause a sizable performance hit.
Setup:
Two virtual machines running on top of an ESXi hypervisor
Tx: DPDK app (running on top of vmxnet3 PMD)
Rx: af_packet (running on top of a kernel vmxnet3 interface)
Packet size: 68 (packet contains a vlan tag)
Rates:
Tx - 1.419 Mpps
Rx (without vlan insertion) - 1227636 pps
Rx (with vlan insertion) - 1220081 pps
At first glance, we don't seem to have a large degradation in terms of
packet rate.
[1]
https://github.com/torvalds/linux/commit/bcc6d47903612c3861201cc3a866fb60
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support to fetch port and queue stats via xstats API. Also remove
queue stats from basic stats because they're now available via xstats
API for the VF.
Signed-off-by: Nikhil Vasoya <nikhil.vasoya@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Previously, memif socket hash is always allocated on NUMA socket 0.
If the application is entirely running on another NUMA socket and EAL
--socket-limit prevents memory allocation on NUMA socket 0, memif
creation fails with "HASH: memory allocation failed" error.
This patch allows allocating memif socket hash on any NUMA socket.
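A minimal sketch of the approach (the parameter values and the hash
name are placeholders, not the memif code): create the hash with
SOCKET_ID_ANY so the allocation is not pinned to NUMA node 0.

  #include <rte_hash.h>
  #include <rte_jhash.h>
  #include <rte_memory.h>

  static struct rte_hash *
  create_socket_hash(void)
  {
          struct rte_hash_parameters params = {
                  .name = "memif_socket_hash",   /* hypothetical name */
                  .entries = 256,
                  .key_len = 108,                /* e.g. UNIX socket path length */
                  .hash_func = rte_jhash,
                  .socket_id = SOCKET_ID_ANY,    /* any NUMA node is acceptable */
          };

          return rte_hash_create(&params);
  }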
Signed-off-by: Junxiao Shi <git@mail1.yoursunny.com>
Reviewed-by: Jakub Grajciar <jgrajcia@cisco.com>
An issue exists on some HW revisions whereby, if a CQ overflows,
NIX may show undefined behavior, e.g. free incorrect buffers.
Implement a workaround for this known HW issue.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Per a known HW issue, RVUM interrupts may get dropped: if an RVUM
interrupt event occurs when PCCPF_XXX_MSIX_CAP_HDR[MSIXEN]=0, no
interrupt is triggered, which is expected. But after MSIXEN is set to
1, if the same interrupt event occurs again, still no interrupt is
triggered.
As a workaround, all RVUM interrupt lines should be cleared between
MSIXEN=0 and MSIXEN=1.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
It was announced that `min` and `max` fields and variables in public
headers would be renamed to avoid a clash with the `min` and `max`
macros defined in Windows headers. However, this is unnecessary,
because these are function-like macros, which are not expanded unless
the next token is an opening parenthesis. That cannot happen for
integer fields and variables.
Remove the deprecation notices.
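A minimal illustration of that rule (not taken from the patch): a
function-like macro only expands when the next token is an opening
parenthesis, so fields and variables named "min"/"max" are untouched.

  #define min(a, b) ((a) < (b) ? (a) : (b))

  struct range {
          int min; /* not expanded: "min" is followed by ';' */
          int max;
  };

  static int
  clamp_high(const struct range *r, int v)
  {
          int max = r->max; /* "max" as a variable name is also fine */

          return min(v, max); /* expands here: "min" is followed by '(' */
  }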
Fixes: c4379ee599ef ("doc: announce API changes for Windows compatibility")
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
The definition of the `rte_ether_hdr` structure used a workaround
allowing DPDK and Windows SDK headers to be used in the same file,
because the Windows SDK defines `s_addr` as a macro. Rename `s_addr` to
`src_addr` and `d_addr` to `dst_addr` to avoid the conflict and remove
the workaround.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2021-July/215270.html
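A short sketch of the rename from a user's point of view, assuming a
DPDK revision that already carries the change (before: hdr->s_addr and
hdr->d_addr; after: hdr->src_addr and hdr->dst_addr):

  #include <rte_ether.h>

  static void
  swap_mac(struct rte_ether_hdr *hdr)
  {
          struct rte_ether_addr tmp = hdr->src_addr;

          hdr->src_addr = hdr->dst_addr;
          hdr->dst_addr = tmp;
  }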
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>