Windows GSG included a section only on virt2phys driver installation,
but not on NetUIO. The content of the section duplicated documentation
in dpdk-kmods, but contained no links to it, only a reference.
Add subsections for virt2phys and NetUIO, explaining their roles.
Refer to documentation in dpdk-kmods as the authoritative source,
but leave specific diagnostic and usage hints in the GSG.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Independent header compilation test (chkincs) was disabled on Windows.
The comment stated that the shebang line in the generator script was not
working. Meson 0.57.0, currently recommended for Windows, successfully
parses that line and invokes the script. Remove the OS restriction
as its reason no longer applies.
Fixes: 05050ac4ce99 ("build: add header includes check")
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The header was not intended to be a public one.
DPDK users should use `rte_mem_virt2iova()` to translate addresses.
Other virt2phys users should use the header from the driver instead.
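For reference, a minimal sketch of the public API usage (the buffer size and the printing are illustrative, not part of the patch):

#include <inttypes.h>
#include <stdio.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Translate the virtual address of a DPDK-allocated buffer to its IO
 * address (the physical address when IOVA-as-PA mode is in use). */
static int
print_buf_iova(void)
{
    void *buf = rte_malloc(NULL, 4096, 0);
    rte_iova_t iova;

    if (buf == NULL)
        return -1;

    iova = rte_mem_virt2iova(buf);
    printf("virt=%p iova=0x%" PRIx64 "\n", buf, (uint64_t)iova);
    rte_free(buf);
    return 0;
}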
Fixes: 2a5d547a4a9b ("eal/windows: implement basic memory management")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
On Windows, the -l/--lcores EAL option was unable to process CPU sets
containing CPUs other than 0 and 1, because the CPU_COUNT() macro
only checked those two CPUs in the set. Fix CPU_COUNT() by enumerating
all possible CPU indices.
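A sketch of the corrected approach, under assumed types (not the actual Windows EAL definitions): every possible CPU index is checked instead of a hard-coded few.

#define EXAMPLE_CPU_SETSIZE 256   /* assumed maximum number of CPUs */

/* Count the CPUs present in a bit set by enumerating all indices. */
static unsigned int
example_cpu_count(const unsigned long long set[EXAMPLE_CPU_SETSIZE / 64])
{
    unsigned int count = 0;
    unsigned int cpu;

    for (cpu = 0; cpu < EXAMPLE_CPU_SETSIZE; cpu++)
        if (set[cpu / 64] & (1ULL << (cpu % 64)))
            count++;

    return count;
}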
Fixes: e8428a9d89f1 ("eal/windows: add some basic functions and macros")
Cc: stable@dpdk.org
Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
No barriers are required when stats are incremented or read.
Fixes: 96fd2bd69b58 ("net/sfc: support flow action count in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Querying or adding a MAC VLAN table entry requires three descriptors
for a multicast MAC address but only one for a unicast address.
This patch passes the required number of descriptors as an input
parameter, making the function safer.
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This problem occurs in the following scenario:
1) a reset is encountered while the adapter is running;
2) a new default MAC address is set.
After these two steps the old default MAC address should no longer
take effect, but the current behavior is the opposite. The cause is
that "default_addr_setted" in hw->mac changes from 'true' to 'false'
after the reset, so the old MAC address is not removed when the new
default MAC address is set. This variable controls whether the old
default MAC address is deleted when a default MAC address is set, and
it only matters the first time the mac_addr_set API is called. In fact,
when a unicast MAC address is deleted and the address is not in the
MAC address table, the driver does not return failure. So this patch
removes the redundant and troublesome variable to resolve the problem.
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Normally the queue memzone should be freed when the device is closed,
but it is not freed when the device is set up as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ixgbe_dev_rx_queue_release
rte_eth_dev_close
-->ixgbe_dev_close
---->ixgbe_dev_free_queues
------>ixgbe_dev_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
When the queue number is then reduced, the memzones of the dropped
higher-indexed queues are lost, which leads to a memory leak. So the
memzone should be released when the queue is released.
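A simplified sketch of the idea (the structure and function names are hypothetical, not the actual ixgbe code): free the ring memzone as part of queue release so it cannot be leaked.

#include <rte_malloc.h>
#include <rte_memzone.h>

/* Hypothetical queue structure: 'mz' is the memzone reserved for the HW
 * descriptor ring when the queue was set up. */
struct example_rx_queue {
    const struct rte_memzone *mz;
    /* ... other queue state ... */
};

/* Release the queue and its ring memzone so the memzone is not lost
 * when the queue count shrinks before the next setup. */
static void
example_rx_queue_release(struct example_rx_queue *rxq)
{
    if (rxq == NULL)
        return;
    if (rxq->mz != NULL)
        rte_memzone_free(rxq->mz);
    rte_free(rxq);
}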
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally the queue memzone should be freed when the device is closed,
but it is not freed when the device is set up as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>i40e_dev_rx_queue_release
rte_eth_dev_close
-->i40e_dev_close
---->i40e_dev_free_queues
------>i40e_dev_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
When the queue number is then reduced, the memzones of the dropped
higher-indexed queues are lost, which leads to a memory leak. So the
memzone should be released when the queue is released.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally the queue memzone should be freed when the device is closed,
but it is not freed when the device is set up as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ice_rx_queue_release
rte_eth_dev_close
-->ice_dev_close
---->ice_free_queues
------>ice_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
When the queue number is then reduced, the memzones of the dropped
higher-indexed queues are lost, which leads to a memory leak. So the
memzone should be released when the queue is released.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Normally the queue memzone should be freed when the device is closed,
but it is not freed when the device is set up as follows:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>em_rx_queue_release
rte_eth_dev_close
-->eth_em_close
---->em_dev_free_queues
------>em_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
When the queue number is then reduced, the memzones of the dropped
higher-indexed queues are lost, which leads to a memory leak. So the
memzone should be released when the queue is released.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
The more fine-grained flow API action RTE_FLOW_ACTION_TYPE_SAMPLE
should be used instead.
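For reference, a minimal sketch of mirroring traffic with the SAMPLE action (port and queue numbers are illustrative; depending on the PMD, an explicit fate action for the original packet may also be required):

#include <rte_flow.h>

/* Mirror every ingress packet on 'port_id' to queue 'mirror_q':
 * ratio 1 means every packet is sampled. */
static struct rte_flow *
mirror_all_to_queue(uint16_t port_id, uint16_t mirror_q,
                    struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue to_mirror_q = { .index = mirror_q };
    struct rte_flow_action sample_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &to_mirror_q },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_action_sample sample_conf = {
        .ratio = 1,                 /* sample every packet */
        .actions = sample_actions,  /* applied to the sampled copies */
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}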
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The link state change interrupt handler of the NFP PMD will delay the
actual LSC work for a short period to ensure the link is stable. If the
link of the port changes state and the port is closed immediately after
the link event then a segmentation fault will occur. This happens
because the delayed LSC work eventually triggers and this logic will try
to access private port data that had been released when the port was
closed.
Fixes: 6c53f87b3497 ("nfp: add link status interrupt")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Currently, most ethdev callbacks take a queue ID as parameter, but the
Rx and Tx queue release callbacks take the queue object that is used by
the Rx and Tx burst data plane callbacks.
To align with the other per-queue configuration callbacks:
- queue release callbacks are changed to take a queue ID (see the sketch below)
- all drivers are adapted
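A sketch of the signature change (typedef names are illustrative, not the actual ethdev definitions):

#include <stdint.h>

struct rte_eth_dev;

/* Before: the callback received the queue object used by the burst path. */
typedef void (*old_queue_release_t)(void *queue);

/* After: the callback receives the device and a queue ID, matching the
 * other per-queue configuration callbacks. */
typedef void (*new_queue_release_t)(struct rte_eth_dev *dev,
                                    uint16_t queue_id);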
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Some drivers do not need the Rx and Tx queue release callbacks, so make
them optional. Clean up the empty queue release callbacks in those drivers.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Upon MCAM allocation failure, free counters only if counters
were allocated earlier for the flow rule.
Fixes: f9af90807466 ("common/cnxk: add mcam utility API")
Cc: stable@dpdk.org
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reflect globally enabled Rx and Tx offloads in the queue configuration.
Also fix an issue with LMT data preparation for multi-segment packets.
Fixes: a24af6361e37 ("net/cnxk: add Tx queue setup and release")
Fixes: a86144cd9ded ("net/cnxk: add Rx queue setup and release")
Fixes: 305ca2c4c382 ("net/cnxk: support multi-segment vector Tx")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch adds support for configuring the channel mask that rte_flow
uses when adding flow rules with the inline IPsec action.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Adds capabilities for AES_CBC and HMAC_SHA1 for 9k
security offload.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Sets IP6_UDP_OPT in the NIX RX config to allow an optional
UDP checksum for IPv6 in case of security offload.
Also disables drop_re when inline inbound is enabled.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Adds support for updating the ethertype for mixed IPsec tunnel
versions. Also sets et_overwr for inbound IPsec.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Adds anti-replay support for the cn9k platform using
a SW anti-replay check.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to create and submit CPT instructions on Tx
on CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to receive CPT processed packets on Rx via
second pass on CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to create and submit CPT instructions on Tx
on CN9K SoC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to receive CPT processed packets on Rx for
CN9K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for inline inbound and outbound IPsec for SA create,
destroy and other NIX / CPT LF configurations.
This patch also changes dpdk-devbind.py to list the new inline
device as a misc device.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to configure flow rules with inline IPsec action.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Currently only the NIX0 configuration is set up in the AURA for
backpressure. This patch adds support for NIX1 as well.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch allows applications to add CQ support in the Tx path,
enabling packet completion events on the CQ for requested packets.
Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Restore SQB AURA/POOL limit before destroying SQB to be
able to drain all the buffers from the aura.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
For CPT LF IQ enable, set CPT_LF_CTL[ENA] before setting
CPT_LF_INPROG[EENA] to true.
For CPT LF IQ disable, align the sequence with the HRM.
This patch also aligns the space for instructions in the CPT LF
to ROC_ALIGN so that the whole memory region is cache aligned,
and includes other minor fixes and additions.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Disable CQ drop when inline inbound is enabled. CQ drop
is not supported for second pass IPsec decrypted packets.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add API to support setting up NIX inline inbound and
NIX inline outbound. For inbound, the SA base is set up
on the NIX PFFUNC; for outbound, the required number of
CPT LFs are attached to the NIX PFFUNC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to init and fini inline device with NIX LF,
SSO LF and SSOW LF for inline inbound IPSec in CN10K.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add API to set up NIX inline device IRQs. This registers
error IRQs for NIX, CPT LF and SSOW, and the get-work
interrupt for SSO.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Change NIX debug API and queue API interface for use by
internal NIX inline device initialization.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Rework the interface of SSO internal functions for use by the NIX
inline device's SSO LFs.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add security support to init cn9k fast path SA data
for AES GCM and AES CBC + HMAC SHA1.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reported by "gcc (GCC) 12.0.0 20211003 (experimental)":
../drivers/net/cxgbe/cxgbe_ethdev.c:
In function ‘cxgbe_dev_rx_queue_setup’:
../drivers/net/cxgbe/cxgbe_ethdev.c:682:24:
error: the comparison will always evaluate as ‘true’ for the
address of ‘fl’ will never be NULL [-Werror=address]
682 | if ((&rxq->fl) != NULL)
| ^~
Fix it by removing the useless check.
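The gist of the change, with hypothetical types standing in for the cxgbe code:

struct example_fl { int dummy; };
struct example_rxq { struct example_fl fl; };

static void use_fl(struct example_fl *fl) { (void)fl; }

/* Before: '&rxq->fl' is the address of an embedded member and can never
 * be NULL, so the comparison is always true and gcc 12 warns about it. */
static void setup_before(struct example_rxq *rxq)
{
    if ((&rxq->fl) != NULL)
        use_fl(&rxq->fl);
}

/* After: the useless check is removed. */
static void setup_after(struct example_rxq *rxq)
{
    use_fl(&rxq->fl);
}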
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Adjust the parameter order to match the eth_xstats_get_by_id_t
prototype, making ids the second parameter.
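For reference, a sketch of the expected parameter order (the typedef name here is illustrative; the canonical definition lives in ethdev's driver header):

#include <stdint.h>

struct rte_eth_dev;

/* 'ids' follows the device, then the output array and its size. */
typedef int (*example_xstats_get_by_id_t)(struct rte_eth_dev *dev,
                                          const uint64_t *ids,
                                          uint64_t *values,
                                          unsigned int n);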
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update xstats by IDs callbacks documentation in accordance with
ethdev usage of these callbacks. Document valid combinations of
input arguments to make driver implementation simpler.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Relax the requirements on getting xstats names by IDs. After this
patch, the corresponding driver operation is called only with non-NULL
ids and xstats_names parameters.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Document valid combinations of input arguments in accordance with
current implementation in ethdev.
Fixes: 79c913a42f0e ("ethdev: retrieve xstats by ID")
Cc: stable@dpdk.org
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>