mempool/cnxk: add build infra and doc

Add the meson based build infrastructure for Marvell
CNXK mempool driver along with stub implementations
for mempool device probe.

Also add Marvell CNXK mempool base documentation.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Author: Ashwin Sekhar T K <asekhar@marvell.com>
Date: 2021-04-08 15:20:39 +05:30
Committer: Jerin Jacob
Parent: 51dc6a80f8
Commit: 2da3159197

9 changed files with 164 additions and 1 deletion

MAINTAINERS
@@ -500,6 +500,13 @@ M: Artem V. Andreev <artem.andreev@oktetlabs.ru>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: drivers/mempool/bucket/

Marvell cnxk
M: Ashwin Sekhar T K <asekhar@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst

Marvell OCTEON TX2
M: Jerin Jacob <jerinj@marvell.com>
M: Nithin Dabilpuram <ndabilpuram@marvell.com>

doc/guides/mempool/cnxk.rst (new file)
@@ -0,0 +1,55 @@
.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(C) 2021 Marvell.

cnxk NPA Mempool Driver
=======================

The cnxk NPA PMD (**librte_mempool_cnxk**) provides mempool driver support for
the integrated mempool device found in the **Marvell OCTEON CN9K/CN10K** SoC
family.

More information about the cnxk SoC can be found at the `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
Features
--------

cnxk NPA PMD supports:

- Up to 128 NPA LFs
- 1M pools per LF
- HW mempool manager (see the usage sketch below)
- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path
- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path
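
The HW mempool manager offloads pool enqueue/dequeue to the NPA block. As a
usage sketch, the snippet below binds a pool to a cnxk ops name explicitly.
This is a minimal sketch: the ops name ``cn10k_mempool_ops`` is an assumption
for illustration, since this patch only adds stubs and registers no ops yet.

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_mempool.h>

   static struct rte_mempool *
   npa_backed_pool(void)
   {
           struct rte_mempool *mp;

           /* 8192 objects of 128 bytes each, per-lcore cache of 256. */
           mp = rte_mempool_create_empty("npa_pool", 8192, 128, 256, 0,
                                         rte_socket_id(), 0);
           if (mp == NULL)
                   return NULL;

           /* "cn10k_mempool_ops" is an assumed ops name; the stub driver
            * in this patch does not register any mempool ops yet. */
           if (rte_mempool_set_ops_byname(mp, "cn10k_mempool_ops", NULL) != 0 ||
               rte_mempool_populate_default(mp) < 0) {
                   rte_mempool_free(mp);
                   return NULL;
           }

           return mp;
   }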
Prerequisites and Compilation procedure
---------------------------------------

See :doc:`../platform/cnxk` for setup information.

Pre-Installation Configuration
------------------------------

Debugging Options
~~~~~~~~~~~~~~~~~
.. _table_cnxk_mempool_debug_options:

.. table:: cnxk mempool debug options

   +---+------------+-------------------------------------------------------+
   | # | Component  | EAL log command                                       |
   +===+============+=======================================================+
   | 1 | NPA        | --log-level='pmd\.mempool.cnxk,8'                     |
   +---+------------+-------------------------------------------------------+
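
The same verbosity can also be selected from application code; a small sketch
using the stock ``rte_log_set_level_pattern()`` API (log level 8 corresponds
to ``RTE_LOG_DEBUG``):

.. code-block:: c

   #include <rte_log.h>

   static void
   enable_npa_debug_logs(void)
   {
           /* Runtime equivalent of --log-level='pmd\.mempool.cnxk,8'. */
           rte_log_set_level_pattern("pmd.mempool.cnxk", RTE_LOG_DEBUG);
   }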
Standalone mempool device
~~~~~~~~~~~~~~~~~~~~~~~~~

The ``usertools/dpdk-devbind.py`` script enumerates all the mempool devices
available in the system. To spare the end user from binding a mempool device
before using an ethdev and/or eventdev device, the respective driver
configures an NPA LF and attaches it to the first probed ethdev or eventdev
device. To run a mempool as a standalone device (without ethdev or eventdev),
the end user needs to bind a mempool device explicitly using
``usertools/dpdk-devbind.py``.
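
For example, assuming a hypothetical NPA VF at PCI address ``0002:02:00.0``,
it can be bound with ``usertools/dpdk-devbind.py -b vfio-pci 0002:02:00.0``
before launching the application.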

doc/guides/mempool/index.rst
@@ -11,6 +11,7 @@ application through the mempool API.
    :maxdepth: 2
    :numbered:

    cnxk
    octeontx
    octeontx2
    ring

doc/guides/platform/cnxk.rst
@@ -142,6 +142,9 @@ HW Offload Drivers
This section lists dataplane H/W block(s) available in cnxk SoC.

#. **Mempool Driver**
   See :doc:`../mempool/cnxk` for NPA mempool driver information.

Procedure to Setup Platform
---------------------------

doc/guides/rel_notes/release_21_05.rst
@@ -63,6 +63,8 @@ New Features
* Added common/cnxk driver consisting of a common API to be used by
  net, crypto and event PMDs.

* Added mempool/cnxk driver which provides support for the integrated
  mempool device.

* **Enhanced ethdev representor syntax.**

drivers/mempool/cnxk/cnxk_mempool.c (new file)
@@ -0,0 +1,78 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2021 Marvell.
*/

#include <rte_atomic.h>
#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_devargs.h>
#include <rte_eal.h>
#include <rte_io.h>
#include <rte_kvargs.h>
#include <rte_malloc.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>

#include "roc_api.h"

/* Stub remove callback; actual NPA teardown comes in later patches. */
static int
npa_remove(struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_dev);

	return 0;
}

/* Stub probe callback; actual NPA LF setup comes in later patches. */
static int
npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_drv);
	RTE_SET_USED(pci_dev);

	return 0;
}

/* PCI IDs of the NPA PF and VF functions handled by this driver. */
static const struct rte_pci_id npa_pci_map[] = {
{
.class_id = RTE_CLASS_ANY_ID,
.vendor_id = PCI_VENDOR_ID_CAVIUM,
.device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
},
{
.class_id = RTE_CLASS_ANY_ID,
.vendor_id = PCI_VENDOR_ID_CAVIUM,
.device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
},
{
.class_id = RTE_CLASS_ANY_ID,
.vendor_id = PCI_VENDOR_ID_CAVIUM,
.device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
},
{
.class_id = RTE_CLASS_ANY_ID,
.vendor_id = PCI_VENDOR_ID_CAVIUM,
.device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
},
{
.vendor_id = 0,
},
};

static struct rte_pci_driver npa_pci = {
	.id_table = npa_pci_map,
	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
	.probe = npa_probe,
	.remove = npa_remove,
};

RTE_PMD_REGISTER_PCI(mempool_cnxk, npa_pci);
RTE_PMD_REGISTER_PCI_TABLE(mempool_cnxk, npa_pci_map);
RTE_PMD_REGISTER_KMOD_DEP(mempool_cnxk, "vfio-pci");
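
For context, a later patch in this series is expected to replace the probe
stub with real NPA LF initialization. Below is a minimal sketch of what that
could look like, assuming the common/cnxk ROC layer exposes ``struct roc_npa``
and ``roc_npa_dev_init()`` (these names are assumptions here, not part of this
patch):

/* Hypothetical follow-up sketch, not part of this patch. */
static int
npa_dev_init(struct rte_pci_device *pci_dev)
{
	struct roc_npa *dev;
	int rc;

	/* Only the primary process initializes the device. */
	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
		return 0;

	dev = rte_zmalloc("cnxk_npa", sizeof(*dev), 0);
	if (dev == NULL)
		return -ENOMEM;

	/* Assumption: plt_pci_device is the ROC alias of rte_pci_device. */
	dev->pci_dev = (struct plt_pci_device *)pci_dev;

	rc = roc_npa_dev_init(dev);
	if (rc)
		rte_free(dev);

	return rc;
}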

drivers/mempool/cnxk/meson.build (new file)
@@ -0,0 +1,13 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(C) 2021 Marvell.
#

if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
        build = false
        reason = 'only supported on 64-bit Linux'
        subdir_done()
endif

sources = files('cnxk_mempool.c')

deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']

drivers/mempool/cnxk/version.map (new file)
@@ -0,0 +1,3 @@
INTERNAL {
	local: *;
};
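
The ``local: *`` entry inside the ``INTERNAL`` version block keeps every
symbol private to the driver; the stub exports no public API yet, so no
symbols are listed.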

drivers/mempool/meson.build
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation
-drivers = ['bucket', 'dpaa', 'dpaa2', 'octeontx', 'octeontx2', 'ring', 'stack']
+drivers = ['bucket', 'cnxk', 'dpaa', 'dpaa2', 'octeontx', 'octeontx2', 'ring',
+        'stack']
std_deps = ['mempool']