raw/octeontx2_dma: remove driver

Remove the rawdev-based octeontx2-dma driver as the dependent
common/octeontx2 driver will soon be going away. A new DMA driver will
be added in its place once the rte_dmadev library is in.

Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Radha Mohan Chintakuntla 2021-08-26 03:50:20 -07:00 committed by Thomas Monjalon
parent 8fa5d26839
commit a59745ebcc
12 changed files with 1 addition and 1097 deletions


@@ -1319,12 +1319,6 @@ M: Tomasz Duszynski <tduszynski@marvell.com>
F: doc/guides/rawdevs/cnxk_bphy.rst
F: drivers/raw/cnxk_bphy/
Marvell OCTEON TX2 DMA
M: Radha Mohan Chintakuntla <radhac@marvell.com>
M: Veerasenareddy Burru <vburru@marvell.com>
F: drivers/raw/octeontx2_dma/
F: doc/guides/rawdevs/octeontx2_dma.rst
Marvell OCTEON TX2 EP
M: Radha Mohan Chintakuntla <radhac@marvell.com>
M: Veerasenareddy Burru <vburru@marvell.com>


@@ -152,9 +152,6 @@ This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
#. **Event Device Driver**
See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
#. **DMA Rawdev Driver**
See :doc:`../rawdevs/octeontx2_dma` for DMA driver information.
#. **Crypto Device Driver**
See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.


@@ -17,5 +17,4 @@ application through rawdev API.
ifpga
ioat
ntb
octeontx2_dma
octeontx2_ep


@@ -1,103 +0,0 @@
.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2019 Marvell International Ltd.
OCTEON TX2 DMA Driver
=====================
OCTEON TX2 has an internal DMA unit which can be used by applications to initiate
DMA transactions internally, or from/to the host when OCTEON TX2 operates in PCIe
End Point mode. The DMA PF function supports 8 VFs corresponding to 8 DMA queues.
Each DMA queue is exposed as a VF function when SR-IOV is enabled.
Features
--------
This DMA PMD supports the following three modes of memory transfer:

#. Internal - OCTEON TX2 DRAM to DRAM without core intervention
#. Inbound - Host DRAM to OCTEON TX2 DRAM without host/OCTEON TX2 core involvement
#. Outbound - OCTEON TX2 DRAM to Host DRAM without host/OCTEON TX2 core involvement
Prerequisites and Compilation procedure
---------------------------------------
See :doc:`../platform/octeontx2` for setup information.
Enabling logs
-------------
For enabling logs, use the following EAL parameter:

.. code-block:: console

   ./your_dma_application <EAL args> --log-level=pmd.raw.octeontx2.dpi,<level>

Using ``pmd.raw.octeontx2.dpi`` as the log matching criterion, all DMA PMD logs
lower than the given ``level`` can be enabled.
Initialization
--------------
The number of DMA VFs (queues) enabled can be controlled by setting the sysfs
entry ``sriov_numvfs`` for the corresponding PF driver.

.. code-block:: console

   echo <num_vfs> > /sys/bus/pci/drivers/octeontx2-dpi/0000\:05\:00.0/sriov_numvfs

Once the required VFs are enabled, they need to be bound to the ``vfio-pci``
driver to be accessible from DPDK.
Device Setup
-------------
The OCTEON TX2 DPI DMA HW devices will need to be bound to a
user-space IO driver for use. The ``dpdk-devbind.py`` script
included with DPDK can be used to view the state of the devices and to bind
them to a suitable DPDK-supported kernel driver. When querying the status
of the devices, they will appear under the category of "Misc (rawdev)
devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
used to see the state of those devices alone.
Device Configuration
--------------------
Configuring a DMA rawdev device is done using the ``rte_rawdev_configure()``
API, which takes a mempool as parameter. The PMD uses this pool to submit DMA
commands to HW.

The following code shows how the device is configured:
.. code-block:: c

   struct dpi_rawdev_conf_s conf = {0};
   struct rte_rawdev_info rdev_info = {.dev_private = &conf};

   conf.chunk_pool = (void *)rte_mempool_create_empty(...);
   rte_mempool_set_ops_byname(conf.chunk_pool, rte_mbuf_platform_mempool_ops(), NULL);
   rte_mempool_populate_default(conf.chunk_pool);

   rte_rawdev_configure(dev_id, (rte_rawdev_obj_t)&rdev_info, sizeof(conf));
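
The elided pool creation above can be expanded as in the driver's self test
(``dpi_create_mempool()`` in this commit); the chunk count and element size of
1024 are the values used there and are otherwise illustrative:

.. code-block:: c

   struct rte_mempool *pool;

   pool = rte_mempool_create_empty("dpi_chunk_pool", 1024, 1024,
                                   0, 0, rte_socket_id(), 0);
   rte_mempool_set_ops_byname(pool, rte_mbuf_platform_mempool_ops(), NULL);
   rte_mempool_populate_default(pool);
   conf.chunk_pool = pool;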
Performing Data Transfer
------------------------
To perform data transfer using OCTEON TX2 DMA rawdev devices use standard
``rte_rawdev_enqueue_buffers()`` and ``rte_rawdev_dequeue_buffers()`` APIs.
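
A minimal sketch of an internal-only transfer, modelled on the driver self test
(``dma_test_internal()`` in this commit); ``src``, ``dst``, ``len``, ``dev_id``
and the completion ring ``cring`` are assumed to be set up by the application,
and error handling is omitted:

.. code-block:: c

   struct dpi_dma_req_compl_s *comp = rte_malloc(NULL, sizeof(*comp), 128);
   struct dpi_dma_queue_ctx_s ctx = {0};
   struct dpi_dma_buf_ptr_s cmd = {0};
   union dpi_dma_ptr_u rptr = { {0} };
   union dpi_dma_ptr_u wptr = { {0} };
   struct rte_rawdev_buf buf = { .buf_addr = &cmd };
   struct rte_rawdev_buf *bufs[1] = { &buf };
   struct rte_rawdev_buf *done[1];

   memset(comp, 0, sizeof(*comp));

   /* Source and destination pointers, one of each for this request */
   rptr.s.ptr = (uint64_t)src;
   rptr.s.length = len;
   wptr.s.ptr = (uint64_t)dst;
   wptr.s.length = len;
   cmd.rptr[0] = &rptr;
   cmd.wptr[0] = &wptr;
   cmd.rptr_cnt = 1;
   cmd.wptr_cnt = 1;
   cmd.comp_ptr = comp;

   /* DRAM-to-DRAM transfer, completions tracked in the application ring */
   ctx.xtype = DPI_XTYPE_INTERNAL_ONLY;
   ctx.c_ring = &cring;

   rte_rawdev_enqueue_buffers(dev_id, bufs, 1, &ctx);

   /* Poll the completion ring until the request is reported done */
   while (rte_rawdev_dequeue_buffers(dev_id, done, 1, &ctx) == 0)
       rte_delay_ms(1);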
Self test
---------
On EAL initialization, DMA devices will be probed and populated into the
raw devices. The rawdev ID of the device can be obtained using:

* Invoke ``rte_rawdev_get_dev_id("DPI:x")`` from the application,
  where x is the VF device's bus id specified in "bus:device.func" format.
  Use this index for further rawdev function calls.

* This PMD supports the driver self test; to test the DMA internal mode from a
  test application, one can directly call
  ``rte_rawdev_selftest(rte_rawdev_get_dev_id("DPI:x"))``, as sketched below.
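
For example, assuming the first DPI VF (``0000:05:00.1``, named ``DPI:5:00.1``
by this driver's probe function and used by its self test):

.. code-block:: c

   rte_rawdev_selftest(rte_rawdev_get_dev_id("DPI:5:00.1"));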


@@ -157,7 +157,7 @@ New Features
* :doc:`../nics/octeontx2`
* :doc:`../mempool/octeontx2`
* :doc:`../eventdevs/octeontx2`
* :doc:`../rawdevs/octeontx2_dma`
* ``rawdevs/octeontx2_dma``
* **Introduced the Intel NTB PMD.**


@@ -12,7 +12,6 @@ drivers = [
'ifpga',
'ioat',
'ntb',
'octeontx2_dma',
'octeontx2_ep',
'skeleton',
]


@@ -1,18 +0,0 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(C) 2019 Marvell International Ltd.
#
deps += ['bus_pci', 'common_octeontx2', 'rawdev']
sources = files('otx2_dpi_rawdev.c', 'otx2_dpi_msg.c', 'otx2_dpi_test.c')
extra_flags = []
# This integrated controller runs only on an arm64 machine, remove 32bit warnings
if not dpdk_conf.get('RTE_ARCH_64')
extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
endif
foreach flag: extra_flags
if cc.has_argument(flag)
cflags += flag
endif
endforeach


@@ -1,105 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
#ifndef _DPI_MSG_H_
#define _DPI_MSG_H_
#include <dirent.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include "otx2_dpi_rawdev.h"
/* DPI PF DBDF information macros */
#define DPI_PF_DBDF_DOMAIN 0
#define DPI_PF_DBDF_BUS 5
#define DPI_PF_DBDF_DEVICE 0
#define DPI_PF_DBDF_FUNCTION 0
#define DPI_PF_MBOX_SYSFS_ENTRY "dpi_device_config"
union dpi_mbox_message_u {
uint64_t u[2];
struct dpi_mbox_message_s {
/* VF ID to configure */
uint64_t vfid :4;
/* Command code */
uint64_t cmd :4;
/* Command buffer size in 8-byte words */
uint64_t csize :14;
/* aura of the command buffer */
uint64_t aura :20;
/* SSO PF function */
uint64_t sso_pf_func :16;
/* NPA PF function */
uint64_t npa_pf_func :16;
} s;
};
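/* Send a mailbox message to the DPI PF by writing it to the PF's
 * DPI_PF_MBOX_SYSFS_ENTRY file under its PCI sysfs device directory.
 */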
static inline int
send_msg_to_pf(struct rte_pci_addr *pci, const char *value, int size)
{
char buff[255] = { 0 };
int res, fd;
res = snprintf(buff, sizeof(buff), "%s/" PCI_PRI_FMT "/%s",
rte_pci_get_sysfs_path(), pci->domain,
pci->bus, DPI_PF_DBDF_DEVICE & 0x7,
DPI_PF_DBDF_FUNCTION & 0x7, DPI_PF_MBOX_SYSFS_ENTRY);
if ((res < 0) || ((size_t)res > sizeof(buff)))
return -ERANGE;
fd = open(buff, O_WRONLY);
if (fd < 0)
return -EACCES;
res = write(fd, value, size);
close(fd);
if (res < 0)
return -EACCES;
return 0;
}
int
otx2_dpi_queue_open(struct dpi_vf_s *dpivf, uint32_t size, uint32_t gaura)
{
union dpi_mbox_message_u mbox_msg;
int ret = 0;
/* DPI PF driver expects vfid starts from index 0 */
mbox_msg.s.vfid = dpivf->vf_id;
mbox_msg.s.cmd = DPI_QUEUE_OPEN;
mbox_msg.s.csize = size;
mbox_msg.s.aura = gaura;
mbox_msg.s.sso_pf_func = otx2_sso_pf_func_get();
mbox_msg.s.npa_pf_func = otx2_npa_pf_func_get();
ret = send_msg_to_pf(&dpivf->dev->addr, (const char *)&mbox_msg,
sizeof(mbox_msg));
if (ret < 0)
otx2_dpi_dbg("Failed to send mbox message to dpi pf");
return ret;
}
int
otx2_dpi_queue_close(struct dpi_vf_s *dpivf)
{
union dpi_mbox_message_u mbox_msg;
int ret = 0;
/* DPI PF driver expects vfid starts from index 0 */
mbox_msg.s.vfid = dpivf->vf_id;
mbox_msg.s.cmd = DPI_QUEUE_CLOSE;
ret = send_msg_to_pf(&dpivf->dev->addr, (const char *)&mbox_msg,
sizeof(mbox_msg));
if (ret < 0)
otx2_dpi_dbg("Failed to send mbox message to dpi pf");
return ret;
}
#endif /* _DPI_MSG_H_ */


@@ -1,441 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
#include <string.h>
#include <unistd.h>
#include <rte_bus.h>
#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_pci.h>
#include <rte_rawdev.h>
#include <rte_rawdev_pmd.h>
#include <otx2_common.h>
#include "otx2_dpi_rawdev.h"
static const struct rte_pci_id pci_dma_map[] = {
{
RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
PCI_DEVID_OCTEONTX2_DPI_VF)
},
{
.vendor_id = 0,
},
};
/* Enable/Disable DMA queue */
static inline int
dma_engine_enb_dis(struct dpi_vf_s *dpivf, const bool enb)
{
if (enb)
otx2_write64(0x1, dpivf->vf_bar0 + DPI_VDMA_EN);
else
otx2_write64(0x0, dpivf->vf_bar0 + DPI_VDMA_EN);
return DPI_DMA_QUEUE_SUCCESS;
}
/* Free DMA Queue instruction buffers, and send close notification to PF */
static inline int
dma_queue_finish(struct dpi_vf_s *dpivf)
{
uint32_t timeout = 0, sleep = 1;
uint64_t reg = 0ULL;
/* Wait for SADDR to become idle */
reg = otx2_read64(dpivf->vf_bar0 + DPI_VDMA_SADDR);
while (!(reg & BIT_ULL(DPI_VDMA_SADDR_REQ_IDLE))) {
rte_delay_ms(sleep);
timeout++;
if (timeout >= DPI_QFINISH_TIMEOUT) {
otx2_dpi_dbg("Timeout!!! Closing Forcibly");
break;
}
reg = otx2_read64(dpivf->vf_bar0 + DPI_VDMA_SADDR);
}
if (otx2_dpi_queue_close(dpivf) < 0)
return -EACCES;
rte_mempool_put(dpivf->chunk_pool, dpivf->base_ptr);
dpivf->vf_bar0 = (uintptr_t)NULL;
return DPI_DMA_QUEUE_SUCCESS;
}
/* Write an arbitrary number of command words to a command queue */
static __rte_always_inline enum dpi_dma_queue_result_e
dma_queue_write(struct dpi_vf_s *dpi, uint16_t cmd_count, uint64_t *cmds)
{
if ((cmd_count < 1) || (cmd_count > 64))
return DPI_DMA_QUEUE_INVALID_PARAM;
if (cmds == NULL)
return DPI_DMA_QUEUE_INVALID_PARAM;
/* Room available in the current buffer for the command */
if (dpi->index + cmd_count < dpi->pool_size_m1) {
uint64_t *ptr = dpi->base_ptr;
ptr += dpi->index;
dpi->index += cmd_count;
while (cmd_count--)
*ptr++ = *cmds++;
} else {
void *new_buffer;
uint64_t *ptr;
int count;
/* Allocate new command buffer, return if failed */
if (rte_mempool_get(dpi->chunk_pool, &new_buffer) ||
new_buffer == NULL) {
return DPI_DMA_QUEUE_NO_MEMORY;
}
ptr = dpi->base_ptr;
/* Figure out how many command words will fit in this buffer.
* One location will be needed for the next buffer pointer.
**/
count = dpi->pool_size_m1 - dpi->index;
ptr += dpi->index;
cmd_count -= count;
while (count--)
*ptr++ = *cmds++;
/* Chunk next ptr is 2DWORDs, second DWORD is reserved. */
*ptr++ = (uint64_t)new_buffer;
*ptr = 0;
/* The current buffer is full and has a link to the next buffer.
* Time to write the rest of the commands into the new buffer.
**/
dpi->base_ptr = new_buffer;
dpi->index = cmd_count;
ptr = new_buffer;
while (cmd_count--)
*ptr++ = *cmds++;
/* queue index may be greater than pool size */
if (dpi->index >= dpi->pool_size_m1) {
if (rte_mempool_get(dpi->chunk_pool, &new_buffer) ||
new_buffer == NULL) {
return DPI_DMA_QUEUE_NO_MEMORY;
}
/* Write next buffer address */
*ptr = (uint64_t)new_buffer;
dpi->base_ptr = new_buffer;
dpi->index = 0;
}
}
return DPI_DMA_QUEUE_SUCCESS;
}
/* Submit a DMA command to the DMA queues. */
static __rte_always_inline int
dma_queue_submit(struct rte_rawdev *dev, uint16_t cmd_count, uint64_t *cmds)
{
struct dpi_vf_s *dpivf = dev->dev_private;
enum dpi_dma_queue_result_e result;
result = dma_queue_write(dpivf, cmd_count, cmds);
rte_wmb();
if (likely(result == DPI_DMA_QUEUE_SUCCESS))
otx2_write64((uint64_t)cmd_count,
dpivf->vf_bar0 + DPI_VDMA_DBELL);
return result;
}
/* Enqueue buffers to DMA queue
* returns number of buffers enqueued successfully
*/
static int
otx2_dpi_rawdev_enqueue_bufs(struct rte_rawdev *dev,
struct rte_rawdev_buf **buffers,
unsigned int count, rte_rawdev_obj_t context)
{
struct dpi_dma_queue_ctx_s *ctx = (struct dpi_dma_queue_ctx_s *)context;
struct dpi_dma_buf_ptr_s *cmd;
uint32_t c = 0;
for (c = 0; c < count; c++) {
uint64_t dpi_cmd[DPI_DMA_CMD_SIZE] = {0};
union dpi_dma_instr_hdr_u *hdr;
uint16_t index = 0, i;
hdr = (union dpi_dma_instr_hdr_u *)&dpi_cmd[0];
cmd = (struct dpi_dma_buf_ptr_s *)buffers[c]->buf_addr;
hdr->s.xtype = ctx->xtype & DPI_XTYPE_MASK;
hdr->s.pt = ctx->pt & DPI_HDR_PT_MASK;
/* Request initiated with byte write completion, but completion
* pointer not provided
*/
if ((hdr->s.pt == DPI_HDR_PT_ZBW_CA ||
hdr->s.pt == DPI_HDR_PT_ZBW_NC) && cmd->comp_ptr == NULL)
return c;
cmd->comp_ptr->cdata = DPI_REQ_CDATA;
hdr->s.ptr = (uint64_t)cmd->comp_ptr;
hdr->s.deallocv = ctx->deallocv;
hdr->s.tt = ctx->tt & DPI_W0_TT_MASK;
hdr->s.grp = ctx->grp & DPI_W0_GRP_MASK;
/* If caller provides completion ring details, then only queue
* completion address for later polling.
*/
if (ctx->c_ring) {
ctx->c_ring->compl_data[ctx->c_ring->tail] =
cmd->comp_ptr;
STRM_INC(ctx->c_ring);
}
if (hdr->s.deallocv)
hdr->s.pvfe = 1;
if (hdr->s.pt == DPI_HDR_PT_WQP)
hdr->s.ptr = hdr->s.ptr | DPI_HDR_PT_WQP_STATUSNC;
index += 4;
hdr->s.fport = 0;
hdr->s.lport = 0;
if (ctx->xtype != DPI_XTYPE_INTERNAL_ONLY)
hdr->s.lport = ctx->pem_id;
/* For inbound case, src pointers are last pointers.
* For all other cases, src pointers are first pointers.
*/
if (ctx->xtype == DPI_XTYPE_INBOUND) {
hdr->s.nfst = cmd->wptr_cnt & DPI_MAX_POINTER;
hdr->s.nlst = cmd->rptr_cnt & DPI_MAX_POINTER;
for (i = 0; i < hdr->s.nfst; i++) {
dpi_cmd[index++] = cmd->wptr[i]->u[0];
dpi_cmd[index++] = cmd->wptr[i]->u[1];
}
for (i = 0; i < hdr->s.nlst; i++) {
dpi_cmd[index++] = cmd->rptr[i]->u[0];
dpi_cmd[index++] = cmd->rptr[i]->u[1];
}
} else {
hdr->s.nfst = cmd->rptr_cnt & DPI_MAX_POINTER;
hdr->s.nlst = cmd->wptr_cnt & DPI_MAX_POINTER;
for (i = 0; i < hdr->s.nfst; i++) {
dpi_cmd[index++] = cmd->rptr[i]->u[0];
dpi_cmd[index++] = cmd->rptr[i]->u[1];
}
for (i = 0; i < hdr->s.nlst; i++) {
dpi_cmd[index++] = cmd->wptr[i]->u[0];
dpi_cmd[index++] = cmd->wptr[i]->u[1];
}
}
if (dma_queue_submit(dev, index, dpi_cmd))
return c;
}
return c;
}
/* Check for command completion, returns number of commands completed */
static int
otx2_dpi_rawdev_dequeue_bufs(struct rte_rawdev *dev __rte_unused,
struct rte_rawdev_buf **buffers,
unsigned int count, rte_rawdev_obj_t context)
{
struct dpi_dma_queue_ctx_s *ctx = (struct dpi_dma_queue_ctx_s *)context;
unsigned int i = 0, headp;
/* No completion ring to poll */
if (ctx->c_ring == NULL)
return 0;
headp = ctx->c_ring->head;
for (i = 0; i < count && (headp != ctx->c_ring->tail); i++) {
struct dpi_dma_req_compl_s *comp_ptr =
ctx->c_ring->compl_data[headp];
if (comp_ptr->cdata)
break;
/* Request Completed */
buffers[i] = (void *)comp_ptr;
headp = (headp + 1) % ctx->c_ring->max_cnt;
}
ctx->c_ring->head = headp;
return i;
}
static int
otx2_dpi_rawdev_start(struct rte_rawdev *dev)
{
dev->started = DPI_QUEUE_START;
return DPI_DMA_QUEUE_SUCCESS;
}
static void
otx2_dpi_rawdev_stop(struct rte_rawdev *dev)
{
dev->started = DPI_QUEUE_STOP;
}
static int
otx2_dpi_rawdev_close(struct rte_rawdev *dev)
{
dma_engine_enb_dis(dev->dev_private, false);
dma_queue_finish(dev->dev_private);
return DPI_DMA_QUEUE_SUCCESS;
}
static int
otx2_dpi_rawdev_reset(struct rte_rawdev *dev)
{
return dev ? DPI_QUEUE_STOP : DPI_QUEUE_START;
}
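/* Configure a queue: reserve the first command chunk from the application's
 * mempool, program its address into DPI_VDMA_SADDR, open the queue through
 * the PF mailbox with the pool's aura, and enable the DMA engine.
 */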
static int
otx2_dpi_rawdev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config,
size_t config_size)
{
struct dpi_rawdev_conf_s *conf = config;
struct dpi_vf_s *dpivf = NULL;
void *buf = NULL;
uintptr_t pool;
uint32_t gaura;
if (conf == NULL || config_size != sizeof(*conf)) {
otx2_dpi_dbg("NULL or invalid configuration");
return -EINVAL;
}
dpivf = (struct dpi_vf_s *)dev->dev_private;
dpivf->chunk_pool = conf->chunk_pool;
if (rte_mempool_get(conf->chunk_pool, &buf) || (buf == NULL)) {
otx2_err("Unable allocate buffer");
return -ENODEV;
}
dpivf->base_ptr = buf;
otx2_write64(0x0, dpivf->vf_bar0 + DPI_VDMA_EN);
dpivf->pool_size_m1 = (DPI_CHUNK_SIZE >> 3) - 2;
pool = (uintptr_t)((struct rte_mempool *)conf->chunk_pool)->pool_id;
gaura = npa_lf_aura_handle_to_aura(pool);
otx2_write64(0, dpivf->vf_bar0 + DPI_VDMA_REQQ_CTL);
otx2_write64(((uint64_t)buf >> 7) << 7,
dpivf->vf_bar0 + DPI_VDMA_SADDR);
if (otx2_dpi_queue_open(dpivf, DPI_CHUNK_SIZE, gaura) < 0) {
otx2_err("Unable to open DPI VF %d", dpivf->vf_id);
rte_mempool_put(conf->chunk_pool, buf);
return -EACCES;
}
dma_engine_enb_dis(dpivf, true);
return DPI_DMA_QUEUE_SUCCESS;
}
static const struct rte_rawdev_ops dpi_rawdev_ops = {
.dev_configure = otx2_dpi_rawdev_configure,
.dev_start = otx2_dpi_rawdev_start,
.dev_stop = otx2_dpi_rawdev_stop,
.dev_close = otx2_dpi_rawdev_close,
.dev_reset = otx2_dpi_rawdev_reset,
.enqueue_bufs = otx2_dpi_rawdev_enqueue_bufs,
.dequeue_bufs = otx2_dpi_rawdev_dequeue_bufs,
.dev_selftest = test_otx2_dma_rawdev,
};
static int
otx2_dpi_rawdev_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
{
char name[RTE_RAWDEV_NAME_MAX_LEN];
struct dpi_vf_s *dpivf = NULL;
struct rte_rawdev *rawdev;
uint16_t vf_id;
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return DPI_DMA_QUEUE_SUCCESS;
if (pci_dev->mem_resource[0].addr == NULL) {
otx2_dpi_dbg("Empty bars %p %p", pci_dev->mem_resource[0].addr,
pci_dev->mem_resource[2].addr);
return -ENODEV;
}
memset(name, 0, sizeof(name));
snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "DPI:%x:%02x.%x",
pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
/* Allocate device structure */
rawdev = rte_rawdev_pmd_allocate(name, sizeof(struct dpi_vf_s),
rte_socket_id());
if (rawdev == NULL) {
otx2_err("Rawdev allocation failed");
return -EINVAL;
}
rawdev->dev_ops = &dpi_rawdev_ops;
rawdev->device = &pci_dev->device;
rawdev->driver_name = pci_dev->driver->driver.name;
dpivf = rawdev->dev_private;
if (dpivf->state != DPI_QUEUE_STOP) {
otx2_dpi_dbg("Device already started!!!");
return -ENODEV;
}
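/* Derive the VF index from the PCI device/function numbers; the DPI PF owns
 * function 0, so subtract one so that VF ids start at index 0.
 */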
vf_id = ((pci_dev->addr.devid & 0x1F) << 3) |
(pci_dev->addr.function & 0x7);
vf_id -= 1;
dpivf->dev = pci_dev;
dpivf->state = DPI_QUEUE_START;
dpivf->vf_id = vf_id;
dpivf->vf_bar0 = (uintptr_t)pci_dev->mem_resource[0].addr;
dpivf->vf_bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
return DPI_DMA_QUEUE_SUCCESS;
}
static int
otx2_dpi_rawdev_remove(struct rte_pci_device *pci_dev)
{
char name[RTE_RAWDEV_NAME_MAX_LEN];
struct rte_rawdev *rawdev;
struct dpi_vf_s *dpivf;
if (pci_dev == NULL) {
otx2_dpi_dbg("Invalid pci_dev of the device!");
return -EINVAL;
}
memset(name, 0, sizeof(name));
snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "DPI:%x:%02x.%x",
pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
rawdev = rte_rawdev_pmd_get_named_dev(name);
if (rawdev == NULL) {
otx2_dpi_dbg("Invalid device name (%s)", name);
return -EINVAL;
}
dpivf = (struct dpi_vf_s *)rawdev->dev_private;
dma_engine_enb_dis(dpivf, false);
dma_queue_finish(dpivf);
/* rte_rawdev_close is called by pmd_release */
return rte_rawdev_pmd_release(rawdev);
}
static struct rte_pci_driver rte_dpi_rawdev_pmd = {
.id_table = pci_dma_map,
.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
.probe = otx2_dpi_rawdev_probe,
.remove = otx2_dpi_rawdev_remove,
};
RTE_PMD_REGISTER_PCI(dpi_rawdev_pci_driver, rte_dpi_rawdev_pmd);
RTE_PMD_REGISTER_PCI_TABLE(dpi_rawdev_pci_driver, pci_dma_map);
RTE_PMD_REGISTER_KMOD_DEP(dpi_rawdev_pci_driver, "vfio-pci");


@@ -1,197 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
#ifndef _DPI_RAWDEV_H_
#define _DPI_RAWDEV_H_
#include "otx2_common.h"
#include "otx2_mempool.h"
#define DPI_QUEUE_OPEN 0x1
#define DPI_QUEUE_CLOSE 0x2
/* DPI VF register offsets from VF_BAR0 */
#define DPI_VDMA_EN (0x0)
#define DPI_VDMA_REQQ_CTL (0x8)
#define DPI_VDMA_DBELL (0x10)
#define DPI_VDMA_SADDR (0x18)
#define DPI_VDMA_COUNTS (0x20)
#define DPI_VDMA_NADDR (0x28)
#define DPI_VDMA_IWBUSY (0x30)
#define DPI_VDMA_CNT (0x38)
#define DPI_VF_INT (0x100)
#define DPI_VF_INT_W1S (0x108)
#define DPI_VF_INT_ENA_W1C (0x110)
#define DPI_VF_INT_ENA_W1S (0x118)
#define DPI_MAX_VFS 8
#define DPI_DMA_CMD_SIZE 64
#define DPI_CHUNK_SIZE 1024
#define DPI_QUEUE_STOP 0x0
#define DPI_QUEUE_START 0x1
#define DPI_VDMA_SADDR_REQ_IDLE 63
#define DPI_MAX_POINTER 15
#define STRM_INC(s) ((s)->tail = ((s)->tail + 1) % (s)->max_cnt)
#define DPI_QFINISH_TIMEOUT (10 * 1000)
/* DPI Transfer Type, pointer type in DPI_DMA_INSTR_HDR_S[XTYPE] */
#define DPI_XTYPE_OUTBOUND (0)
#define DPI_XTYPE_INBOUND (1)
#define DPI_XTYPE_INTERNAL_ONLY (2)
#define DPI_XTYPE_EXTERNAL_ONLY (3)
#define DPI_XTYPE_MASK 0x3
#define DPI_HDR_PT_ZBW_CA 0x0
#define DPI_HDR_PT_ZBW_NC 0x1
#define DPI_HDR_PT_WQP 0x2
#define DPI_HDR_PT_WQP_NOSTATUS 0x0
#define DPI_HDR_PT_WQP_STATUSCA 0x1
#define DPI_HDR_PT_WQP_STATUSNC 0x3
#define DPI_HDR_PT_CNT 0x3
#define DPI_HDR_PT_MASK 0x3
#define DPI_W0_TT_MASK 0x3
#define DPI_W0_GRP_MASK 0x3FF
/* Set Completion data to 0xFF when request submitted,
* upon successful request completion engine reset to completion status
*/
#define DPI_REQ_CDATA 0xFF
struct dpi_vf_s {
struct rte_pci_device *dev;
uint8_t state;
uint16_t vf_id;
uint8_t domain;
uintptr_t vf_bar0;
uintptr_t vf_bar2;
uint16_t pool_size_m1;
uint16_t index;
uint64_t *base_ptr;
void *chunk_pool;
struct otx2_mbox *mbox;
};
struct dpi_rawdev_conf_s {
void *chunk_pool;
};
enum dpi_dma_queue_result_e {
DPI_DMA_QUEUE_SUCCESS = 0,
DPI_DMA_QUEUE_NO_MEMORY = -1,
DPI_DMA_QUEUE_INVALID_PARAM = -2,
};
struct dpi_dma_req_compl_s {
uint64_t cdata;
void (*compl_cb)(void *dev, void *arg);
void *cb_data;
};
union dpi_dma_ptr_u {
uint64_t u[2];
struct dpi_dma_s {
uint64_t length:16;
uint64_t reserved:44;
uint64_t bed:1; /* Big-Endian */
uint64_t alloc_l2:1;
uint64_t full_write:1;
uint64_t invert:1;
uint64_t ptr;
} s;
};
struct dpi_dma_buf_ptr_s {
union dpi_dma_ptr_u *rptr[DPI_MAX_POINTER]; /* Read From pointer list */
union dpi_dma_ptr_u *wptr[DPI_MAX_POINTER]; /* Write to pointer list */
uint8_t rptr_cnt;
uint8_t wptr_cnt;
struct dpi_dma_req_compl_s *comp_ptr;
};
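/* Application-owned completion ring: enqueue stores each request's completion
 * pointer at the tail, dequeue polls entries starting from the head.
 */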
struct dpi_cring_data_s {
struct dpi_dma_req_compl_s **compl_data;
uint16_t max_cnt;
uint16_t head;
uint16_t tail;
};
struct dpi_dma_queue_ctx_s {
uint16_t xtype:2;
/* Completion pointer type */
uint16_t pt:2;
/* Completion updated using WQE */
uint16_t tt:2;
uint16_t grp:10;
uint32_t tag;
/* Valid only for Outbound only mode */
uint16_t aura:12;
uint16_t csel:1;
uint16_t ca:1;
uint16_t fi:1;
uint16_t ii:1;
uint16_t fl:1;
uint16_t pvfe:1;
uint16_t dealloce:1;
uint16_t req_type:2;
uint16_t use_lock:1;
uint16_t deallocv;
uint16_t pem_id;
struct dpi_cring_data_s *c_ring;
};
/* DPI DMA Instruction Header Format */
union dpi_dma_instr_hdr_u {
uint64_t u[4];
struct dpi_dma_instr_hdr_s_s {
uint64_t tag:32;
uint64_t tt:2;
uint64_t grp:10;
uint64_t reserved_44_47:4;
uint64_t nfst:4;
uint64_t reserved_52_53:2;
uint64_t nlst:4;
uint64_t reserved_58_63:6;
/* Word 0 - End */
uint64_t aura:12;
uint64_t reserved_76_79:4;
uint64_t deallocv:16;
uint64_t dealloce:1;
uint64_t pvfe:1;
uint64_t reserved_98_99:2;
uint64_t pt:2;
uint64_t reserved_102_103:2;
uint64_t fl:1;
uint64_t ii:1;
uint64_t fi:1;
uint64_t ca:1;
uint64_t csel:1;
uint64_t reserved_109_111:3;
uint64_t xtype:2;
uint64_t reserved_114_119:6;
uint64_t fport:2;
uint64_t reserved_122_123:2;
uint64_t lport:2;
uint64_t reserved_126_127:2;
/* Word 1 - End */
uint64_t ptr:64;
/* Word 2 - End */
uint64_t reserved_192_255:64;
/* Word 3 - End */
} s;
};
int otx2_dpi_queue_open(struct dpi_vf_s *dpivf, uint32_t size, uint32_t gaura);
int otx2_dpi_queue_close(struct dpi_vf_s *dpivf);
int test_otx2_dma_rawdev(uint16_t val);
#endif /* _DPI_RAWDEV_H_ */


@@ -1,218 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <rte_common.h>
#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_memcpy.h>
#include <rte_memory.h>
#include <rte_mempool.h>
#include <rte_per_lcore.h>
#include <rte_rawdev.h>
#include "otx2_dpi_rawdev.h"
static struct dpi_cring_data_s cring;
static uint8_t
buffer_fill(uint8_t *addr, int len, uint8_t val)
{
int j = 0;
memset(addr, 0, len);
for (j = 0; j < len; j++)
*(addr + j) = val++;
return val;
}
static int
validate_buffer(uint8_t *saddr, uint8_t *daddr, int len)
{
int j = 0, ret = 0;
for (j = 0; j < len; j++) {
if (*(saddr + j) != *(daddr + j)) {
otx2_dpi_dbg("FAIL: Data Integrity failed");
otx2_dpi_dbg("index: %d, Expected: 0x%x, Actual: 0x%x",
j, *(saddr + j), *(daddr + j));
ret = -1;
break;
}
}
return ret;
}
static inline int
dma_test_internal(int dma_port, int buf_size)
{
struct dpi_dma_req_compl_s *comp_data;
struct dpi_dma_queue_ctx_s ctx = {0};
struct rte_rawdev_buf buf = {0};
struct rte_rawdev_buf *d_buf[1];
struct rte_rawdev_buf *bufp[1];
struct dpi_dma_buf_ptr_s cmd;
union dpi_dma_ptr_u rptr = { {0} };
union dpi_dma_ptr_u wptr = { {0} };
uint8_t *fptr, *lptr;
int ret;
fptr = (uint8_t *)rte_malloc("dummy", buf_size, 128);
lptr = (uint8_t *)rte_malloc("dummy", buf_size, 128);
comp_data = rte_malloc("dummy", buf_size, 128);
if (fptr == NULL || lptr == NULL || comp_data == NULL) {
otx2_dpi_dbg("Unable to allocate internal memory");
return -ENOMEM;
}
buffer_fill(fptr, buf_size, 0);
memset(&cmd, 0, sizeof(struct dpi_dma_buf_ptr_s));
memset(lptr, 0, buf_size);
memset(comp_data, 0, buf_size);
rptr.s.ptr = (uint64_t)fptr;
rptr.s.length = buf_size;
wptr.s.ptr = (uint64_t)lptr;
wptr.s.length = buf_size;
cmd.rptr[0] = &rptr;
cmd.wptr[0] = &wptr;
cmd.rptr_cnt = 1;
cmd.wptr_cnt = 1;
cmd.comp_ptr = comp_data;
buf.buf_addr = (void *)&cmd;
bufp[0] = &buf;
ctx.xtype = DPI_XTYPE_INTERNAL_ONLY;
ctx.pt = 0;
ctx.c_ring = &cring;
ret = rte_rawdev_enqueue_buffers(dma_port,
(struct rte_rawdev_buf **)bufp, 1,
&ctx);
if (ret < 0) {
otx2_dpi_dbg("Enqueue request failed");
return 0;
}
/* Wait and dequeue completion */
do {
sleep(1);
ret = rte_rawdev_dequeue_buffers(dma_port, &d_buf[0], 1, &ctx);
if (ret)
break;
otx2_dpi_dbg("Dequeue request not completed");
} while (1);
if (validate_buffer(fptr, lptr, buf_size)) {
otx2_dpi_dbg("DMA transfer failed\n");
return -EAGAIN;
}
otx2_dpi_dbg("Internal Only DMA transfer successfully completed");
if (lptr)
rte_free(lptr);
if (fptr)
rte_free(fptr);
if (comp_data)
rte_free(comp_data);
return 0;
}
static void *
dpi_create_mempool(void)
{
void *chunk_pool = NULL;
char pool_name[25];
int ret;
snprintf(pool_name, sizeof(pool_name), "dpi_chunk_pool");
chunk_pool = (void *)rte_mempool_create_empty(pool_name, 1024, 1024,
0, 0, rte_socket_id(), 0);
if (chunk_pool == NULL) {
otx2_dpi_dbg("Unable to create memory pool.");
return NULL;
}
ret = rte_mempool_set_ops_byname(chunk_pool,
rte_mbuf_platform_mempool_ops(), NULL);
if (ret < 0) {
otx2_dpi_dbg("Unable to set pool ops");
rte_mempool_free(chunk_pool);
return NULL;
}
ret = rte_mempool_populate_default(chunk_pool);
if (ret < 0) {
otx2_dpi_dbg("Unable to populate pool");
return NULL;
}
return chunk_pool;
}
int
test_otx2_dma_rawdev(uint16_t val)
{
struct rte_rawdev_info rdev_info = {0};
struct dpi_rawdev_conf_s conf = {0};
int ret, i, size = 1024;
int nb_ports;
RTE_SET_USED(val);
nb_ports = rte_rawdev_count();
if (nb_ports == 0) {
otx2_dpi_dbg("No Rawdev ports - bye");
return -ENODEV;
}
i = rte_rawdev_get_dev_id("DPI:5:00.1");
/* Configure rawdev ports */
conf.chunk_pool = dpi_create_mempool();
rdev_info.dev_private = &conf;
ret = rte_rawdev_configure(i, (rte_rawdev_obj_t)&rdev_info,
sizeof(conf));
if (ret) {
otx2_dpi_dbg("Unable to configure DPIVF %d", i);
return -ENODEV;
}
otx2_dpi_dbg("rawdev %d configured successfully", i);
/* Each stream allocate its own completion ring data, store it in
* application context. Each stream needs to use same application
* context for enqueue/dequeue.
*/
cring.compl_data = rte_malloc("dummy", sizeof(void *) * 1024, 128);
if (!cring.compl_data) {
otx2_dpi_dbg("Completion allocation failed");
return -ENOMEM;
}
cring.max_cnt = 1024;
cring.head = 0;
cring.tail = 0;
ret = dma_test_internal(i, size);
if (ret)
otx2_dpi_dbg("DMA transfer failed for queue %d", i);
if (rte_rawdev_close(i))
otx2_dpi_dbg("Dev close failed for port %d", i);
if (conf.chunk_pool)
rte_mempool_free(conf.chunk_pool);
return ret;
}


@@ -1,3 +0,0 @@
DPDK_22 {
local: *;
};