eventdev: introduce crypto adapter enqueue API

When an event from a previous stage needs to be forwarded to a crypto
adapter and the PMD supports an internal event port in the crypto
adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, there is no way for
the rte_event_enqueue_burst() API to tell whether the event is destined
for the crypto adapter or for the eth Tx adapter.

Hence we need a new API, similar to rte_event_eth_tx_adapter_enqueue(),
which can send events to a crypto adapter.

Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it
describes the event source, not the event destination. Also, the event
port designated for the crypto adapter is designed to be used only in
OP_NEW mode.

Hence, in order to support an event PMD which has an internal event port
in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed
via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the
application should use the rte_event_crypto_adapter_enqueue() API to
enqueue events.

When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW
mode), the application can keep using the rte_event_enqueue_burst() API as
before, i.e. retrieve the event port used by the crypto adapter, link its
event queues to that port and enqueue events using rte_event_enqueue_burst().
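
For illustration, a minimal sketch of the resulting application logic
(identifiers such as evdev_id, cdev_id, app_ev_port_id and ev are
placeholders, not taken from this patch):

    uint32_t cap = 0;

    rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap);
    if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
            /* internal event port: use the new enqueue API */
            rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id,
                                             &ev, 1);
    else
            /* no internal port: ev.queue_id must be a queue linked to
             * the crypto adapter's event port
             */
            rte_event_enqueue_burst(evdev_id, app_ev_port_id, &ev, 1);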

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Akhil Goyal 2021-04-15 14:43:49 +05:30 committed by Jerin Jacob
parent 808a17b3c1
commit f96a8ebb27
9 changed files with 150 additions and 27 deletions


@ -30,3 +30,8 @@
[suppress_type]
name = rte_crypto_cipher_xform
has_data_member_inserted_between = {offset_after(key), offset_of(iv)}
; Ignore fields inserted in place of reserved fields of rte_eventdev
[suppress_type]
name = rte_eventdev
has_data_member_inserted_between = {offset_after(attached), end}


@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed.
RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application
can directly submit the crypto operations to the cryptodev.
If not, application retrieves crypto adapter's event port using
rte_event_crypto_adapter_event_port_get() API. Then, links its event
queue to this port and starts enqueuing crypto operations as events
to the eventdev. The adapter then dequeues the events and submits the
crypto operations to the cryptodev. After the crypto completions, the
adapter enqueues events to the event device.
Application can use this mode, when ingress packet ordering is needed.
In this mode, events dequeued from the adapter will be treated as
forwarded events. The application needs to specify the cryptodev ID
and queue pair ID (request information) needed to enqueue a crypto
operation in addition to the event information (response information)
needed to enqueue an event after the crypto operation has completed.
In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto
PMD support an internal event port
(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should
use the ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations
as events to the crypto adapter. If not, the application retrieves the crypto
adapter's event port using the ``rte_event_crypto_adapter_event_port_get()`` API,
links its event queue to this port and starts enqueuing crypto operations as
events to the eventdev using ``rte_event_enqueue_burst()``. The adapter then
dequeues the events and submits the crypto operations to the cryptodev. After
the crypto operation is complete, the adapter enqueues events to the event
device. The application can use this mode when ingress packet ordering is
needed. In this mode, events dequeued from the adapter will be treated as
forwarded events. The application needs to specify the cryptodev ID and queue
pair ID (request information) needed to enqueue a crypto operation in addition
to the event information (response information) needed to enqueue an event
after the crypto operation has completed.
.. _figure_event_crypto_adapter_op_forward:
@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is
expected to fill the ``struct rte_event_crypto_adapter_conf`` structure
passed to it.
For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter
can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API.
Application can use this event port to link with event queue on which it
enqueues events towards the crypto adapter.
In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto
PMD support an internal event port
(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events carrying crypto
operations should be enqueued to the crypto adapter using the
``rte_event_crypto_adapter_enqueue()`` API. If not, the event port created by
the adapter can be retrieved using the ``rte_event_crypto_adapter_event_port_get()``
API. An application can use this event port to link with an event queue, on
which it enqueues events towards the crypto adapter using
``rte_event_enqueue_burst()``.
.. code-block:: c
uint8_t id, evdev, crypto_ev_port_id, app_qid;
uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid;
struct rte_event ev;
uint32_t cap;
int ret;
ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id);
ret = rte_event_queue_setup(evdev, app_qid, NULL);
ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1);
// Fill in event info and update event_ptr with rte_crypto_op
memset(&ev, 0, sizeof(ev));
ev.queue_id = app_qid;
.
.
ev.event_ptr = op;
ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events);
ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap);
if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) {
ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id,
&ev, nb_events);
} else {
ret = rte_event_crypto_adapter_event_port_get(id,
&crypto_ev_port_id);
ret = rte_event_queue_setup(evdev_id, app_qid, NULL);
ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid,
NULL, 1);
ev.queue_id = app_qid;
ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, &ev,
nb_events);
}
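
The request and response information referred to above is left elided in the
snippet. One possible way to provide it for a session based operation,
following the adapter's metadata scheme (a sketch only; ``sess``, ``qp_id``
and ``response_qid`` are illustrative names and not part of this patch):

.. code-block:: c

    union rte_event_crypto_metadata m_data;

    memset(&m_data, 0, sizeof(m_data));

    /* Response event info used by the adapter after crypto completion */
    m_data.response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
    m_data.response_info.queue_id = response_qid;

    /* Request info: target cryptodev and queue pair */
    m_data.request_info.cdev_id = cdev_id;
    m_data.request_info.queue_pair_id = qp_id;

    /* With the SESSION_PRIVATE_DATA capability the metadata lives in the
     * session user data; otherwise it is placed in the rte_crypto_op
     * private data area.
     */
    if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA)
        rte_cryptodev_sym_session_set_user_data(sess, &m_data,
                                                sizeof(m_data));
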
Querying adapter capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -217,6 +217,13 @@ New Features
* Updated the ``ipsec-secgw`` sample application with UDP encapsulation
support for NAT Traversal.
* **Enhanced crypto adapter forward mode.**
* Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to the crypto
adapter if forward mode is supported by the driver.
* Added support for crypto adapter forward mode in the octeontx2 event and crypto
device drivers.
Removed Items
-------------


@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop,
lib.eventdev.crypto.stop)
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue,
lib.eventdev.crypto.enq)


@ -171,6 +171,7 @@ extern "C" {
#include <stdint.h>
#include "rte_eventdev.h"
#include "eventdev_pmd.h"
/**
* Crypto event adapter mode
@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id);
int
rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
/**
* Enqueue a burst of crypto operations as event objects supplied in *rte_event*
* structure on an event crypto adapter designated by its event *dev_id* through
* the event port specified by *port_id*. This function is supported if the
* eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD
* capability flag set.
*
* The *nb_events* parameter is the number of event objects to enqueue which are
* supplied in the *ev* array of *rte_event* structure.
*
* The rte_event_crypto_adapter_enqueue() function returns the number of
* event objects it actually enqueued. A return value equal to *nb_events*
* means that all event objects have been enqueued.
*
* @param dev_id
* The identifier of the device.
* @param port_id
* The identifier of the event port.
* @param ev
* Points to an array of *nb_events* objects of type *rte_event* structure
* which contain the event object enqueue operations to be processed.
* @param nb_events
* The number of event objects to enqueue, typically number of
* rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...)
* available for this port.
*
* @return
* The number of event objects actually enqueued on the event device. The
* return value can be less than the value of the *nb_events* parameter when
the event device's queue is full or if invalid parameters are specified in a
* *rte_event*. If the return value is less than *nb_events*, the remaining
* events at the end of ev[] are not consumed and the caller has to take care
* of them, and rte_errno is set accordingly. Possible errno values include:
* - EINVAL The port ID is invalid, device ID is invalid, an event's queue
* ID is invalid, or an event's sched type doesn't match the
* capabilities of the destination queue.
* - ENOSPC The event port was backpressured and unable to enqueue
* one or more events. This error code is only applicable to
* closed systems.
*/
static inline uint16_t
rte_event_crypto_adapter_enqueue(uint8_t dev_id,
uint8_t port_id,
struct rte_event ev[],
uint16_t nb_events)
{
const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
if (port_id >= dev->data->nb_ports) {
rte_errno = EINVAL;
return 0;
}
#endif
rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev,
nb_events);
return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events);
}
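
/*
 * Usage sketch (illustrative only, not part of this header): drain a burst
 * through the crypto adapter, retrying events that were not accepted due to
 * back-pressure. All identifiers below are placeholders.
 */
static inline void
app_crypto_adapter_enqueue_all(uint8_t dev_id, uint8_t port_id,
			       struct rte_event evs[], uint16_t nb_events)
{
	uint16_t sent = 0;

	while (sent < nb_events) {
		uint16_t n = rte_event_crypto_adapter_enqueue(dev_id, port_id,
				&evs[sent], nb_events - sent);

		sent += n;
		if (n == 0 && rte_errno != ENOSPC)
			break; /* hard error such as EINVAL: stop retrying */
	}
}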
#ifdef __cplusplus
}
#endif


@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port,
return 0;
}
static uint16_t
rte_event_crypto_adapter_enqueue(__rte_unused void *port,
__rte_unused struct rte_event ev[],
__rte_unused uint16_t nb_events)
{
rte_errno = ENOTSUP;
return 0;
}
struct rte_eventdev *
rte_event_pmd_allocate(const char *name, int socket_id)
{
@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)
eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue;
eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue;
if (eventdev->data == NULL) {
struct rte_eventdev_data *eventdev_data = NULL;


@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port,
* burst having same destination Ethernet port & Tx queue.
*/
typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
struct rte_event ev[], uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
#define RTE_EVENTDEV_NAME_MAX_LEN (64)
/**< @internal Max length of name of event PMD */
@ -1434,8 +1438,11 @@ struct rte_eventdev {
uint8_t attached : 1;
/**< Flag indicating the device is attached */
event_crypto_adapter_enqueue ca_enqueue;
/**< Pointer to PMD crypto adapter enqueue function. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
void *reserved_ptrs[4]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
} __rte_cache_aligned;
extern struct rte_eventdev *rte_eventdevs;


@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_u8(flags);
)
RTE_TRACE_POINT_FP(
rte_eventdev_trace_crypto_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
uint16_t nb_events),
rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_u8(port_id);
rte_trace_point_emit_ptr(ev_table);
rte_trace_point_emit_u16(nb_events);
)
RTE_TRACE_POINT_FP(
rte_eventdev_trace_timer_arm_burst,
RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table,


@ -143,6 +143,7 @@ EXPERIMENTAL {
rte_event_vector_pool_create;
rte_event_eth_rx_adapter_vector_limits_get;
rte_event_eth_rx_adapter_queue_event_vector_config;
__rte_eventdev_trace_crypto_adapter_enqueue;
};
INTERNAL {