net/cnxk: support inline security setup for cn10k
Add support for inline inbound and outbound IPsec: SA create, destroy and the
related NIX / CPT LF configuration. This patch also changes dpdk-devbind.py to
list the new inline device as a misc device.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
parent 7eabd6c637
commit 69daa9e502
@@ -34,6 +34,7 @@ Features of the CNXK Ethdev PMD are:

- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- Support Rx interrupt
- Inline IPsec processing support

Prerequisites
-------------
@@ -185,6 +186,74 @@ Runtime Config Options

      -a 0002:02:00.0,tag_as_xor=1

- ``Max SPI for inbound inline IPsec`` (default ``255``)

   Max SPI supported for inbound inline IPsec processing can be specified by
   ``ipsec_in_max_spi`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,ipsec_in_max_spi=128

   With the above configuration, application can enable inline IPsec processing
   for 128 inbound SAs (SPI 0-127).

- ``Max SAs for outbound inline IPsec`` (default ``4096``)

   Max number of SAs supported for outbound inline IPsec processing can be
   specified by ``ipsec_out_max_sa`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,ipsec_out_max_sa=128

   With the above configuration, application can enable inline IPsec processing
   for 128 outbound SAs.

- ``Outbound CPT LF queue size`` (default ``8200``)

   Size of Outbound CPT LF queue in number of descriptors can be specified by
   ``outb_nb_desc`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,outb_nb_desc=16384

   With the above configuration, Outbound CPT LF will be created to accommodate
   at max 16384 descriptors at any given time.

- ``Outbound CPT LF count`` (default ``1``)

   Number of CPT LFs to attach for Outbound processing can be specified by
   ``outb_nb_crypto_qs`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,outb_nb_crypto_qs=2

   With the above configuration, two CPT LFs are setup and distributed among
   all the Tx queues for outbound processing.

- ``Force using inline ipsec device for inbound`` (default ``0``)

   In CN10K, in event mode, the driver can work in two modes:

   1. Inbound encrypted traffic received by probed ipsec inline device while
      plain traffic post decryption is received by ethdev.

   2. Both inbound encrypted traffic and plain traffic post decryption are
      received by ethdev.

   By default event mode works without using inline device, i.e. mode ``2``.
   This behaviour can be changed to pick mode ``1`` by using the
   ``force_inb_inl_dev`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,force_inb_inl_dev=1 -a 0002:03:00.0,force_inb_inl_dev=1

   With the above configuration, inbound encrypted traffic from both the ports
   is received by the ipsec inline device.
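For reference, the individual ``devargs`` above can be combined in a single allow-list entry. An illustrative launch line (the PCI address, core list, and testpmd flags are placeholders, not taken from this patch):

```shell
# Illustrative only: combine inline IPsec devargs on one allow-list entry.
dpdk-testpmd -l 0-3 \
    -a 0002:02:00.0,ipsec_in_max_spi=128,ipsec_out_max_sa=128,outb_nb_desc=16384,outb_nb_crypto_qs=2 \
    -- -i
```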
.. note::
@@ -250,6 +319,39 @@ Example usage in testpmd::

   testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
          spec ab pattern mask ab offset is 4 / end actions queue index 1 / end

Inline device support for CN10K
-------------------------------

CN10K HW provides a misc device, the Inline device, that supports ethernet
devices in providing the following features:

  - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet
    devices to be processed by the single inline IPsec device. This allows a
    single rte security session to accept traffic from multiple ports.

  - Support for event generation on outbound inline IPsec processing errors.

  - Support CN106xx poll mode of operation for inline IPsec inbound processing.

Inline IPsec device is identified by PCI PF vendid:devid ``177D:A0F0`` or
VF ``177D:A0F1``.
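Since this patch adds the inline device IDs to ``dpdk-devbind.py``'s misc device list, the device can be bound like any other. An illustrative bind command (the BDF ``0002:1d:00.0`` follows the example below and is a placeholder; check ``dpdk-devbind.py --status`` for the real one):

```shell
dpdk-devbind.py --bind=vfio-pci 0002:1d:00.0
```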
Runtime Config Options for inline device
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- ``Max SPI for inbound inline IPsec`` (default ``255``)

   Max SPI supported for inbound inline IPsec processing can be specified by
   ``ipsec_in_max_spi`` ``devargs`` parameter.

   For example::

      -a 0002:1d:00.0,ipsec_in_max_spi=128

   With the above configuration, application can enable inline IPsec processing
   for 128 inbound SAs (SPI 0-127) for traffic aggregated on the inline device.


Debugging Options
-----------------
@@ -27,6 +27,7 @@ RSS hash = Y
RSS key update       = Y
RSS reta update      = Y
Inner RSS            = Y
Inline protocol      = Y
Flow control         = Y
Jumbo frame          = Y
Scattered Rx         = Y
@@ -26,6 +26,7 @@ RSS hash = Y
RSS key update       = Y
RSS reta update      = Y
Inner RSS            = Y
Inline protocol      = Y
Flow control         = Y
Jumbo frame          = Y
L3 checksum offload  = Y
@@ -22,6 +22,7 @@ RSS hash = Y
RSS key update       = Y
RSS reta update      = Y
Inner RSS            = Y
Inline protocol      = Y
Jumbo frame          = Y
Scattered Rx         = Y
L3 checksum offload  = Y
@@ -98,6 +98,8 @@ New Features

  * Added rte_flow support for dual VLAN insert and strip actions.
  * Added rte_tm support.
  * Added support for Inline IPsec for CN9K event mode and CN10K
    poll mode and event mode.

* **Updated Marvell cnxk crypto PMD.**
@@ -123,7 +123,9 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
		    uint16_t port_id, const struct rte_event *ev,
		    uint8_t custom_flowid)
{
	struct roc_nix *nix = &cnxk_eth_dev->nix;
	struct roc_nix_rq *rq;
	int rc;

	rq = &cnxk_eth_dev->rqs[rq_id];
	rq->sso_ena = 1;
@@ -140,7 +142,24 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
		rq->tag_mask |= ev->flow_id;
	}

	rc = roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
	if (rc)
		return rc;

	if (rq_id == 0 && roc_nix_inl_inb_is_enabled(nix)) {
		uint32_t sec_tag_const;

		/* HW applies the IPsec tag const to bits 32:8 of the tag
		 * only, i.e. 8-bit left shifted, so program tag_mask >> 8.
		 */
		sec_tag_const = rq->tag_mask >> 8;
		rc = roc_nix_inl_inb_tag_update(nix, sec_tag_const,
						ev->sched_type);
		if (rc)
			plt_err("Failed to set tag conf for ipsec, rc=%d", rc);
	}

	return rc;
}
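The shift in the hunk above exists because the HW uses the programmed constant against bits 32:8 of the final tag, so the low 8 bits of `tag_mask` are out of the constant's reach. A standalone sketch of that relationship (helper names are illustrative, not the driver's):

```c
#include <stdint.h>

/* The HW applies the programmed tag constant to bits 32:8 of the tag,
 * i.e. effectively left shifted by 8 on use, so the value to program
 * is the desired tag_mask shifted right by 8.
 */
uint32_t sec_tag_const(uint32_t tag_mask)
{
	return tag_mask >> 8;
}

/* What the constant effectively contributes back to the tag. */
uint32_t tag_from_const(uint32_t tag_const)
{
	return tag_const << 8;
}
```

Round-tripping through the constant reproduces `tag_mask` minus its low 8 bits, which is why the comment stresses "bits 32:8 only".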
static int

@@ -186,6 +205,7 @@ cnxk_sso_rx_adapter_queue_add(
			rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
					      rxq_sp->qconf.mp->pool_id, true,
					      dev->force_ena_bp);
		cnxk_eth_dev->nb_rxq_sso++;
	}

	if (rc < 0) {
@@ -196,6 +216,14 @@ cnxk_sso_rx_adapter_queue_add(

	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;

	/* Switch to use PF/VF's NIX LF instead of inline device for inbound
	 * when all the RQs are switched to event dev mode. We do this only
	 * when use of the inline device is not forced by devargs.
	 */
	if (!cnxk_eth_dev->inb.force_inl_dev &&
	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);

	return 0;
}
@@ -220,12 +248,18 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
			rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
					      rxq_sp->qconf.mp->pool_id, false,
					      dev->force_ena_bp);
		cnxk_eth_dev->nb_rxq_sso--;
	}

	if (rc < 0)
		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
			eth_dev->data->port_id, rx_queue_id);

	/* Removing RQ from Rx adapter implies need to use
	 * inline device for CQ/Poll mode.
	 */
	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);

	return rc;
}
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
	if (!dev->ptype_disable)
		flags |= NIX_RX_OFFLOAD_PTYPE_F;

	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
		flags |= NIX_RX_OFFLOAD_SECURITY_F;

	return flags;
}
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
		flags |= NIX_TX_OFFLOAD_TSTAMP_F;

	if (conf & DEV_TX_OFFLOAD_SECURITY)
		flags |= NIX_TX_OFFLOAD_SECURITY_F;

	return flags;
}
@@ -181,8 +187,11 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
			 const struct rte_eth_txconf *tx_conf)
{
	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
	struct roc_nix *nix = &dev->nix;
	struct roc_cpt_lf *inl_lf;
	struct cn10k_eth_txq *txq;
	struct roc_nix_sq *sq;
	uint16_t crypto_qid;
	int rc;

	RTE_SET_USED(socket);

@@ -198,11 +207,24 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
	txq = eth_dev->data->tx_queues[qid];
	txq->fc_mem = sq->fc;
	/* Store lmt base in tx queue for easy access */
	txq->lmt_base = nix->lmt_base;
	txq->io_addr = sq->io_addr;
	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;

	/* Fetch CPT LF info for outbound if present */
	if (dev->outb.lf_base) {
		crypto_qid = qid % dev->outb.nb_crypto_qs;
		inl_lf = dev->outb.lf_base + crypto_qid;

		txq->cpt_io_addr = inl_lf->io_addr;
		txq->cpt_fc = inl_lf->fc_addr;
		txq->cpt_desc = inl_lf->nb_desc * 0.7;
		txq->sa_base = (uint64_t)dev->outb.sa_base;
		txq->sa_base |= eth_dev->data->port_id;
		PLT_STATIC_ASSERT(ROC_NIX_INL_SA_BASE_ALIGN == BIT_ULL(16));
	}

	nix_form_default_desc(dev, txq, qid);
	txq->lso_tun_fmt = dev->lso_tun_fmt;
	return 0;
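The SA base stored above is 64 KB aligned (the `PLT_STATIC_ASSERT` checks `ROC_NIX_INL_SA_BASE_ALIGN == BIT_ULL(16)`), so its low 16 bits are free to carry the port id alongside the pointer. A standalone sketch of that tagging scheme (helper names are illustrative, not the driver's):

```c
#include <stdint.h>

/* The SA base is 64 KB aligned, so the low 16 bits can hold a port id. */
#define SA_BASE_ALIGN (1ULL << 16)

/* Tag: OR the port id into the (aligned) base, losslessly. */
uint64_t sa_base_tag(uint64_t sa_base, uint16_t port_id)
{
	return sa_base | port_id;
}

/* Recover the original base by masking off the tag bits. */
uint64_t sa_base_addr(uint64_t tagged)
{
	return tagged & ~(SA_BASE_ALIGN - 1);
}

/* Recover the port id from the tag bits. */
uint16_t sa_base_port(uint64_t tagged)
{
	return (uint16_t)(tagged & (SA_BASE_ALIGN - 1));
}
```

This lets the fast path carry both values in one 64-bit word per Tx queue.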
@@ -215,6 +237,7 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
			 struct rte_mempool *mp)
{
	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
	struct cnxk_eth_rxq_sp *rxq_sp;
	struct cn10k_eth_rxq *rxq;
	struct roc_nix_rq *rq;
	struct roc_nix_cq *cq;

@@ -250,6 +273,15 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
	rxq->data_off = rq->first_skip;
	rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);

	/* Setup security related info */
	if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F) {
		rxq->lmt_base = dev->nix.lmt_base;
		rxq->sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix,
							   dev->inb.inl_dev);
	}
	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
	rxq->aura_handle = rxq_sp->qconf.mp->pool_id;

	/* Lookup mem */
	rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
	return 0;
@@ -500,6 +532,8 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
	nix_eth_dev_ops_override();
	npc_flow_ops_override();

	cn10k_eth_sec_ops_override();

	/* Common probe */
	rc = cnxk_nix_probe(pci_drv, pci_dev);
	if (rc)
@@ -5,6 +5,7 @@
#define __CN10K_ETHDEV_H__

#include <cnxk_ethdev.h>
#include <cnxk_security.h>

struct cn10k_eth_txq {
	uint64_t send_hdr_w0;

@@ -15,6 +16,10 @@ struct cn10k_eth_txq {
	rte_iova_t io_addr;
	uint16_t sqes_per_sqb_log2;
	int16_t nb_sqb_bufs_adj;
	rte_iova_t cpt_io_addr;
	uint64_t sa_base;
	uint64_t *cpt_fc;
	uint16_t cpt_desc;
	uint64_t cmd[4];
	uint64_t lso_tun_fmt;
} __plt_cache_aligned;

@@ -30,12 +35,50 @@ struct cn10k_eth_rxq {
	uint32_t qmask;
	uint32_t available;
	uint16_t data_off;
	uint64_t sa_base;
	uint64_t lmt_base;
	uint64_t aura_handle;
	uint16_t rq;
	struct cnxk_timesync_info *tstamp;
} __plt_cache_aligned;

/* Private data in sw rsvd area of struct roc_ot_ipsec_inb_sa */
struct cn10k_inb_priv_data {
	void *userdata;
	struct cnxk_eth_sec_sess *eth_sec;
};

/* Private data in sw rsvd area of struct roc_ot_ipsec_outb_sa */
struct cn10k_outb_priv_data {
	void *userdata;
	/* Rlen computation data */
	struct cnxk_ipsec_outb_rlens rlens;
	/* Back pointer to eth sec session */
	struct cnxk_eth_sec_sess *eth_sec;
	/* SA index */
	uint32_t sa_idx;
};

struct cn10k_sec_sess_priv {
	union {
		struct {
			uint32_t sa_idx;
			uint8_t inb_sa : 1;
			uint8_t rsvd1 : 2;
			uint8_t roundup_byte : 5;
			uint8_t roundup_len;
			uint16_t partial_len;
		};

		uint64_t u64;
	};
} __rte_packed;

/* Rx and Tx routines */
void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);

/* Security context setup */
void cn10k_eth_sec_ops_override(void);

#endif /* __CN10K_ETHDEV_H__ */
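The `cn10k_sec_sess_priv` union above packs all per-session fast-path fields into a single 64-bit word, which is what gets stored as the session private data. A standalone sketch of the same packing trick (layout copied from the struct; helper names are illustrative, not the driver's):

```c
#include <stdint.h>

/* Mirror of cn10k_sec_sess_priv: the fields all alias one 64-bit word. */
struct sess_priv {
	union {
		struct {
			uint32_t sa_idx;
			uint8_t inb_sa : 1;
			uint8_t rsvd1 : 2;
			uint8_t roundup_byte : 5;
			uint8_t roundup_len;
			uint16_t partial_len;
		};
		uint64_t u64;
	};
};

/* Pack the fields and return the single word the fast path would read. */
uint64_t sess_priv_pack(uint32_t sa_idx, int inb, uint8_t roundup_byte,
			uint8_t roundup_len, uint16_t partial_len)
{
	struct sess_priv p = { .u64 = 0 };

	p.sa_idx = sa_idx;
	p.inb_sa = !!inb;
	p.roundup_byte = roundup_byte; /* must fit in 5 bits, i.e. < 32 */
	p.roundup_len = roundup_len;
	p.partial_len = partial_len;
	return p.u64;
}

/* Unpack one field again from the packed word. */
uint32_t sess_priv_sa_idx(uint64_t u64)
{
	struct sess_priv p = { .u64 = u64 };

	return p.sa_idx;
}
```

Because everything fits in 8 bytes, the driver can stash the whole thing where a single pointer-sized private value is expected.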
drivers/net/cnxk/cn10k_ethdev_sec.c (new file, 426 lines)

@@ -0,0 +1,426 @@
/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(C) 2021 Marvell.
 */

#include <rte_cryptodev.h>
#include <rte_eventdev.h>
#include <rte_security.h>
#include <rte_security_driver.h>

#include <cn10k_ethdev.h>
#include <cnxk_security.h>

static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
	{ /* AES GCM */
		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
		{.sym = {
			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
			{.aead = {
				.algo = RTE_CRYPTO_AEAD_AES_GCM,
				.block_size = 16,
				.key_size = {
					.min = 16,
					.max = 32,
					.increment = 8
				},
				.digest_size = {
					.min = 16,
					.max = 16,
					.increment = 0
				},
				.aad_size = {
					.min = 8,
					.max = 12,
					.increment = 4
				},
				.iv_size = {
					.min = 12,
					.max = 12,
					.increment = 0
				}
			}, }
		}, }
	},
	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
	{ /* IPsec Inline Protocol ESP Tunnel Ingress */
		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
			.options = { 0 }
		},
		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
	},
	{ /* IPsec Inline Protocol ESP Tunnel Egress */
		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
			.options = { 0 }
		},
		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
	},
	{
		.action = RTE_SECURITY_ACTION_TYPE_NONE
	}
};
static void
cn10k_eth_sec_sso_work_cb(uint64_t *gw, void *args)
{
	struct rte_eth_event_ipsec_desc desc;
	struct cn10k_sec_sess_priv sess_priv;
	struct cn10k_outb_priv_data *priv;
	struct roc_ot_ipsec_outb_sa *sa;
	struct cpt_cn10k_res_s *res;
	struct rte_eth_dev *eth_dev;
	struct cnxk_eth_dev *dev;
	uint16_t dlen_adj, rlen;
	struct rte_mbuf *mbuf;
	uintptr_t sa_base;
	uintptr_t nixtx;
	uint8_t port;

	RTE_SET_USED(args);

	switch ((gw[0] >> 28) & 0xF) {
	case RTE_EVENT_TYPE_ETHDEV:
		/* Event from inbound inline dev due to IPSEC packet bad L4 */
		mbuf = (struct rte_mbuf *)(gw[1] - sizeof(struct rte_mbuf));
		plt_nix_dbg("Received mbuf %p from inline dev inbound", mbuf);
		rte_pktmbuf_free(mbuf);
		return;
	case RTE_EVENT_TYPE_CPU:
		/* Check for subtype */
		if (((gw[0] >> 20) & 0xFF) == CNXK_ETHDEV_SEC_OUTB_EV_SUB) {
			/* Event from outbound inline error */
			mbuf = (struct rte_mbuf *)gw[1];
			break;
		}
		/* Fall through */
	default:
		plt_err("Unknown event gw[0] = 0x%016lx, gw[1] = 0x%016lx",
			gw[0], gw[1]);
		return;
	}

	/* Get ethdev port from tag */
	port = gw[0] & 0xFF;
	eth_dev = &rte_eth_devices[port];
	dev = cnxk_eth_pmd_priv(eth_dev);

	sess_priv.u64 = *rte_security_dynfield(mbuf);

	/* Calculate dlen adj */
	dlen_adj = mbuf->pkt_len - mbuf->l2_len;
	rlen = (dlen_adj + sess_priv.roundup_len) +
	       (sess_priv.roundup_byte - 1);
	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
	rlen += sess_priv.partial_len;
	dlen_adj = rlen - dlen_adj;

	/* Find the res area residing on next cacheline after end of data */
	nixtx = rte_pktmbuf_mtod(mbuf, uintptr_t) + mbuf->pkt_len + dlen_adj;
	nixtx += BIT_ULL(7);
	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
	res = (struct cpt_cn10k_res_s *)nixtx;

	plt_nix_dbg("Outbound error, mbuf %p, sa_index %u, compcode %x uc %x",
		    mbuf, sess_priv.sa_idx, res->compcode, res->uc_compcode);

	sa_base = dev->outb.sa_base;
	sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
	priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(sa);

	memset(&desc, 0, sizeof(desc));

	switch (res->uc_compcode) {
	case ROC_IE_OT_UCC_ERR_SA_OVERFLOW:
		desc.subtype = RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW;
		break;
	default:
		plt_warn("Outbound error, mbuf %p, sa_index %u, "
			 "compcode %x uc %x", mbuf, sess_priv.sa_idx,
			 res->compcode, res->uc_compcode);
		desc.subtype = RTE_ETH_EVENT_IPSEC_UNKNOWN;
		break;
	}

	desc.metadata = (uint64_t)priv->userdata;
	rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_IPSEC, &desc);
	rte_pktmbuf_free(mbuf);
}
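The round-up arithmetic in the callback above (pad the payload to the SA's cipher block multiple, then place the result area on the next 128-byte cacheline) can be checked in isolation. A minimal sketch with the same formulas pulled out into helpers (names are illustrative, not the driver's):

```c
#include <stdint.h>

/* Same arithmetic as the callback: round dlen up to the cipher block
 * multiple (roundup_byte, a power of two) after adding roundup_len,
 * then add the fixed per-SA overhead (partial_len).
 */
uint16_t esp_rlen(uint16_t dlen_adj, uint8_t roundup_len,
		  uint8_t roundup_byte, uint16_t partial_len)
{
	uint16_t rlen = (dlen_adj + roundup_len) + (roundup_byte - 1);

	rlen &= (uint16_t)~(uint16_t)(roundup_byte - 1);
	return rlen + partial_len;
}

/* The res area lands on the next 128-byte cacheline after end of data;
 * the (x + 128 - 1) & ~127 idiom keeps already-aligned addresses put.
 */
uintptr_t align_up_128(uintptr_t nixtx)
{
	nixtx += (uintptr_t)1 << 7;
	return (nixtx - 1) & ~(uintptr_t)((1 << 7) - 1);
}
```

For example, a 100-byte payload with a 4-byte block multiple, 2 bytes folded into `roundup_len`, and 30 bytes of fixed overhead pads 102 up to 104 and yields 134.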
static int
cn10k_eth_sec_session_create(void *device,
			     struct rte_security_session_conf *conf,
			     struct rte_security_session *sess,
			     struct rte_mempool *mempool)
{
	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
	struct rte_security_ipsec_xform *ipsec;
	struct cn10k_sec_sess_priv sess_priv;
	struct rte_crypto_sym_xform *crypto;
	struct cnxk_eth_sec_sess *eth_sec;
	bool inbound, inl_dev;
	int rc = 0;

	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
		return -ENOTSUP;

	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
		return -ENOTSUP;

	if (rte_security_dynfield_register() < 0)
		return -ENOTSUP;

	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
		roc_nix_inl_cb_register(cn10k_eth_sec_sso_work_cb, NULL);

	ipsec = &conf->ipsec;
	crypto = conf->crypto_xform;
	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
	inl_dev = !!dev->inb.inl_dev;

	/* Search if a session already exists */
	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
		plt_err("%s SA with SPI %u already in use",
			inbound ? "Inbound" : "Outbound", ipsec->spi);
		return -EEXIST;
	}

	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
		plt_err("Could not allocate security session private data");
		return -ENOMEM;
	}

	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
	sess_priv.u64 = 0;

	/* Acquire lock on inline dev for inbound */
	if (inbound && inl_dev)
		roc_nix_inl_dev_lock();

	if (inbound) {
		struct cn10k_inb_priv_data *inb_priv;
		struct roc_ot_ipsec_inb_sa *inb_sa;
		uintptr_t sa;

		PLT_STATIC_ASSERT(sizeof(struct cn10k_inb_priv_data) <
				  ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD);

		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE */
		sa = roc_nix_inl_inb_sa_get(&dev->nix, inl_dev, ipsec->spi);
		if (!sa && dev->inb.inl_dev) {
			plt_err("Failed to create ingress sa, inline dev "
				"not found or spi not in range");
			rc = -ENOTSUP;
			goto mempool_put;
		} else if (!sa) {
			plt_err("Failed to create ingress sa");
			rc = -EFAULT;
			goto mempool_put;
		}

		inb_sa = (struct roc_ot_ipsec_inb_sa *)sa;

		/* Check if SA is already in use */
		if (inb_sa->w2.s.valid) {
			plt_err("Inbound SA with SPI %u already in use",
				ipsec->spi);
			rc = -EBUSY;
			goto mempool_put;
		}

		memset(inb_sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));

		/* Fill inbound sa params */
		rc = cnxk_ot_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
		if (rc) {
			plt_err("Failed to init inbound sa, rc=%d", rc);
			goto mempool_put;
		}

		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
		/* Back pointer to get eth_sec */
		inb_priv->eth_sec = eth_sec;
		/* Save userdata in inb private area */
		inb_priv->userdata = conf->userdata;

		/* Save SA index/SPI in cookie for now */
		inb_sa->w1.s.cookie = rte_cpu_to_be_32(ipsec->spi);

		/* Prepare session priv */
		sess_priv.inb_sa = 1;
		sess_priv.sa_idx = ipsec->spi;

		/* Pointer from eth_sec -> inb_sa */
		eth_sec->sa = inb_sa;
		eth_sec->sess = sess;
		eth_sec->sa_idx = ipsec->spi;
		eth_sec->spi = ipsec->spi;
		eth_sec->inl_dev = !!dev->inb.inl_dev;
		eth_sec->inb = true;

		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
		dev->inb.nb_sess++;
	} else {
		struct cn10k_outb_priv_data *outb_priv;
		struct roc_ot_ipsec_outb_sa *outb_sa;
		struct cnxk_ipsec_outb_rlens *rlens;
		uint64_t sa_base = dev->outb.sa_base;
		uint32_t sa_idx;

		PLT_STATIC_ASSERT(sizeof(struct cn10k_outb_priv_data) <
				  ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD);

		/* Alloc an sa index */
		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
		if (rc)
			goto mempool_put;

		outb_sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sa_idx);
		outb_priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(outb_sa);
		rlens = &outb_priv->rlens;

		memset(outb_sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));

		/* Fill outbound sa params */
		rc = cnxk_ot_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
		if (rc) {
			plt_err("Failed to init outbound sa, rc=%d", rc);
			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
			goto mempool_put;
		}

		/* Save userdata */
		outb_priv->userdata = conf->userdata;
		outb_priv->sa_idx = sa_idx;
		outb_priv->eth_sec = eth_sec;

		/* Save rlen info */
		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);

		/* Prepare session priv */
		sess_priv.sa_idx = outb_priv->sa_idx;
		sess_priv.roundup_byte = rlens->roundup_byte;
		sess_priv.roundup_len = rlens->roundup_len;
		sess_priv.partial_len = rlens->partial_len;

		/* Pointer from eth_sec -> outb_sa */
		eth_sec->sa = outb_sa;
		eth_sec->sess = sess;
		eth_sec->sa_idx = sa_idx;
		eth_sec->spi = ipsec->spi;

		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
		dev->outb.nb_sess++;
	}

	/* Sync session in context cache */
	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
			    ROC_NIX_INL_SA_OP_RELOAD);

	if (inbound && inl_dev)
		roc_nix_inl_dev_unlock();

	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u inl_dev=%u",
		    inbound ? "inbound" : "outbound", eth_sec->spi,
		    eth_sec->sa_idx, eth_sec->inl_dev);

	/* Update fast path info in priv area. */
	set_sec_session_private_data(sess, (void *)sess_priv.u64);

	return 0;
mempool_put:
	if (inbound && inl_dev)
		roc_nix_inl_dev_unlock();
	rte_mempool_put(mempool, eth_sec);
	return rc;
}
static int
cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
{
	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
	struct roc_ot_ipsec_inb_sa *inb_sa;
	struct roc_ot_ipsec_outb_sa *outb_sa;
	struct cnxk_eth_sec_sess *eth_sec;
	struct rte_mempool *mp;

	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
	if (!eth_sec)
		return -ENOENT;

	if (eth_sec->inl_dev)
		roc_nix_inl_dev_lock();

	if (eth_sec->inb) {
		inb_sa = eth_sec->sa;
		/* Disable SA */
		inb_sa->w2.s.valid = 0;

		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
		dev->inb.nb_sess--;
	} else {
		outb_sa = eth_sec->sa;
		/* Disable SA */
		outb_sa->w2.s.valid = 0;

		/* Release Outbound SA index */
		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
		dev->outb.nb_sess--;
	}

	/* Sync session in context cache */
	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
			    ROC_NIX_INL_SA_OP_RELOAD);

	if (eth_sec->inl_dev)
		roc_nix_inl_dev_unlock();

	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u, inl_dev=%u",
		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
		    eth_sec->sa_idx, eth_sec->inl_dev);

	/* Put eth_sec object back to pool */
	mp = rte_mempool_from_obj(eth_sec);
	set_sec_session_private_data(sess, NULL);
	rte_mempool_put(mp, eth_sec);
	return 0;
}

static const struct rte_security_capability *
cn10k_eth_sec_capabilities_get(void *device __rte_unused)
{
	return cn10k_eth_sec_capabilities;
}
void
cn10k_eth_sec_ops_override(void)
{
	static int init_once;

	if (init_once)
		return;
	init_once = 1;

	/* Update platform specific ops */
	cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
	cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
	cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
}
@@ -16,6 +16,7 @@
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
#define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
#define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)

/* Flags to control cqe_to_mbuf conversion function.
 * Defining it from backwards to denote its been
@@ -13,6 +13,7 @@
#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
#define NIX_TX_OFFLOAD_TSO_F	   BIT(4)
#define NIX_TX_OFFLOAD_TSTAMP_F	   BIT(5)
#define NIX_TX_OFFLOAD_SECURITY_F  BIT(6)

/* Flags to control xmit_prepare function.
 * Defining it from backwards to denote its been
@@ -38,6 +38,7 @@ sources += files(
# CN10K
sources += files(
        'cn10k_ethdev.c',
        'cn10k_ethdev_sec.c',
        'cn10k_rte_flow.c',
        'cn10k_rx.c',
        'cn10k_rx_mseg.c',
@@ -49,6 +49,8 @@
                 'SVendor': None, 'SDevice': None}
cnxk_bphy_cgx = {'Class': '08', 'Vendor': '177d', 'Device': 'a059,a060',
                 'SVendor': None, 'SDevice': None}
cnxk_inl_dev = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f0,a0f1',
                'SVendor': None, 'SDevice': None}

intel_dlb = {'Class': '0b', 'Vendor': '8086', 'Device': '270b,2710,2714',
             'SVendor': None, 'SDevice': None}

@@ -73,9 +75,9 @@
mempool_devices = [cavium_fpa, octeontx2_npa]
compress_devices = [cavium_zip]
regex_devices = [octeontx2_ree]
misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ioat_bdw,
                intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx,
                intel_ntb_icx, octeontx2_dma]

# global dict ethernet devices present. Dictionary indexed by PCI address.
# Each device within this is itself a dictionary of device properties