ethdev: introduce Tx queue offloads API

Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads, and the PMD reports the capability for each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the
device configuration and the queue configuration. To enable a
per-queue offload, it is enough to set it on the queue configuration,
as sketched below.
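
A minimal sketch of the intended flow, assuming a device that reports
DEV_TX_OFFLOAD_VLAN_INSERT as a per-port offload and
DEV_TX_OFFLOAD_UDP_CKSUM as a per-queue one (port_id, queue and
descriptor counts are placeholders; error checks omitted):

    /* Assumes <rte_ethdev.h> and <rte_lcore.h>. */
    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };
    struct rte_eth_txconf txconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    /* Per-port offload: set it on the device configuration... */
    if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT)
        port_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);

    txconf = dev_info.default_txconf;
    txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE; /* opt in to the new API */
    /* ...and repeat it on the queue configuration. */
    txconf.offloads = port_conf.txmode.offloads;
    /* Per-queue offload: the queue configuration alone is enough. */
    if (dev_info.tx_queue_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM)
        txconf.offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
    rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txconf);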

In addition, Tx offloads will be disabled by default and enabled
according to application needs. This will greatly simplify the PMDs'
management of the different offloads.

Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
field in order to move to the new API, as in the sketch below.
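
For instance, legacy code that enabled UDP checksum offload by
clearing one of the NO* bits maps to the new API as follows (a sketch;
txconf is assumed to come from dev_info.default_txconf):

    /* Old API: offload enabled by clearing its NO* flag. */
    txconf.txq_flags &= ~ETH_TXQ_FLAGS_NOXSUMUDP;

    /* New API: ignore txq_flags and name the offload directly. */
    txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
    txconf.offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;

rte_eth_tx_queue_setup() converts between the two representations
internally (see the conversion helpers below), so a PMD needs to
support only one of them.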

The old Tx offloads API is kept for the time being, in order to enable a
smooth transition of both PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

doc/guides/nics/features.rst

@@ -131,7 +131,8 @@ Lock-free Tx queue
If a PMD advertises the DEV_TX_OFFLOAD_MT_LOCKFREE capability, multiple threads can
invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
* **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
* **[related] API**: ``rte_eth_tx_burst()``.
@@ -220,11 +221,12 @@ TSO
Supports TCP Segmentation Offloading.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
* **[uses] rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
* **[uses] mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
* **[implements] datapath**: ``TSO functionality``.
* **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
.. _nic_features_promiscuous_mode:
@@ -510,10 +512,11 @@ VLAN offload
Supports VLAN offload to hardware.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
@@ -526,11 +529,12 @@ QinQ offload
Supports QinQ (queue in queue) offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
``mbuf.vlan_tci_outer``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
.. _nic_features_l3_checksum_offload:
@@ -541,13 +545,14 @@ L3 checksum offload
Supports L3 checksum offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
.. _nic_features_l4_checksum_offload:
@@ -558,6 +563,7 @@ L4 checksum offload
Supports L4 checksum offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -565,7 +571,7 @@ Supports L4 checksum offload.
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
.. _nic_features_macsec_offload:
@@ -576,9 +582,10 @@ MACsec offload
Supports MACsec.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
.. _nic_features_inner_l3_checksum:
@@ -589,6 +596,7 @@ Inner L3 checksum
Supports inner packet L3 checksum.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
@@ -596,7 +604,7 @@ Supports inner packet L3 checksum.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
.. _nic_features_inner_l4_checksum:

lib/librte_ether/rte_ethdev.c

@@ -1214,6 +1214,50 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
return ret;
}
/**
* A conversion function from the txq_flags API to the offloads API.
*/
static void
rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
{
uint64_t offloads = 0;
if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
*tx_offloads = offloads;
}
/**
* A conversion function from the offloads API to the txq_flags API.
*/
static void
rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
{
uint32_t flags = 0;
if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
*txq_flags = flags;
}
int
rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1221,6 +1265,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
{
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf local_conf;
void **txq;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1265,8 +1310,23 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
if (tx_conf == NULL)
tx_conf = &dev_info.default_txconf;
/*
* Convert between the two offloads APIs so that a PMD needs to
* support only one of them.
*/
local_conf = *tx_conf;
if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) {
rte_eth_convert_txq_offloads(tx_conf->offloads,
&local_conf.txq_flags);
/* Keep the ignore flag. */
local_conf.txq_flags |= ETH_TXQ_FLAGS_IGNORE;
} else {
rte_eth_convert_txq_flags(tx_conf->txq_flags,
&local_conf.offloads);
}
return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
socket_id, tx_conf);
socket_id, &local_conf);
}
void

lib/librte_ether/rte_ethdev.h

@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
*/
struct rte_eth_txmode {
enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
/**
* Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
* Only offloads set in the tx_offload_capa field of the
* rte_eth_dev_info structure are allowed to be set.
*/
uint64_t offloads;
/* For i40e specifically */
uint16_t pvid;
@@ -733,6 +739,15 @@ struct rte_eth_rxconf {
#define ETH_TXQ_FLAGS_NOXSUMS \
(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
ETH_TXQ_FLAGS_NOXSUMTCP)
/**
* When set, the txq_flags field should be ignored; instead,
* per-queue Tx offloads will be set on the offloads field
* located on the rte_eth_txconf struct.
* This flag is temporary until the rte_eth_txconf.txq_flags
* API is deprecated.
*/
#define ETH_TXQ_FLAGS_IGNORE 0x8000
/**
* A structure used to configure a TX ring of an Ethernet port.
*/
@@ -744,6 +759,12 @@ struct rte_eth_txconf {
uint32_t txq_flags; /**< Set flags for the Tx queue */
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
* Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
* Only offloads set in the tx_queue_offload_capa or tx_offload_capa
* fields of the rte_eth_dev_info structure are allowed to be set.
*/
uint64_t offloads;
};
/**
@@ -968,6 +989,8 @@ struct rte_eth_conf {
/**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* tx queue without SW lock.
*/
#define DEV_TX_OFFLOAD_MULTI_SEGS 0x00008000
/**< Device supports multi segment send. */
struct rte_pci_device;
@@ -990,9 +1013,12 @@ struct rte_eth_dev_info {
uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
uint64_t rx_offload_capa;
/**< Device per port RX offload capabilities. */
uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
uint64_t tx_offload_capa;
/**< Device per port TX offload capabilities. */
uint64_t rx_queue_offload_capa;
/**< Device per queue RX offload capabilities. */
uint64_t tx_queue_offload_capa;
/**< Device per queue TX offload capabilities. */
uint16_t reta_size;
/**< Device redirection table size, the total number of entries. */
uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2027,6 +2053,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
* - The *txq_flags* member contains flags to pass to the TX queue setup
* function to configure the behavior of the TX queue. This should be set
* to 0 if no special configuration is required.
* This API is obsolete and will be deprecated. Applications
* should set it to ETH_TXQ_FLAGS_IGNORE and use
* the offloads field below.
* - The *offloads* member contains Tx offloads to be enabled.
* Offloads which are not set cannot be used on the datapath.
*
* Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
* the transmit function to use default values.