ethdev: remove old offload API

In DPDK 17.11, the ethdev offload API was changed:
	commit cba7f53b717d ("ethdev: introduce Tx queue offloads API")
	commit ce17eddefc20 ("ethdev: introduce Rx queue offloads API")
The new API is documented in the programmer's guide:
	http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#hardware-offload

As a reminder, the main concepts of the new API are (sketched below):
	- All offloads are disabled by default.
	- Distinction between per-port and per-queue offloads.
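A minimal sketch of the resulting usage (port_id, nb_txd, socket_id and
dev_info are assumed surrounding variables; the particular offload
choices are illustrative only, not part of this commit):

	struct rte_eth_conf port_conf = {
		.rxmode = {
			.offloads = DEV_RX_OFFLOAD_CHECKSUM,    /* per-port Rx offload */
		},
		.txmode = {
			.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,  /* per-port Tx offload */
		},
	};
	rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	/* Per-queue offloads start from the per-port set and may only add
	 * what the PMD reports as per-queue capable. */
	struct rte_eth_txconf txq_conf = dev_info.default_txconf;
	txq_conf.offloads = port_conf.txmode.offloads;
	rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txq_conf);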

The transition bits are now removed:
	- Translation of the old API in ethdev
	- rte_eth_conf.rxmode.ignore_offload_bitfield
	- ETH_TXQ_FLAGS_IGNORE

The old API bits are now removed:
	- Rx per-port rte_eth_conf.rxmode.[bit-fields]
	- Tx per-queue rte_eth_txconf.txq_flags
	- ETH_TXQ_FLAGS_NO*
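In sketch form, the correspondence implemented by the conversion helpers
deleted below (rte_eth_convert_rx_offload_bitfield and
rte_eth_convert_txq_flags, in the ethdev hunk near the end of this diff)
was:

	/*
	 * Removed bit-field / txq_flags              ->  new offloads bit
	 * rxmode.hw_ip_checksum = 1                  ->  rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM
	 * rxmode.jumbo_frame = 1                     ->  rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME
	 * txq_flags without ETH_TXQ_FLAGS_NOMULTSEGS ->  tx_conf.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS
	 * txq_flags without ETH_TXQ_FLAGS_NOXSUMUDP  ->  tx_conf.offloads |= DEV_TX_OFFLOAD_UDP_CKSUM
	 */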

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
Author: Ferruh Yigit
Date:   2018-07-02 23:27:50 +02:00
Commit: ab3ce1e0c1 (parent: adeddc9c18)

60 changed files with 13 additions and 322 deletions

View File

@ -223,7 +223,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
.ignore_offload_bitfield = 1,
},
.rx_adv_conf = {
.rss_conf = {

View File

@ -334,7 +334,6 @@ lcoreid_t latencystats_lcore_id = -1;
struct rte_eth_rxmode rx_mode = {
.max_rx_pkt_len = ETHER_MAX_LEN, /**< Default maximum frame length. */
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
.ignore_offload_bitfield = 1,
};
struct rte_eth_txmode tx_mode = {
@ -1645,8 +1644,6 @@ start_port(portid_t pid)
port->need_reconfig_queues = 0;
/* setup tx queues */
for (qi = 0; qi < nb_txq; qi++) {
port->tx_conf[qi].txq_flags =
ETH_TXQ_FLAGS_IGNORE;
if ((numa_support) &&
(txring_numa[pi] != NUMA_NO_CONFIG))
diag = rte_eth_tx_queue_setup(pi, qi,

View File

@ -577,7 +577,6 @@ Supports L4 checksum offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] user config**: ``dev_conf.rxmode.hw_ip_checksum``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.

View File

@ -328,11 +328,6 @@ A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setu
is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
In such cases it is not required to set other flags in ``txq_flags``.
For an application to use the Rx offloads API it should set the ``ignore_offload_bitfield`` bit in the ``rte_eth_rxmode`` struct.
In such cases it is not required to set other bitfield offloads in the ``rxmode`` struct.
Poll Mode Driver API
--------------------
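A sketch of the per-queue rule described in the hunk above
(DEV_RX_OFFLOAD_SCATTER is chosen only as an illustrative offload;
whether it is per-queue capable depends on the PMD):

	/* SCATTER was not enabled in rte_eth_dev_configure() */
	struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = port_conf.rxmode.offloads | DEV_RX_OFFLOAD_SCATTER;
	/* Accepted only if reported in dev_info.rx_queue_offload_capa;
	 * otherwise the PMD logs an error and the setup is rejected. */
	rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rxq_conf, mb_pool);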

View File

@ -58,15 +58,6 @@ Deprecation Notices
experimental API ``rte_pktmbuf_attach_extbuf()`` is used. Removal of the macro
is to fix this semantic inconsistency.
* ethdev: a new Tx and Rx offload API was introduced on 17.11.
In the new API, offloads are divided into per-port and per-queue offloads.
Offloads are disabled by default and enabled per application request.
In later releases the old offloading API will be deprecated, which will include:
- removal of ``ETH_TXQ_FLAGS_NO*`` flags.
- removal of ``txq_flags`` field from ``rte_eth_txconf`` struct.
- removal of the offloads bit-field from ``rte_eth_rxmode`` struct.
* ethdev: In v18.11 ``DEV_RX_OFFLOAD_CRC_STRIP`` offload flag will be removed, default
behavior without any flag will be changed to CRC strip.
To keep CRC ``DEV_RX_OFFLOAD_KEEP_CRC`` flag is required.
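From the application side, the remaining CRC notice above translates
roughly to (a sketch; port_conf is an assumed surrounding variable):

	/* Until 18.11: CRC stripping must be requested explicitly. */
	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CRC_STRIP;

	/* From 18.11 on: stripping becomes the default; request the
	 * opposite behavior instead. */
	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;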

View File

@ -139,7 +139,6 @@ application is shown below:
struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -216,7 +215,6 @@ The Ethernet port is configured with default settings using the
struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {

View File

@ -387,8 +387,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = AXGBE_TX_FREE_THRESH,
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
}

View File

@ -371,10 +371,8 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (txq->nb_desc % txq->free_thresh != 0)
txq->vector_disable = 1;
if ((tx_conf->txq_flags & (uint32_t)ETH_TXQ_FLAGS_NOOFFLOADS) !=
ETH_TXQ_FLAGS_NOOFFLOADS) {
if (tx_conf->offloads != 0)
txq->vector_disable = 1;
}
/* Allocate TX ring hardware descriptors */
tsize = txq->nb_desc * sizeof(struct axgbe_tx_desc);

View File

@ -1420,7 +1420,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
/*
* Check if VLAN present only.
* Do not check whether L3/L4 rx checksum done by NIC or not,
* That can be found from rte_eth_rxmode.hw_ip_checksum flag
* That can be found from rte_eth_rxmode.offloads flag
*/
pkt_flags = (rx_status & IXGBE_RXD_STAT_VP) ? vlan_flags : 0;

View File

@ -1439,9 +1439,9 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if ((uint32_t)mtu > ETHER_MAX_LEN)
dev->data->dev_conf.rxmode.jumbo_frame = 1;
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.jumbo_frame = 0;
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
/* update max frame size */
dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
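After this change the hunk body reads as follows (the qede hunk below
follows the same pattern):

	/* switch to jumbo mode if needed */
	if ((uint32_t)mtu > ETHER_MAX_LEN)
		dev->data->dev_conf.rxmode.offloads |=
			DEV_RX_OFFLOAD_JUMBO_FRAME;
	else
		dev->data->dev_conf.rxmode.offloads &=
			~DEV_RX_OFFLOAD_JUMBO_FRAME;

	/* update max frame size */
	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;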

View File

@ -2537,9 +2537,9 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
if (max_rx_pkt_len > ETHER_MAX_LEN)
dev->data->dev_conf.rxmode.jumbo_frame = 1;
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.jumbo_frame = 0;
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);

View File

@ -39,8 +39,6 @@ struct sfc_dp_tx_qcreate_info {
unsigned int max_fill_level;
/** Minimum number of unused Tx descriptors to do reap */
unsigned int free_thresh;
/** Transmit queue configuration flags */
unsigned int flags;
/** Offloads enabled on the transmit queue */
uint64_t offloads;
/** Tx queue size */

View File

@ -874,14 +874,12 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
/*
* The driver does not use it, but other PMDs update jumbo_frame
* The driver does not use it, but other PMDs update jumbo frame
* flag and max_rx_pkt_len when MTU is set.
*/
if (mtu > ETHER_MAX_LEN) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rxmode->jumbo_frame = 1;
}
dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
@ -1089,7 +1087,6 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
memset(qinfo, 0, sizeof(*qinfo));
qinfo->conf.txq_flags = txq_info->txq->flags;
qinfo->conf.offloads = txq_info->txq->offloads;
qinfo->conf.tx_free_thresh = txq_info->txq->free_thresh;
qinfo->conf.tx_deferred_start = txq_info->deferred_start;

View File

@ -1449,7 +1449,6 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
if (rte_eth_dev_must_keep_crc(rxmode->offloads)) {
sfc_warn(sa, "FCS stripping cannot be disabled - always on");
rxmode->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
rxmode->hw_strip_crc = 1;
}
return rc;

View File

@ -171,7 +171,6 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
txq->free_thresh =
(tx_conf->tx_free_thresh) ? tx_conf->tx_free_thresh :
SFC_TX_DEFAULT_FREE_THRESH;
txq->flags = tx_conf->txq_flags;
txq->offloads = offloads;
rc = sfc_dma_alloc(sa, "txq", sw_index, EFX_TXQ_SIZE(txq_info->entries),
@ -182,7 +181,6 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
memset(&info, 0, sizeof(info));
info.max_fill_level = txq_max_fill_level;
info.free_thresh = txq->free_thresh;
info.flags = tx_conf->txq_flags;
info.offloads = offloads;
info.txq_entries = txq_info->entries;
info.dma_desc_size_max = encp->enc_tx_dma_desc_size_max;
@ -431,18 +429,10 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
if (rc != 0)
goto fail_ev_qstart;
/*
* The absence of ETH_TXQ_FLAGS_IGNORE is associated with a legacy
* application which expects that IPv4 checksum offload is enabled
* all the time as there is no legacy flag to turn off the offload.
*/
if ((txq->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ||
(~txq->flags & ETH_TXQ_FLAGS_IGNORE))
if (txq->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_IPV4;
if ((txq->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
((~txq->flags & ETH_TXQ_FLAGS_IGNORE) &&
(offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)))
if (txq->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_IPV4;
if ((txq->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
@ -453,16 +443,7 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
}
/*
* The absence of ETH_TXQ_FLAGS_IGNORE is associated with a legacy
* application. In turn, the absence of ETH_TXQ_FLAGS_NOXSUMTCP is
* associated specifically with a legacy application which expects
* both TCP checksum offload and TSO to be enabled because the legacy
* API does not provide a dedicated mechanism to control TSO.
*/
if ((txq->offloads & DEV_TX_OFFLOAD_TCP_TSO) ||
((~txq->flags & ETH_TXQ_FLAGS_IGNORE) &&
(~txq->flags & ETH_TXQ_FLAGS_NOXSUMTCP)))
if (txq->offloads & DEV_TX_OFFLOAD_TCP_TSO)
flags |= EFX_TXQ_FATSOV2;
rc = efx_tx_qcreate(sa->nic, sw_index, 0, &txq->mem,

View File

@ -58,7 +58,6 @@ struct sfc_txq {
struct sfc_dp_txq *dp;
efx_txq_t *common;
unsigned int free_thresh;
unsigned int flags;
uint64_t offloads;
};

View File

@ -2166,9 +2166,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
dev_info->default_txconf = (struct rte_eth_txconf) {
.txq_flags = ETH_TXQ_FLAGS_NOOFFLOADS
};
host_features = VTPCI_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |

View File

@ -1054,7 +1054,6 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev __rte_unused,
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
dev_info->default_txconf.txq_flags = ETH_TXQ_FLAGS_NOXSUMSCTP;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {

View File

@ -122,7 +122,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_NONE,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.rx_adv_conf = {
@ -177,7 +176,6 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
/* TX setup */
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid), &txq_conf);
@ -246,7 +244,6 @@ bond_port_init(struct rte_mempool *mbuf_pool)
/* TX setup */
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
retval = rte_eth_tx_queue_setup(BOND_PORT, 0, nb_txd,
rte_eth_dev_socket_id(BOND_PORT), &txq_conf);

View File

@ -80,7 +80,6 @@ static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
@ -141,7 +140,6 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf.txmode.offloads;
for (q = 0; q < txRings; q++) {
retval = rte_eth_tx_queue_setup(port, q, nb_txd,

View File

@ -99,7 +99,6 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
memset(&cfg_port, 0, sizeof(cfg_port));
cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
cfg_port.rxmode.ignore_offload_bitfield = 1;
for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
struct app_port *ptr_port = &app_cfg->ports[idx_port];
@ -142,7 +141,6 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
"rte_eth_rx_queue_setup failed"
);
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
if (rte_eth_tx_queue_setup(
idx_port, 0, nb_txd,
rte_eth_dev_socket_id(idx_port), &txconf) < 0)

View File

@ -267,7 +267,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
.rx_adv_conf = {
.rss_conf = {
@ -307,7 +306,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf_default.txmode.offloads;
/* Allocate and set up 1 TX queue per Ethernet port. */
for (q = 0; q < tx_rings; q++) {

View File

@ -88,7 +88,6 @@
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.rxmode = {
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -457,7 +456,6 @@ init_port(uint16_t port)
port, ret);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(port, 0, nb_txd,
rte_eth_dev_socket_id(port),

View File

@ -62,7 +62,6 @@ const char cb_port_delim[] = ":";
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
};
@ -222,7 +221,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf.txmode.offloads;
/* Allocate and set up 1 TX queue per Ethernet port. */
for (q = 0; q < tx_rings; q++) {

View File

@ -121,7 +121,6 @@ init_port(void)
struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {

View File

@ -140,7 +140,6 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_CRC_STRIP),
@ -973,7 +972,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
socket, txconf);

View File

@ -164,7 +164,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_CRC_STRIP),
@ -1121,7 +1120,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,

View File

@ -199,7 +199,6 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_CRC_STRIP,
.ignore_offload_bitfield = 1,
},
.rx_adv_conf = {
.rss_conf = {
@ -1592,7 +1591,6 @@ port_init(uint16_t portid)
printf("Setup txq=%u,%d,%d\n", lcore_id, tx_queueid, socket_id);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, tx_queueid, nb_txd,

View File

@ -109,7 +109,6 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_CRC_STRIP),
},
@ -764,7 +763,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
rte_lcore_to_socket_id(lcore_id), txconf);

View File

@ -95,7 +95,6 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.rxmode = {
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -607,7 +606,6 @@ init_port(uint16_t port)
"port%u (%d)\n", (unsigned)port, ret);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(port, 0, nb_txd,
rte_eth_dev_socket_id(port), &txq_conf);

View File

@ -211,7 +211,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_NONE,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -2371,7 +2370,6 @@ initialize_ports(struct l2fwd_crypto_options *options)
/* init one TX queue on each port */
fflush(stdout);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid),

View File

@ -90,7 +90,6 @@ struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -876,7 +875,6 @@ main(int argc, char **argv)
/* init one TX queue on each port */
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
fflush(stdout);
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,

View File

@ -81,7 +81,6 @@ struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -671,7 +670,6 @@ main(int argc, char **argv)
/* init one TX queue on each port */
fflush(stdout);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid),

View File

@ -82,7 +82,6 @@ static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -670,7 +669,6 @@ main(int argc, char **argv)
/* init one TX queue on each port */
fflush(stdout);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid),

View File

@ -127,7 +127,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CRC_STRIP |
DEV_RX_OFFLOAD_CHECKSUM),
},
@ -1982,7 +1981,6 @@ main(int argc, char **argv)
rte_eth_dev_info_get(portid, &dev_info);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
socketid, txconf);

View File

@ -184,7 +184,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CRC_STRIP |
DEV_RX_OFFLOAD_CHECKSUM),
},
@ -1750,7 +1749,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
socketid, txconf);

View File

@ -161,7 +161,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CRC_STRIP |
DEV_RX_OFFLOAD_CHECKSUM),
},
@ -1009,7 +1008,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
socketid, txconf);

View File

@ -120,7 +120,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CRC_STRIP |
DEV_RX_OFFLOAD_CHECKSUM),
},
@ -909,7 +908,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
socketid, txconf);

View File

@ -79,7 +79,6 @@ struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -653,7 +652,6 @@ main(int argc, char **argv)
/* init one TX queue logical core on each port */
fflush(stdout);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid),

View File

@ -45,7 +45,6 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_CRC_STRIP),
},
@ -466,7 +465,6 @@ app_init_nics(void)
}
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
/* Init TX queues */
if (app.nic_tx_port_mask[port] == 1) {

View File

@ -127,7 +127,6 @@ struct cpu_aff_arg{
static const struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -1073,7 +1072,6 @@ main(int argc, char **argv)
/* init one TX queue on each port */
fflush(stdout);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.tx_offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid),

View File

@ -178,7 +178,6 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_CRC_STRIP),
},
@ -236,7 +235,6 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
}
txq_conf = info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = port_conf.txmode.offloads;
for (q = 0; q < tx_rings; q ++) {
retval = rte_eth_tx_queue_setup(port, q, nb_txd,

View File

@ -26,7 +26,6 @@
struct rte_eth_conf eth_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {

View File

@ -708,7 +708,6 @@ rte_netmap_init_port(uint16_t portid, const struct rte_netmap_port_conf *conf)
rxq_conf = dev_info.default_rxconf;
rxq_conf.offloads = conf->eth_conf->rxmode.offloads;
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = conf->eth_conf->txmode.offloads;
for (i = 0; i < conf->nr_tx_rings; i++) {
ret = rte_eth_tx_queue_setup(portid, i, tx_slots,

View File

@ -35,11 +35,7 @@ volatile uint8_t quit_signal;
static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf_default = {
.rxmode = {
.ignore_offload_bitfield = 1,
},
};
static struct rte_eth_conf port_conf_default;
struct worker_thread_args {
struct rte_ring *ring_in;
@ -293,7 +289,6 @@ configure_eth_port(uint16_t port_id)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf.txmode.offloads;
for (q = 0; q < txRings; q++) {
ret = rte_eth_tx_queue_setup(port_id, q, nb_txd,

View File

@ -306,7 +306,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_CRC_STRIP),
},
@ -3597,7 +3596,6 @@ main(int argc, char **argv)
fflush(stdout);
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
socketid, txconf);

View File

@ -50,7 +50,6 @@ static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
};
@ -217,11 +216,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
/* Allocate and set up 1 TX queue per Ethernet port. */
for (q = 0; q < tx_rings; q++) {
/* Setup txq_flags */
struct rte_eth_txconf *txconf;
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = port_conf.txmode.offloads;
retval = rte_eth_tx_queue_setup(port, q, nb_txd,

View File

@ -56,7 +56,6 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_CRC_STRIP),
},
@ -351,7 +350,6 @@ main(int argc, char **argv)
rte_exit(EXIT_FAILURE, "Port %d RX queue setup error (%d)\n", port_rx, ret);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(port_rx, NIC_TX_QUEUE, nb_txd,
rte_eth_dev_socket_id(port_rx),
@ -383,7 +381,6 @@ main(int argc, char **argv)
rte_exit(EXIT_FAILURE, "Port %d RX queue setup error (%d)\n", port_tx, ret);
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(port_tx, NIC_TX_QUEUE, nb_txd,
rte_eth_dev_socket_id(port_tx),

View File

@ -59,7 +59,6 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = ETHER_MAX_LEN,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -96,7 +95,6 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
tx_conf.tx_free_thresh = 0;
tx_conf.tx_rs_thresh = 0;
tx_conf.tx_deferred_start = 0;
tx_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
/* init port */
RTE_LOG(INFO, APP, "Initializing port %"PRIu16"... ", portid);

View File

@ -24,7 +24,6 @@
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -82,7 +81,6 @@ void configure_eth_port(uint16_t port_id)
/* Initialize the port's TX queue */
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = local_port_conf.txmode.offloads;
ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
rte_eth_dev_socket_id(port_id),

View File

@ -20,7 +20,6 @@
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
};
@ -104,7 +103,6 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf.txmode.offloads;
for (q = 0; q < tx_rings; q++) {
retval = rte_eth_tx_queue_setup(port, q, nb_txd,

View File

@ -97,7 +97,6 @@ init_port(uint16_t port_num)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
.ignore_offload_bitfield = 1,
},
};
const uint16_t rx_rings = 1, tx_rings = num_nodes;
@ -139,7 +138,6 @@ init_port(uint16_t port_num)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf.txmode.offloads;
for (q = 0; q < tx_rings; q++) {
retval = rte_eth_tx_queue_setup(port_num, q, tx_ring_size,

View File

@ -20,7 +20,6 @@
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
};
@ -68,7 +67,6 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
}
txconf = dev_info.default_txconf;
txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf.offloads = port_conf.txmode.offloads;
/* Allocate and set up 1 TX queue per Ethernet port. */
for (q = 0; q < tx_rings; q++) {

View File

@ -69,7 +69,6 @@ uint8_t tep_filter_type[] = {RTE_TUNNEL_FILTER_IMAC_TENID,
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
},
.txmode = {
@ -131,7 +130,6 @@ vxlan_port_init(uint16_t port, struct rte_mempool *mbuf_pool)
rxconf = &dev_info.default_rxconf;
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
if (!rte_eth_dev_is_valid_port(port))
return -1;

View File

@ -116,7 +116,6 @@ static struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
/*
* VLAN strip is necessary for 1G NIC such as I350,
* this fixes bug of ipv4 forwarding in guest can't
@ -256,7 +255,6 @@ port_init(uint16_t port)
rxconf = &dev_info.default_rxconf;
txconf = &dev_info.default_txconf;
rxconf->rx_drop_en = 1;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
/*configure the number of supported virtio devices based on VMDQ limits */
num_devices = dev_info.max_vmdq_pools;

View File

@ -47,7 +47,6 @@ static volatile bool force_quit;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.max_rx_pkt_len = ETHER_MAX_LEN,
.ignore_offload_bitfield = 1,
},
};
@ -83,7 +82,6 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
}
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = port_conf.txmode.offloads;
/* Allocate and set up 1 TX queue per Ethernet port. */
for (q = 0; q < tx_rings; q++) {

View File

@ -65,7 +65,6 @@ static const struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
},
.txmode = {
@ -237,7 +236,6 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
rxconf = &dev_info.default_rxconf;
rxconf->rx_drop_en = 1;
txconf = &dev_info.default_txconf;
txconf->txq_flags = ETH_TXQ_FLAGS_IGNORE;
txconf->offloads = port_conf.txmode.offloads;
for (q = 0; q < rxRings; q++) {
retval = rte_eth_rx_queue_setup(port, q, rxRingSize,

View File

@ -71,7 +71,6 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
.ignore_offload_bitfield = 1,
},
.txmode = {
.mq_mode = ETH_MQ_TX_VMDQ_DCB,
@ -290,7 +289,6 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
}
txq_conf = dev_info.default_txconf;
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = port_conf.txmode.offloads;
for (q = 0; q < num_queues; q++) {
retval = rte_eth_tx_queue_setup(port, q, txRingSize,

View File

@ -974,41 +974,6 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
}
}
/**
* A conversion function from rxmode bitfield API.
*/
static void
rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
uint64_t *rx_offloads)
{
uint64_t offloads = 0;
if (rxmode->header_split == 1)
offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
if (rxmode->hw_ip_checksum == 1)
offloads |= DEV_RX_OFFLOAD_CHECKSUM;
if (rxmode->hw_vlan_filter == 1)
offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
if (rxmode->hw_vlan_strip == 1)
offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
if (rxmode->hw_vlan_extend == 1)
offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
if (rxmode->jumbo_frame == 1)
offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rxmode->hw_strip_crc == 1)
offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
if (rxmode->enable_scatter == 1)
offloads |= DEV_RX_OFFLOAD_SCATTER;
if (rxmode->enable_lro == 1)
offloads |= DEV_RX_OFFLOAD_TCP_LRO;
if (rxmode->hw_timestamp == 1)
offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
if (rxmode->security == 1)
offloads |= DEV_RX_OFFLOAD_SECURITY;
*rx_offloads = offloads;
}
const char * __rte_experimental
rte_eth_dev_rx_offload_name(uint64_t offload)
{
@ -1095,14 +1060,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return -EBUSY;
}
/*
* Convert between the offloads API to enable PMDs to support
* only one of them.
*/
if (dev_conf->rxmode.ignore_offload_bitfield == 0)
rte_eth_convert_rx_offload_bitfield(
&dev_conf->rxmode, &local_conf.rxmode.offloads);
/* Copy the dev_conf parameter into the dev structure */
memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
@ -1547,14 +1504,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
rx_conf = &dev_info.default_rxconf;
local_conf = *rx_conf;
if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
/**
* Reflect port offloads to queue offloads in order for
* offloads to not be discarded.
*/
rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
&local_conf.offloads);
}
/*
* If an offloading has already been enabled in
@ -1596,55 +1545,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return eth_err(port_id, ret);
}
/**
* Convert from tx offloads to txq_flags.
*/
static void
rte_eth_convert_tx_offload(const uint64_t tx_offloads, uint32_t *txq_flags)
{
uint32_t flags = 0;
if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
if (tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
flags |= ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP;
*txq_flags = flags;
}
/**
* A conversion function from txq_flags API.
*/
static void
rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
{
uint64_t offloads = 0;
if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
if ((txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT) &&
(txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP))
offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
*tx_offloads = offloads;
}
int
rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
@ -1707,15 +1607,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
if (tx_conf == NULL)
tx_conf = &dev_info.default_txconf;
/*
* Convert between the offloads API to enable PMDs to support
* only one of them.
*/
local_conf = *tx_conf;
if (!(tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)) {
rte_eth_convert_txq_flags(tx_conf->txq_flags,
&local_conf.offloads);
}
/*
* If an offloading has already been enabled in
@ -2493,7 +2385,6 @@ void
rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
{
struct rte_eth_dev *dev;
struct rte_eth_txconf *txconf;
const struct rte_eth_desc_lim lim = {
.nb_max = UINT16_MAX,
.nb_min = 0,
@ -2515,9 +2406,6 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->nb_tx_queues = dev->data->nb_tx_queues;
dev_info->dev_flags = &dev->data->dev_flags;
txconf = &dev_info->default_txconf;
/* convert offload to txq_flags to support legacy app */
rte_eth_convert_tx_offload(txconf->offloads, &txconf->txq_flags);
}
int
@ -3958,7 +3846,6 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo)
{
struct rte_eth_dev *dev;
struct rte_eth_txconf *txconf = &qinfo->conf;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@ -3975,8 +3862,6 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
memset(qinfo, 0, sizeof(*qinfo));
dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
/* convert offload to txq_flags to support legacy app */
rte_eth_convert_tx_offload(txconf->offloads, &txconf->txq_flags);
return 0;
}

View File

@ -326,7 +326,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if jumbo_frame enabled. */
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@ -334,33 +334,6 @@ struct rte_eth_rxmode {
* structure are allowed to be set.
*/
uint64_t offloads;
__extension__
/**
* Below bitfield API is obsolete. Application should
* enable per-port offloads using the offload field
* above.
*/
uint16_t header_split : 1, /**< Header Split enable. */
hw_ip_checksum : 1, /**< IP/UDP/TCP checksum offload enable. */
hw_vlan_filter : 1, /**< VLAN filter enable. */
hw_vlan_strip : 1, /**< VLAN strip enable. */
hw_vlan_extend : 1, /**< Extended VLAN enable. */
jumbo_frame : 1, /**< Jumbo Frame Receipt enable. */
hw_strip_crc : 1, /**< Enable CRC stripping by hardware. */
enable_scatter : 1, /**< Enable scatter packets rx handler */
enable_lro : 1, /**< Enable LRO */
hw_timestamp : 1, /**< Enable HW timestamp */
security : 1, /**< Enable rte_security offloads */
/**
* When set the offload bitfield should be ignored.
* Instead per-port Rx offloads should be set on offloads
* field above.
* Per-queue offloads shuold be set on rte_eth_rxq_conf
* structure.
* This bit is temporary till rxmode bitfield offloads API will
* be deprecated.
*/
ignore_offload_bitfield : 1;
};
/**
@ -707,28 +680,6 @@ struct rte_eth_rxconf {
uint64_t offloads;
};
#define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
#define ETH_TXQ_FLAGS_NOREFCOUNT 0x0002 /**< refcnt can be ignored */
#define ETH_TXQ_FLAGS_NOMULTMEMP 0x0004 /**< all bufs come from same mempool */
#define ETH_TXQ_FLAGS_NOVLANOFFL 0x0100 /**< disable VLAN offload */
#define ETH_TXQ_FLAGS_NOXSUMSCTP 0x0200 /**< disable SCTP checksum offload */
#define ETH_TXQ_FLAGS_NOXSUMUDP 0x0400 /**< disable UDP checksum offload */
#define ETH_TXQ_FLAGS_NOXSUMTCP 0x0800 /**< disable TCP checksum offload */
#define ETH_TXQ_FLAGS_NOOFFLOADS \
(ETH_TXQ_FLAGS_NOVLANOFFL | ETH_TXQ_FLAGS_NOXSUMSCTP | \
ETH_TXQ_FLAGS_NOXSUMUDP | ETH_TXQ_FLAGS_NOXSUMTCP)
#define ETH_TXQ_FLAGS_NOXSUMS \
(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
ETH_TXQ_FLAGS_NOXSUMTCP)
/**
* When set the txq_flags should be ignored,
* instead per-queue Tx offloads will be set on offloads field
* located on rte_eth_txq_conf struct.
* This flag is temporary till the rte_eth_txq_conf.txq_flags
* API will be deprecated.
*/
#define ETH_TXQ_FLAGS_IGNORE 0x8000
/**
* A structure used to configure a TX ring of an Ethernet port.
*/
@ -738,7 +689,6 @@ struct rte_eth_txconf {
uint16_t tx_free_thresh; /**< Start freeing TX buffers if there are
less free descriptors than this value. */
uint32_t txq_flags; /**< Set flags for the Tx queue */
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
* Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
@ -1680,12 +1630,6 @@ int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* The *tx_rs_thresh* value should be less or equal then
* *tx_free_thresh* value, and both of them should be less then
* *nb_tx_desc* - 3.
* - The *txq_flags* member contains flags to pass to the TX queue setup
* function to configure the behavior of the TX queue. This should be set
* to 0 if no special configuration is required.
* This API is obsolete and will be deprecated. Applications
* should set it to ETH_TXQ_FLAGS_IGNORE and use
* the offloads field below.
* - The *offloads* member contains Tx offloads to be enabled.
* If an offloading set in tx_conf->offloads
* hasn't been set in the input argument eth_conf->txmode.offloads
@ -4097,7 +4041,7 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
*
* If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
* invoke this function concurrently on the same tx queue without SW lock.
* @see rte_eth_dev_info_get, struct rte_eth_txconf::txq_flags
* @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
*
* @see rte_eth_tx_prepare to perform some prior checks or adjustments
* for offloads.