Commit Graph

172 Commits

Author SHA1 Message Date
Feifei Wang
1b73c2d1a1 net/i40e: remove redundant number of packets check
The i40e_xmit_pkts_vec_xx functions check nb_pkts to ensure it does not
exceed rs_thresh.

However, the i40e_xmit_fixed_burst_vec_xx functions perform the same check
again. Delete the redundant check to simplify the code.

Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
2022-04-18 07:47:18 +02:00
Stephen Hemminger
06c047b680 remove unnecessary null checks
Functions like free, rte_free, and rte_mempool_free
already handle NULL pointers, so the checks here are not necessary.

Remove redundant NULL pointer checks before free functions
found by nullfree.cocci

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2022-02-12 12:07:48 +01:00
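A minimal sketch of the pattern removed above; the wrapper functions are hypothetical, and both free() and rte_free() are documented to accept NULL:

    #include <stdlib.h>
    #include <rte_malloc.h>

    /* Before: redundant NULL guards flagged by nullfree.cocci. */
    static void cleanup_before(char *buf, void *dma_mem)
    {
        if (buf != NULL)
            free(buf);
        if (dma_mem != NULL)
            rte_free(dma_mem);
    }

    /* After: the free functions already handle NULL. */
    static void cleanup_after(char *buf, void *dma_mem)
    {
        free(buf);
        rte_free(dma_mem);
    }
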
Josh Soref
7be78d0279 fix spelling in comments and strings
The tool comes from https://github.com/jsoref

Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2022-01-11 12:16:53 +01:00
Ruifeng Wang
c4d3e8fbe4 net/i40e: fix risk in descriptor read in scalar Rx
The Rx descriptor is 16B/32B in size. If the DD bit is set, it indicates
that the rest of the descriptor words have valid values. Hence, the word
containing the DD bit must be read first, before reading the rest of the
descriptor words.

Since the entire descriptor is not read atomically, on relaxed-memory-ordered
systems like aarch64 the read of the word containing the DD field could be
reordered after the reads of the other words.

A read barrier is inserted between the read of the word with the DD field
and the reads of the other words. The barrier ensures that the fetched data
is correct.

A testpmd single-core test showed no performance drop on x86 or N1SDP.
On ThunderX2, a 22% performance regression was observed.

Fixes: 7b0cf70135 ("net/i40e: support ARM platform")
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
2021-11-11 13:29:23 +01:00
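A minimal sketch of the read order fixed above, assuming a simplified 16B descriptor and using rte_smp_rmb() as the read barrier (the barrier actually chosen by the patch may differ):

    #include <stdint.h>
    #include <rte_atomic.h>

    struct rx_desc16 {                 /* simplified layout, for illustration only */
        volatile uint64_t qword0;      /* buffer address / packet metadata */
        volatile uint64_t qword1;      /* length/status, contains the DD bit */
    };

    static inline int
    desc_is_done(const struct rx_desc16 *d, uint64_t dd_mask, uint64_t *qw0)
    {
        uint64_t status = d->qword1;   /* read the word holding DD first */

        if (!(status & dd_mask))
            return 0;
        rte_smp_rmb();                 /* order the DD read before the rest */
        *qw0 = d->qword0;              /* now safe to read the other words */
        return 1;
    }
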
Jie Wang
8cc79a1636 net/i40e: fix forward outer IPv6 VXLAN
In checksum forwarding mode, testpmd needs to calculate the checksum of
each layer's protocol and then fill the offload flags and header lengths
into the mbuf.

In process_outer_cksums, HW calculates the outer checksum if tx_offloads
contains outer UDP checksum; otherwise SW calculates the outer checksum.

When tx_offloads contains outer UDP checksum or outer IPv4 checksum, the
mbuf is filled with the correct header lengths.

This patch adds outer UDP checksum to tx_offload_capa and
I40E_TX_OFFLOAD_MASK, so that when we set csum hw outer-udp on, the
engine can forward outer IPv6 VXLAN packets.

Fixes: 7497d3e2f7 ("net/i40e: convert to new Tx offloads API")
Cc: stable@dpdk.org

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2021-11-05 05:31:22 +01:00
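A hedged sketch of the capability side of the change above, using the post-21.11 flag name; in the driver the real assignment lives in the dev_infos_get callback:

    #include <rte_ethdev.h>

    /* Advertise outer UDP checksum offload so testpmd can enable it. */
    static void
    example_add_outer_udp_cksum_capa(struct rte_eth_dev_info *dev_info)
    {
        dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
    }
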
Olivier Matz
daa02b5cdd mbuf: add namespace to offload flags
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-24 13:37:43 +02:00
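A hedged example of the rename pattern above (e.g. PKT_TX_IP_CKSUM becomes RTE_MBUF_F_TX_IP_CKSUM; see rte_mbuf_core.h for the full mapping):

    #include <rte_mbuf.h>

    /* Request IPv4 header checksum offload using the new flag names. */
    static inline void
    request_ipv4_cksum(struct rte_mbuf *m)
    {
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;
    }
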
Olivier Matz
5b63493241 mbuf: mark old VLAN offload flags as deprecated
The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT have been
marked as deprecated since commit 380a7aab1a ("mbuf: rename deprecated
VLAN flags") (2017), but they were not using the RTE_DEPRECATED
macro because it did not exist at the time. Add it, and replace
usage of these flags.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-24 13:30:40 +02:00
Ferruh Yigit
295968d174 ethdev: add namespace
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros kept for backward compatibility can be removed in the next
LTS. Also update some struct names to have the 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-22 18:15:38 +02:00
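A hedged illustration of the rename above (for example ETH_MQ_RX_RSS becomes RTE_ETH_MQ_RX_RSS and DEV_RX_OFFLOAD_CHECKSUM becomes RTE_ETH_RX_OFFLOAD_CHECKSUM); the compatibility macros keep the old spellings building until they are removed:

    #include <rte_ethdev.h>

    static const struct rte_eth_conf example_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,
            .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
        },
    };
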
Ferruh Yigit
ede6356582 drivers/net: fix removing jumbo offload flag
After the DEV_RX_OFFLOAD_JUMBO_FRAME flag was removed, drivers make jumbo
frame decisions based on MTU value checks, but some of those checks were
wrong, causing device initialization to fail. Fix them.

Fixes: b563c14212 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
2021-10-22 17:44:18 +02:00
Ferruh Yigit
b563c14212 ethdev: remove jumbo offload flag
Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.

Instead of drivers announcing this capability, the application can deduce
it by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.

And instead of the application setting this flag explicitly to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' to
'RTE_ETHER_MTU'.

Remove this additional configuration to simplify things.

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2021-10-18 19:20:21 +02:00
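A hedged sketch of the driver-side deduction that replaces the flag removed above; real PMDs also account for their own L2 overhead:

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ether.h>

    /* Jumbo handling is implied by the configured MTU, not by an offload flag. */
    static inline bool
    mtu_implies_jumbo(uint16_t mtu)
    {
        return mtu > RTE_ETHER_MTU;
    }
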
Ferruh Yigit
1bb4a528c4 ethdev: fix max Rx packet length
There is confusion about setting the max Rx packet length; this patch aims
to clarify it.

The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct rte_eth_conf'.

The 'rte_eth_dev_set_mtu()' API can also be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.

These two APIs are related but work in a disconnected way: they store the
set values in different variables, which makes it hard to figure out which
one to use, and having two different methods for related functionality is
confusing for users.

Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet frame,
  while 'max_rx_pkt_len' is the size of the Ethernet frame. The difference
  is the Ethernet frame overhead, which may differ from device to device
  based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo frames,
  which adds further confusion, and some APIs and PMDs already disregard
  this documented behavior.
* When jumbo frames are enabled, 'max_rx_pkt_len' is a mandatory field,
  which adds configuration complexity for the application.

As a solution, both APIs take the MTU as a parameter and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is updated to 'mtu', and it is always valid, independent
of jumbo frames.

For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function, and the
result should be stored in '(struct rte_eth_dev)->data->mtu'. After that
point both the application and the PMD use the MTU from this variable.

When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
the default 'RTE_ETHER_MTU' value is used.

Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs use
the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to enable
scattered Rx. If scattered Rx is not supported by the device, an MTU bigger
than the Rx buffer size should fail.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-10-18 19:20:20 +02:00
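A hedged sketch of the unified flow after the change above, where both paths end up in the same per-port MTU (error handling trimmed):

    #include <rte_ethdev.h>

    static int
    configure_port_mtu(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mtu = mtu },      /* replaces max_rx_pkt_len */
        };
        int ret = rte_eth_dev_configure(port_id, 1, 1, &conf);

        if (ret != 0)
            return ret;
        /* The same value can also be changed later through the MTU API. */
        return rte_eth_dev_set_mtu(port_id, mtu);
    }
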
Konstantin Ananyev
8d7d4fcdca ethdev: change input parameters for Rx queue count
Currently the majority of fast-path ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends it would be
plausible to unify the parameter list of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:58 +02:00
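A hedged sketch of the resulting driver callback shape described above; the structure and body are illustrative only:

    #include <stdint.h>

    /* Before: uint32_t (*count)(struct rte_eth_dev *dev, uint16_t rx_queue_id);
     * After:  uint32_t (*count)(void *rx_queue);  -- like other fast-path ops */
    struct example_rx_queue {
        uint16_t nb_rx_desc;
        /* ... */
    };

    static uint32_t
    example_rx_queue_count(void *rx_queue)
    {
        const struct example_rx_queue *rxq = rx_queue;

        /* scan the ring for descriptors with DD set ... */
        return rxq->nb_rx_desc;            /* placeholder result */
    }
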
Andrew Rybchenko
6c31a8c20a ethdev: remove legacy Rx descriptor done API
rte_eth_rx_descriptor_status() should be used as a replacement.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-10-11 16:44:57 +02:00
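A hedged usage example of the replacement API named above:

    #include <rte_ethdev.h>

    /* Returns 1 when the descriptor 'offset' entries ahead of the next one the
     * driver will read has been filled in by the hardware. */
    static int
    rx_desc_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
    {
        return rte_eth_rx_descriptor_status(port_id, queue_id, offset) ==
               RTE_ETH_RX_DESC_DONE;
    }
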
Yunjian Wang
e3188d5f99 net/i40e: fix memzone leak on queue re-configure
Normally the queue memzone should be freed when the device is closed.
But the memzone is not freed when the device setup ops run like this:

rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>i40e_dev_rx_queue_release
rte_eth_dev_close
-->i40e_dev_close
---->i40e_dev_free_queues
------>i40e_dev_rx_queue_release
      (not called because nb_rx_queues and nb_tx_queues are 0)

And when the queue count is reconfigured to a smaller number, the memzones
of the higher-indexed queues are lost. This leads to a memory leak, so the
memzone should be released when the queue is released.

Fixes: 460d167958 ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-07 13:38:16 +02:00
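A hedged sketch of the fix pattern above: release the HW ring memzone together with the queue rather than only at close time (the structure is illustrative):

    #include <rte_malloc.h>
    #include <rte_memzone.h>

    struct example_rx_queue {
        const struct rte_memzone *mz;      /* HW ring memory */
        void *sw_ring;
    };

    static void
    example_rx_queue_release(struct example_rx_queue *rxq)
    {
        if (rxq == NULL)
            return;
        rte_memzone_free(rxq->mz);         /* free the ring memzone as well */
        rte_free(rxq->sw_ring);
        rte_free(rxq);
    }
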
Xueming Li
7483341ae5 ethdev: change queue release callback
Currently, most ethdev callback APIs use a queue ID as parameter, but the
Rx and Tx queue release callbacks use the queue object that is used by the
Rx and Tx burst data plane callbacks.

To align with other eth device queue configuration callbacks:
- queue release callbacks are changed to use queue ID
- all drivers are adapted

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-06 19:16:03 +02:00
Robin Zhang
fe2a571c70 net/i40e: remove i40evf
The default VF driver for the Intel 700 Series Ethernet Controller already
switched to iavf in DPDK 21.05, and i40evf no longer needs to be
maintained, so remove the i40evf-related code.

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-05 12:31:37 +02:00
Qiming Chen
4b458675d3 net/i40e: fix mbuf leak
A local test found that repeated port start and stop operations during
continuous SSE vector scattered (buffer list) receive will exhaust the mbuf
resources. The root cause is that when the port is stopped, the mbuf held
by the pkt_first_seg pointer is not released, so it leaks. The fix is to
check whether the pointer is non-NULL when the port is stopped, and to
release the corresponding mbuf if it is not.

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-09-13 04:30:01 +02:00
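A hedged sketch of the fix above: when the queue's mbufs are released at port stop, also drop any in-progress scattered packet (shown on an illustrative structure):

    #include <rte_mbuf.h>

    struct example_rx_queue {
        struct rte_mbuf *pkt_first_seg;    /* head of a partially received packet */
        struct rte_mbuf *pkt_last_seg;
    };

    static void
    example_release_pending_seg(struct example_rx_queue *rxq)
    {
        /* rte_pktmbuf_free() accepts NULL; clearing the pointers keeps the
         * queue state consistent for the next start. */
        rte_pktmbuf_free(rxq->pkt_first_seg);
        rxq->pkt_first_seg = NULL;
        rxq->pkt_last_seg = NULL;
    }
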
Ruifeng Wang
1a3f6cde64 net/i40e: fix clang warning on non-x86
Building on aarch64 with clang-10 produces a warning:
i40e_rxtx.c:3228:1:
	warning: unused function 'get_avx_supported' [-Wunused-function]

The function is used only in the x86-specific path. Move it into an #ifdef
to fix the build on non-x86.

Fixes: c30751afc3 ("net/i40e: fix data path selection in secondary process")
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-08-10 09:48:33 +02:00
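A minimal sketch of the fix above: confine the helper to the x86-only build so other architectures never see an unused static function (body elided):

    #include <stdbool.h>

    #ifdef RTE_ARCH_X86
    /* Only referenced by the x86 vector-path selection code. */
    static bool
    get_avx_supported(bool request_avx512)
    {
        /* ... CPU flag checks ... */
        (void)request_avx512;
        return false;                      /* placeholder */
    }
    #endif /* RTE_ARCH_X86 */
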
Joyce Kong
8649e23566 net/i40e: replace SMP barrier with thread fence in Rx
Simply replace the SMP barrier with an atomic thread fence for the
i40e hw ring scan, if there is no synchronization point.

Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-07-16 10:11:30 +02:00
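A hedged before/after sketch of the replacement described above:

    #include <rte_atomic.h>

    static inline void
    hw_ring_scan_acquire(void)
    {
        /* Before: rte_smp_rmb(); */
        rte_atomic_thread_fence(__ATOMIC_ACQUIRE);   /* C11-style acquire fence */
    }
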
Anatoly Burakov
6afc4baf4f eal: use callbacks for power monitoring comparison
Previously, the semantics of power monitor were such that we were
checking current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.

This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define
their own comparison semantics and decision making on how to detect the
need to abort the entering of power optimized state.

Existing implementations are adjusted to follow the new semantics.

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
2021-07-09 21:13:13 +02:00
Joyce Kong
65b2ec7b4f net/i40e: fix descriptor scan on Arm
On Arm platforms, descriptor reads can be reordered, so the observed DD
bits may not be contiguous. Add logic to process only contiguous
descriptors by checking the DD bits.

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
2021-07-09 05:05:19 +02:00
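A minimal sketch of the contiguity check above, over a simplified array of status words:

    #include <stdint.h>

    /* Process only the leading run of descriptors whose DD bit is set, so a
     * reordered read of a later descriptor is never trusted. */
    static uint16_t
    count_contiguous_dd(const uint64_t *status, uint16_t n, uint64_t dd_mask)
    {
        uint16_t i;

        for (i = 0; i < n; i++)
            if ((status[i] & dd_mask) == 0)
                break;
        return i;
    }
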
Dapeng Yu
e391a7b7f8 net/i40e: fix multi-process shared data
The rte_eth_devices array is not in shared memory, so it should not be
referenced by i40e_adapter, which is shared by the primary and secondary
processes. Any process that sets i40e_adapter->eth_dev will corrupt another
process's context.

The patch removes the field "eth_dev" from i40e_adapter.
Now, when the data paths need to access the rte_eth_dev_data instance,
they should use adapter->pf.dev_data instead of adapter->eth_dev->data.

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-07-06 04:59:01 +02:00
Feifei Wang
95e7bb6a5f net/i40e: improve scalar Tx performance
For the i40e scalar Tx path, when FAST_FREE_MBUF mode is implemented, all
mbufs of a queue come from the same mempool and have refcnt = 1.

Thus we can free the buffers in bulk when mbuf fast free mode is enabled.

Following are the test results with this patch:

MRR L3FWD Test:
two ports & bi-directional flows & one core
RX API: i40e_recv_pkts_bulk_alloc
TX API: i40e_xmit_pkts_simple
ring_descs_size = 1024;
Ring_I40E_TX_MAX_FREE_SZ = 64;
tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;

For the scalar path on Arm platforms with the default 'tx_rs_thresh':
On N1SDP, performance is improved by 7.9%;
On ThunderX2, performance is improved by 7.6%.

For the scalar path on the x86 platform with the default 'tx_rs_thresh':
performance is improved by 4.7%.

Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2021-07-06 04:59:01 +02:00
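A hedged sketch of the bulk free enabled by mbuf fast free as described above (refcnt == 1, a single mempool per queue); names are illustrative:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* With fast free, completed mbufs can be returned to their common mempool
     * in one bulk call instead of being freed one at a time. */
    static void
    tx_fast_free_bulk(struct rte_mbuf **txep, uint16_t n)
    {
        if (n == 0)
            return;
        rte_mempool_put_bulk(txep[0]->pool, (void **)txep, n);
    }
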
Dapeng Yu
c30751afc3 net/i40e: fix data path selection in secondary process
The flags use_avx2 and use_avx512 were defined as local variables, so the
secondary process is not aware of them and the wrong data path is selected.
Fix the issue by moving them into struct i40e_adapter.

Fixes: 6ada10deac ("net/i40e: remove devarg use-latest-supported-vec")
Fixes: e6a6a13891 ("net/i40e: add AVX512 vector path")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-07-04 16:57:10 +02:00
Alvin Zhang
752ab161bd net/i40e: fix offload flag checking in simple Tx
The Tx offload flags 'PKT_TX_IPV6, PKT_TX_IPV4, PKT_TX_OUTER_IPV6,
PKT_TX_OUTER_IPV4' are supported in the simple datapath.

This patch removes these offload flags from packet checking in the simple
Tx datapath and defines two macros, I40E_TX_OFFLOAD_SIMPLE_SUP_MASK
and I40E_TX_OFFLOAD_SIMPLE_NOTSUP_MASK.

Fixes: 146ffa81d0 ("net/i40e: add Tx preparation for simple Tx datapath")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
2021-05-12 10:50:36 +02:00
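A hedged sketch of the two-mask pattern above, shown with the post-21.11 flag names (the real masks in the driver contain more flags):

    #include <rte_mbuf.h>

    /* Flags the simple Tx path can carry without falling back. */
    #define EXAMPLE_TX_SIMPLE_SUP_MASK \
        (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6 | \
         RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)

    /* Any other Tx offload flag makes the packet ineligible. */
    #define EXAMPLE_TX_SIMPLE_NOTSUP_MASK \
        (RTE_MBUF_F_TX_OFFLOAD_MASK ^ EXAMPLE_TX_SIMPLE_SUP_MASK)

    static inline int
    tx_simple_supported(const struct rte_mbuf *m)
    {
        return (m->ol_flags & EXAMPLE_TX_SIMPLE_NOTSUP_MASK) == 0;
    }
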
Chengwen Feng
70077b8630 net/i40e: remove redundant VSI check in Tx queue setup
The VSI pointer is always valid, so there is no need to check it.

Fixes: b6583ee402 ("i40e: full VMDQ pools support")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-04-23 10:59:40 +02:00
Leyi Rong
146ffa81d0 net/i40e: add Tx preparation for simple Tx datapath
Introduce i40e_simple_prep_pkts() as the preparation function for the
simple Tx data path; it performs the sanity checks for simple Tx.

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-04-20 16:01:15 +02:00
Qi Zhang
1afe24acb4 net/i40e: refine debug build option
1. replace RTE_LIBRTE_I40E_DEBUG_RX with RTE_ETHDEV_DEBUG_RX.
2. replace RTE_LIBRTE_I40E_DEBUG_TX with RTE_ETHDEV_DEBUG_TX.
3. merge RTE_LIBRTE_I40E_DEBUG_TX_FREE and RTE_LIBRTE_ETHDEV_DEBUG
   into RTE_ETHDEV_DEBUG_TX

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-04-01 16:10:20 +02:00
Lance Richardson
e8a419d6de mbuf: rename outer IP checksum macro
Rename PKT_RX_EIP_CKSUM_BAD to PKT_RX_OUTER_IP_CKSUM_BAD and
deprecate the original name. The new name is better aligned
with existing PKT_RX_OUTER_* flags, which should help reduce
confusion about its use.

Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-03-02 10:57:28 +01:00
Bruce Richardson
df96fd0d73 ethdev: make driver-only headers private
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files are
for drivers only; they should be private to DPDK and not installed.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
2021-01-29 20:59:09 +01:00
Anatoly Burakov
f400ea0b4c eal: rename power monitor condition member
The `data_sz` name is fine, but it looks out of place because nothing
else has "data" prefix in that structure. Rename it to "size", as well
as add more clarity to the comments around each struct member.

Fixes: 6a17919b0e ("eal: change power intrinsics API")

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-01-29 15:29:48 +01:00
David Marchand
c69b702e5f net/i40e: remove vector config
This config item is not exposed anymore now that we removed make
support.
Note: all architectures provide vectorised functions.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-01-19 03:30:31 +01:00
Steve Yang
c12f0976cb net/i40e: fix jumbo frame flag condition
The jumbo frame check uses 'RTE_ETHER_MAX_LEN' as the boundary condition,
but the Ethernet overhead is larger than 18 bytes when dual VLAN tags are
supported. That causes the jumbo frame Rx offload flag to be wrong when the
MTU size is 'RTE_ETHER_MTU'.

This fix changes the boundary condition to 'RTE_ETHER_MTU' plus the
overhead, which may impact jumbo-frame-related cases.

Fixes: c1715402df ("i40evf: fix jumbo frame support")
Fixes: 43e5488c0a ("net/i40e: support MTU configuration")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Fixes: c3ac7c5b0b ("net/i40e: convert to new Rx offloads API")
Cc: stable@dpdk.org

Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
2021-01-19 03:30:14 +01:00
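A hedged sketch of the corrected check above, with an illustrative overhead that accounts for dual VLAN tags (real drivers use their own overhead macro):

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ether.h>

    /* Ethernet header + CRC + two VLAN tags (QinQ). */
    #define EXAMPLE_ETH_OVERHEAD \
        (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 2 * RTE_VLAN_HLEN)

    static inline bool
    frame_is_jumbo(uint32_t frame_size)
    {
        /* Old check: frame_size > RTE_ETHER_MAX_LEN, which misses QinQ overhead. */
        return frame_size > RTE_ETHER_MTU + EXAMPLE_ETH_OVERHEAD;
    }
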
Leyi Rong
5171b4ee6b net/i40e: optimize Tx by using AVX512
Optimize Tx path by using AVX512 instructions and vectorize the
tx free bufs process.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2021-01-19 03:30:13 +01:00
Leyi Rong
e6a6a13891 net/i40e: add AVX512 vector path
Add AVX512 support for i40e PMD. This patch adds i40e_rxtx_vec_avx512.c
to support i40e AVX512 vPMD.

This patch aims to enable AVX512 on the i40e vPMD. The main changes focus
on the Rx path compared with the AVX2 vPMD.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2021-01-19 03:30:11 +01:00
Leyi Rong
6ada10deac net/i40e: remove devarg use-latest-supported-vec
As the EAL parameter --force-max-simd-bitwidth has already been
introduced, remove support for the devarg use-latest-supported-vec to make
Rx/Tx function selection clearer.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2021-01-19 03:29:53 +01:00
Liang Ma
a683abf90a net/i40e: implement power management API
Implement support for the power management API by implementing a
`get_monitor_addr` function that will return an address of an RX ring's
status bit.

Signed-off-by: Liang Ma <liang.j.ma@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
2021-01-19 00:00:28 +01:00
Ciara Power
6e3343f435 net/i40e: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be satisfied
to ensure the max SIMD bitwidth allows the CPU-enabled path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-19 16:45:02 +02:00
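A hedged sketch of the extra condition above, using the API added in rte_vect.h:

    #include <rte_vect.h>

    /* The AVX2 path may be taken only when the CPU supports it and the
     * configured max SIMD bitwidth allows 256-bit operation. */
    static inline int
    avx2_path_allowed(int cpu_has_avx2)
    {
        return cpu_has_avx2 &&
               rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256;
    }
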
Radu Nicolau
0a65bf8d41 net/i40e: use write combining store for tail updates
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2020-10-13 14:37:15 +02:00
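A hedged sketch of the tail update above, assuming the write-combining helper rte_write32_wc() from rte_io.h; the register pointer is illustrative:

    #include <rte_byteorder.h>
    #include <rte_io.h>

    /* Regular MMIO write:  rte_write32(rte_cpu_to_le_32(tail), tail_reg);
     * WC variant relaxes ordering for faster queue tail updates: */
    static inline void
    update_tail_wc(volatile void *tail_reg, uint32_t tail)
    {
        rte_write32_wc(rte_cpu_to_le_32(tail), tail_reg);
    }
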
Phil Yang
f0f5d844d1 eal: remove deprecated coherent IO memory barriers
The 20.08 release deprecated the rte_cio_*mb APIs because they provide the
same functionality as the rte_io_*mb APIs on all platforms, so remove them
and use rte_io_*mb instead.

Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-09-23 13:40:26 +02:00
Ciara Power
ec260aa3ad config: remove default configs used with make
Make is no longer supported for compiling DPDK, so the config files are no
longer needed.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-09-08 00:11:30 +02:00
Chenmin Sun
febc61d350 net/i40e: optimize flow director update rate
This patch optimizes the fdir update rate for the i40e PF by tracking
whether the fdir rule is inserted into the guaranteed space or the shared
space.
For flows that are inserted into the guaranteed space, we assume that the
insertion will always succeed, as the hardware only reports the "not
enough space left" error. In this case, the software can directly return
success and does not need to retrieve the result from the hardware. When
destroying a flow, we also assume the operation will succeed, as the
software has checked that the flow is indeed in the hardware.
See the fdir programming status descriptor format in the datasheet
for more details.

Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2020-07-21 13:54:54 +02:00
Chenmin Sun
f97d6192d5 net/i40e: support flow director space tracking
This patch introduces FDIR flow management for guaranteed/shared
space tracking.
The fdir space is reported by i40e_hw_capabilities.fd_filters_guaranteed
and fd_filters_best_effort.
The fdir space is managed by hardware and is now tracked in software.
The management algorithm is controlled by GLQF_CTL.INVALPRIO.
For implementation details, check the datasheet and the description of
struct i40e_fdir_info.fdir_invalprio.

This patch changes the global register GLQF_CTL. Therefore, when devarg
``support-multi-driver`` is set, the patch will not take effect to
avoid affecting the normal behavior of other i40e drivers, e.g., Linux
kernel driver.

Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2020-07-21 13:54:54 +02:00
Renata Saiakhova
460d167958 drivers/net: delete HW rings while freeing queues
Delete memzones for HW rings in igb and ixgbe while freeing queues

Updated igb, ixgbe, i40e, ice & em drivers.

Signed-off-by: Renata Saiakhova <renata.saiakhova@ekinops.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-07-11 06:18:54 +02:00
Qiming Yang
6cc330b709 net/i40e: fix queue related exception handling
Queue start failure and queue stop failure should be handled differently.
When a queue start fails, all subsequent actions should be terminated and
the already-started queues should be cleared. But in the queue stop stage,
one queue failing to stop should not prevent stopping the other queues.
This patch fixes that issue in the PF and VF.

Fixes: b6583ee402 ("i40e: full VMDQ pools support")
Fixes: 3f6a696f10 ("i40evf: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-05-19 17:12:16 +02:00
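A hedged sketch of the asymmetric handling above (the callbacks are illustrative stand-ins for the PF/VF queue start/stop helpers):

    #include <stdint.h>

    /* Start: abort on the first failure and roll back what was started. */
    static int
    start_all_rx_queues(uint16_t nb_q, int (*q_start)(uint16_t),
                        int (*q_stop)(uint16_t))
    {
        uint16_t i;

        for (i = 0; i < nb_q; i++) {
            if (q_start(i) != 0) {
                while (i-- > 0)
                    (void)q_stop(i);
                return -1;
            }
        }
        return 0;
    }

    /* Stop: keep going even if one queue fails, and report the first error. */
    static int
    stop_all_rx_queues(uint16_t nb_q, int (*q_stop)(uint16_t))
    {
        int ret = 0;
        uint16_t i;

        for (i = 0; i < nb_q; i++)
            if (q_stop(i) != 0 && ret == 0)
                ret = -1;
        return ret;
    }
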
Thomas Monjalon
ce6427ddca replace cold attributes
The new macro __rte_cold, for compiler hinting,
is now used where appropriate for consistency.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
2020-04-16 18:30:58 +02:00
Gavin Hu
bade47a757 net/i40e: relax barrier in Tx
To keep ordering of mixed accesses, rte_cio is sufficient.
The rte_io barrier inside the I40E_PCI_REG_WRITE is overkill.[1]

[1] http://inbox.dpdk.org/dev/CALBAE1M-ezVWCjqCZDBw+MMDEC4O9qf0Kpn89EMdGDajepKoZQ@mail.gmail.com

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2020-03-18 10:21:41 +01:00
Beilei Xing
ba950e6276 net/i40e: fix unchecked Tx cleanup error
Coverity complains of unchecked return value warning of
i40e_xmit_cleanup, while this cleanup is opportunistic and will not
cause problems if it fails. So instead of checking the return value of
i40e_xmit_cleanup and return in case of cleanup failure, we directly
cast it to void function to make the Coverity happy.

Coverity issue: 353617
Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-02-14 12:42:13 +01:00
Chenxu Di
6a8defc552 net/i40e: cleanup Tx buffers
Add support to the i40e driver for the rte_eth_tx_done_cleanup API,
to force freeing of consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-01-17 19:46:26 +01:00
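A hedged usage example of the API named above, from the application side:

    #include <rte_ethdev.h>

    /* Ask the PMD to free already-transmitted mbufs on a Tx queue; a free_cnt
     * of 0 means "as many as possible". Returns the count freed or a negative
     * errno. */
    static int
    reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
    {
        return rte_eth_tx_done_cleanup(port_id, queue_id, 0);
    }
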
Xiaoyun Li
29b2ba82c4 net/i40e: fix Tx when TSO is enabled
The hardware limits the max buffer size per Tx descriptor to (16K-1) bytes.
So when TSO is enabled, the mbuf data size may exceed the limit and cause
the NIC to misbehave. This patch fixes the issue by using more Tx
descriptors for such large buffers.

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
2020-01-17 19:46:01 +01:00
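A hedged sketch of splitting an oversized segment across descriptors as described above; the limit macro name is illustrative:

    #include <stdint.h>

    #define EXAMPLE_MAX_DATA_PER_TXD ((16u * 1024u) - 1u)   /* (16K-1) bytes */

    /* Number of Tx descriptors needed to cover one mbuf segment. */
    static inline uint16_t
    descs_for_segment(uint32_t data_len)
    {
        return (data_len + EXAMPLE_MAX_DATA_PER_TXD - 1) /
               EXAMPLE_MAX_DATA_PER_TXD;
    }
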