Commit Graph

138 Commits

Leyi Rong
e6a6a13891 net/i40e: add AVX512 vector path
Add AVX512 support for the i40e PMD. This patch adds i40e_rxtx_vec_avx512.c
to implement the i40e AVX512 vPMD.

The main changes, compared with the AVX2 vPMD, are focused on the Rx path.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2021-01-19 03:30:11 +01:00
Leyi Rong
6ada10deac net/i40e: remove devarg use-latest-supported-vec
Now that the EAL parameter --force-max-simd-bitwidth has been introduced,
remove support for the devarg use-latest-supported-vec to make the
Rx/Tx function selection clearer.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2021-01-19 03:29:53 +01:00
Liang Ma
a683abf90a net/i40e: implement power management API
Implement support for the power management API by providing a
`get_monitor_addr` function that returns the address of an Rx ring's
status bit.
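
A minimal sketch of the idea, using a made-up condition struct and descriptor
layout rather than the real rte_power_monitor_cond and i40e definitions: the
callback hands back the address of the next descriptor's status word and the
DD bit to watch.

    #include <stdint.h>

    /* Illustrative stand-ins only; not the actual DPDK/i40e definitions. */
    struct monitor_cond {
        volatile void *addr;   /* address the sleeping core will monitor */
        uint64_t mask;         /* bits of interest at that address */
        uint64_t expected;     /* wake when (*addr & mask) != expected */
    };

    struct rx_desc {
        uint64_t status;       /* hardware sets the DD bit on packet arrival */
    };

    struct rx_queue {
        volatile struct rx_desc *ring;
        uint16_t next_to_check;
    };

    #define DD_BIT (1ULL << 0)  /* assumed bit position for this sketch */

    static int
    rxq_get_monitor_addr(struct rx_queue *rxq, struct monitor_cond *cond)
    {
        volatile struct rx_desc *d = &rxq->ring[rxq->next_to_check];

        /* Sleep until hardware flips the DD bit of the next descriptor. */
        cond->addr = &d->status;
        cond->mask = DD_BIT;
        cond->expected = 0;     /* descriptor not yet done */
        return 0;
    }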

Signed-off-by: Liang Ma <liang.j.ma@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
2021-01-19 00:00:28 +01:00
Ciara Power
6e3343f435 net/i40e: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.
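
A sketch of the extra gate, assuming the AVX2 path is the one being selected;
the function name is made up, but rte_vect_get_max_simd_bitwidth() and
RTE_VECT_SIMD_256 are the EAL hooks the check builds on.

    #include <stdbool.h>
    #include <rte_cpuflags.h>
    #include <rte_vect.h>

    /* Vector path selection now needs both the CPU flag and permission from
     * the configured max SIMD bitwidth (--force-max-simd-bitwidth). */
    static bool
    may_use_avx2_path(void)
    {
        return rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) &&
               rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256;
    }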

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-19 16:45:02 +02:00
Radu Nicolau
0a65bf8d41 net/i40e: use write combining store for tail updates
Performance improvement: use a write-combining store
instead of a regular MMIO write to update queue tail
registers.
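
A sketch of the change around a hypothetical tail-update helper:
rte_write32_wc() issues a write-combining store where the platform supports
it and otherwise behaves like a normal MMIO write.

    #include <rte_byteorder.h>
    #include <rte_io.h>

    /* Hypothetical tail-update helper: the only change is switching the
     * regular MMIO write to its write-combining variant. */
    static inline void
    queue_tail_update(volatile void *tail_reg, uint16_t tail)
    {
        /* Before: rte_write32(rte_cpu_to_le_32(tail), tail_reg); */
        rte_write32_wc(rte_cpu_to_le_32(tail), tail_reg);
    }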

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2020-10-13 14:37:15 +02:00
Phil Yang
f0f5d844d1 eal: remove deprecated coherent IO memory barriers
The 20.08 release deprecated the rte_cio_*mb APIs because they provide
the same functionality as the rte_io_*mb APIs on all platforms, so
remove them and use rte_io_*mb instead.

Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-09-23 13:40:26 +02:00
Ciara Power
ec260aa3ad config: remove default configs used with make
Make is no longer supported for compiling DPDK, so the config files are
no longer needed.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-09-08 00:11:30 +02:00
Chenmin Sun
febc61d350 net/i40e: optimize flow director update rate
This patch optimizes the FDIR update rate for the i40e PF by tracking
whether an FDIR rule is inserted into the guaranteed space or the
shared space.
For flows inserted into the guaranteed space, we assume the insertion
always succeeds, as the hardware only reports a "not enough space left"
error. In this case the software can return success directly and does
not need to retrieve the result from the hardware. When destroying a
flow, we also assume the operation succeeds, as the software has already
checked that the flow is indeed present in the hardware.
See the FDIR programming status descriptor format in the datasheet
for more details.

Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2020-07-21 13:54:54 +02:00
Chenmin Sun
f97d6192d5 net/i40e: support flow director space tracking
This patch introduces FDIR flow management for guaranteed/shared
space tracking.
The FDIR space is reported by i40e_hw_capabilities.fd_filters_guaranteed
and fd_filters_best_effort.
The FDIR space is managed by hardware and is now also tracked in software.
The management algorithm is controlled by GLQF_CTL.INVALPRIO.
For implementation details, see the datasheet and the description of
struct i40e_fdir_info.fdir_invalprio.
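
A self-contained sketch of the kind of software tracking described above; the
struct and function are invented for illustration and only mirror the
guaranteed/best-effort split reported by the hardware capabilities.

    #include <stdbool.h>
    #include <stdint.h>

    /* Invented tracker mirroring fd_filters_guaranteed / fd_filters_best_effort. */
    struct fdir_space {
        uint32_t guaranteed_free;
        uint32_t shared_free;
    };

    /* Reserve one filter slot, preferring the guaranteed region. Reports which
     * region was used so the caller knows whether to poll the programming
     * status (shared) or trust the insert to succeed (guaranteed). */
    static bool
    fdir_space_reserve(struct fdir_space *s, bool *in_guaranteed)
    {
        if (s->guaranteed_free > 0) {
            s->guaranteed_free--;
            *in_guaranteed = true;
            return true;
        }
        if (s->shared_free > 0) {
            s->shared_free--;
            *in_guaranteed = false;
            return true;
        }
        return false;   /* no space left in either region */
    }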

This patch changes the global register GLQF_CTL. Therefore, when devarg
``support-multi-driver`` is set, the patch will not take effect to
avoid affecting the normal behavior of other i40e drivers, e.g., Linux
kernel driver.

Signed-off-by: Chenmin Sun <chenmin.sun@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2020-07-21 13:54:54 +02:00
Renata Saiakhova
460d167958 drivers/net: delete HW rings while freeing queues
Delete memzones for HW rings in igb and ixgbe while freeing queues

Updated igb, ixgbe, i40e, ice & em drivers.

Signed-off-by: Renata Saiakhova <renata.saiakhova@ekinops.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-07-11 06:18:54 +02:00
Qiming Yang
6cc330b709 net/i40e: fix queue related exception handling
Queue start failure and queue stop failure should be handled
differently. When a queue fails to start, all subsequent actions should
be terminated and the queues already started should be cleared. But in
the queue stop stage, one queue failing to stop should not prevent the
other queues from being stopped. This patch fixes that issue in the PF
and the VF.
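
A sketch of the two policies in generic form (the callbacks stand in for the
driver's per-queue start/stop helpers): start aborts and rolls back on the
first failure, stop keeps going and only remembers that something failed.

    #include <stdint.h>

    typedef int (*queue_op_t)(uint16_t queue_id);

    static int
    start_all_queues(queue_op_t start, queue_op_t stop, uint16_t nb_queues)
    {
        uint16_t i;

        for (i = 0; i < nb_queues; i++) {
            if (start(i) != 0) {
                /* Terminate and clear the queues already started. */
                while (i-- > 0)
                    stop(i);
                return -1;
            }
        }
        return 0;
    }

    static int
    stop_all_queues(queue_op_t stop, uint16_t nb_queues)
    {
        int ret = 0;
        uint16_t i;

        /* One queue failing to stop must not prevent stopping the rest. */
        for (i = 0; i < nb_queues; i++) {
            if (stop(i) != 0)
                ret = -1;   /* remember the failure, but continue */
        }
        return ret;
    }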

Fixes: b6583ee402 ("i40e: full VMDQ pools support")
Fixes: 3f6a696f10 ("i40evf: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-05-19 17:12:16 +02:00
Thomas Monjalon
ce6427ddca replace cold attributes
The new macro __rte_cold, for compiler hinting,
is now used where appropriate for consistency.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
2020-04-16 18:30:58 +02:00
Gavin Hu
bade47a757 net/i40e: relax barrier in Tx
To keep the ordering of mixed accesses, an rte_cio barrier is
sufficient; the rte_io barrier inside I40E_PCI_REG_WRITE is overkill. [1]

[1] http://inbox.dpdk.org/dev/CALBAE1M-ezVWCjqCZDBw+MMDEC4O9qf0Kpn89EMdGDajepKoZQ@mail.gmail.com

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2020-03-18 10:21:41 +01:00
Beilei Xing
ba950e6276 net/i40e: fix unchecked Tx cleanup error
Coverity reports an unchecked-return-value warning for
i40e_xmit_cleanup, but this cleanup is opportunistic and causes no
problems if it fails. So instead of checking the return value of
i40e_xmit_cleanup and returning on cleanup failure, we cast the call to
void to keep Coverity happy.

Coverity issue: 353617
Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-02-14 12:42:13 +01:00
Chenxu Di
6a8defc552 net/i40e: cleanup Tx buffers
Add support in the i40e driver for the rte_eth_tx_done_cleanup API to
force-free consumed buffers on the Tx ring.
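
A usage sketch from the application side; the port/queue numbers are
arbitrary. rte_eth_tx_done_cleanup() asks the PMD to free up to free_cnt
already-transmitted mbufs (0 means no limit) and returns the number freed or
a negative error.

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
    {
        /* 0 = no limit on how many completed buffers to free */
        int freed = rte_eth_tx_done_cleanup(port_id, queue_id, 0);

        if (freed < 0)
            printf("cleanup not supported or failed: %d\n", freed);
        else
            printf("freed %d consumed Tx buffers\n", freed);
    }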

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-01-17 19:46:26 +01:00
Xiaoyun Li
29b2ba82c4 net/i40e: fix Tx when TSO is enabled
Hardware limits the max buffer size per Tx descriptor to (16K-1) bytes.
So when TSO is enabled, the mbuf data size may exceed this limit and
cause erroneous behavior in the NIC. This patch fixes the issue by using
more Tx descriptors for such large buffers.
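
A sketch of the arithmetic: with a per-descriptor limit of (16K-1) bytes, a
large buffer simply needs ceil(len / limit) data descriptors. The constant
and helper below are illustrative, not the driver's exact macros.

    #include <stdint.h>

    #define MAX_DATA_PER_TXD ((uint32_t)(16 * 1024 - 1))  /* (16K-1)B HW limit */

    /* Number of Tx data descriptors needed for one buffer of len bytes. */
    static inline uint16_t
    nb_descs_for_segment(uint32_t len)
    {
        return (uint16_t)((len + MAX_DATA_PER_TXD - 1) / MAX_DATA_PER_TXD);
    }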

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
2020-01-17 19:46:01 +01:00
Haiyue Wang
8dedb54699 ethdev: enhance burst mode information API
Change the type of the burst mode information from a bit field to a
free-form string, so that each PMD can describe the Rx/Tx burst
functions flexibly.
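
A sketch of reading the free-form description from the application side,
assuming the info string field introduced by this change; the local names are
arbitrary.

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_rx_burst_mode(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_burst_mode mode;

        if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
            printf("port %u rxq %u burst mode: %s\n",
                   port_id, queue_id, mode.info);
    }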

Fixes: eb5902504a ("ethdev: add API for getting burst mode information")
Fixes: 6b6609f68c ("net/i40e: support Rx/Tx burst mode info")
Fixes: e9a10e6c21 ("net/ice: support Rx/Tx burst mode info")
Fixes: 7fe108edcf ("app/testpmd: show Rx/Tx burst mode description")

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ray Kinsella <ray.kinsella@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-11-08 23:15:04 +01:00
Harry van Haaren
cc46d3d368 net/i40e: support flow director on SSE Rx
This commit adds flow director support to the SSE vector implementation
of the Rx routine and moves some common defines from a C file to the
header file.

i40e can use 16-byte or 32-byte descriptors, and the Flow
Director ID data and indication bit are in different locations
for each descriptor size. The support is implemented in two
separate functions as they require vastly different operations.

The 16B descriptor re-purposes the "filter-status" u32 field
to indicate FDIR ID when the FLM bit is set. No extra loads
are required, however we do have to store to mbuf->fdir.hi,
which is not stored to in the RX path before this patch.

The 32B descriptor requires loading the 2nd 16 bytes of each
descriptor, to get the FLEXBH_STAT and FD Filter ID from
qword3. The resulting data must also be stored to mbuf->fdir.hi,
same as the 16B code path.

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Mesut Ali Ergin <mesut.a.ergin@intel.com>
2019-10-23 16:43:10 +02:00
Xiao Zhang
01c12d247e net/i40e: fix integer overflow
When configuring an i40e Rx queue, the temporary variable used to store
the max packet length is not big enough, which leads to an integer
overflow. This patch fixes the issue by removing the variable and using
the expression directly, since the variable is only used once.

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
2019-10-23 16:43:09 +02:00
Haiyue Wang
6b6609f68c net/i40e: support Rx/Tx burst mode info
Retrieve burst mode options according to the selected Rx/Tx burst
function name.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
2019-10-23 16:43:09 +02:00
Gavin Hu
3779a64ecd net/i40e: use relaxed and remove duplicate barrier
To guarantee the ordering of successive stores to CIO and MMIO memory,
the lighter-weight rte_io_wmb [1] can be used instead of rte_wmb, and since
the I40E_PCI_REG_WRITE API already includes an rte_io_wmb, this
explicit call can even be dropped.

[1] http://git.dpdk.org/dpdk/tree/lib/librte_eal/common/include/generic/rte_atomic.h#n98

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2019-10-08 12:14:30 +02:00
Peng Huang
ba277e3720 net/i40e: fix RSS hash update for X722 VF
This patch fixes an X722 VF problem where received packets do not carry
a HASH value.
1) The packet classifier type update should support the X722 VF, not
only the X722 PF.
2) The MAC type is invalid for the X722 VF when the packet classifier
type is set, so move that step to after the MAC type is set correctly.

Fixes: a286ebeb07 ("net/i40e: add dynamic mapping of SW flow types to HW pctypes")
Cc: stable@dpdk.org

Signed-off-by: Peng Huang <peng.huang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2019-07-24 17:16:29 +02:00
Ivan Malov
b51690ec69 mbuf: clarify outer offsets for non-tunnel packets
The default policy for offload-specific fields is that
they are undefined unless the corresponding offloads are
requested in mbuf ol_flags. This is also the case for outer
L2 and L3 length fields which must not be assumed to contain
zeros for non-tunnel packets. The patch clarifies this behaviour
in the comments and also adds appropriate checks to the PMDs which
do not check any tunnel-related offloads before using the said fields.
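
A sketch of the kind of guard the patch adds in PMDs, using the mbuf flag
names of that era: only trust the outer length fields when a tunnel Tx
offload was actually requested in ol_flags.

    #include <rte_mbuf.h>

    /* Total L2 bytes preceding the (inner) L3 header for checksum/context
     * setup; outer_l2_len/outer_l3_len are undefined for non-tunnel packets. */
    static inline uint32_t
    tx_l2_prefix_len(const struct rte_mbuf *m)
    {
        if (m->ol_flags & PKT_TX_TUNNEL_MASK)
            return m->outer_l2_len + m->outer_l3_len + m->l2_len;
        return m->l2_len;
    }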

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2019-07-01 17:15:01 +02:00
Andrew Rybchenko
39be55654e net/i40e: fix Tx prepare to set positive rte_errno
Fixes: 3f33e643e5 ("net/i40e: add Tx preparation")
Fixes: bfeed0262b ("net/i40e: check illegal packets")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-06-13 23:54:30 +09:00
David Harton
02ad704708 net/i40e: eliminate weak symbols in data path
Use of weak symbols can hide makefile errors, especially when custom
makefiles are used. Remove the use of weak symbols to avoid a stub
function being linked into production code.

Signed-off-by: David Harton <dharton@cisco.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2019-06-13 23:54:29 +09:00
Olivier Matz
e73e3547ce net: add rte prefix to UDP structure
Add 'rte_' prefix to structures:
- rename struct udp_hdr as struct rte_udp_hdr.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-05-24 13:34:46 +02:00
Olivier Matz
f41b5156fe net: add rte prefix to TCP structure
Add 'rte_' prefix to structures:
- rename struct tcp_hdr as struct rte_tcp_hdr.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-05-24 13:34:46 +02:00
Olivier Matz
09d9ae1ac9 net: add rte prefix to SCTP structure
Add 'rte_' prefix to structures:
- rename struct sctp_hdr as struct rte_sctp_hdr.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-05-24 13:34:46 +02:00
Olivier Matz
35b2d13fd6 net: add rte prefix to ether defines
Add 'RTE_' prefix to defines:
- rename ETHER_ADDR_LEN as RTE_ETHER_ADDR_LEN.
- rename ETHER_TYPE_LEN as RTE_ETHER_TYPE_LEN.
- rename ETHER_CRC_LEN as RTE_ETHER_CRC_LEN.
- rename ETHER_HDR_LEN as RTE_ETHER_HDR_LEN.
- rename ETHER_MIN_LEN as RTE_ETHER_MIN_LEN.
- rename ETHER_MAX_LEN as RTE_ETHER_MAX_LEN.
- rename ETHER_MTU as RTE_ETHER_MTU.
- rename ETHER_MAX_VLAN_FRAME_LEN as RTE_ETHER_MAX_VLAN_FRAME_LEN.
- rename ETHER_MAX_VLAN_ID as RTE_ETHER_MAX_VLAN_ID.
- rename ETHER_MAX_JUMBO_FRAME_LEN as RTE_ETHER_MAX_JUMBO_FRAME_LEN.
- rename ETHER_MIN_MTU as RTE_ETHER_MIN_MTU.
- rename ETHER_LOCAL_ADMIN_ADDR as RTE_ETHER_LOCAL_ADMIN_ADDR.
- rename ETHER_GROUP_ADDR as RTE_ETHER_GROUP_ADDR.
- rename ETHER_TYPE_IPv4 as RTE_ETHER_TYPE_IPv4.
- rename ETHER_TYPE_IPv6 as RTE_ETHER_TYPE_IPv6.
- rename ETHER_TYPE_ARP as RTE_ETHER_TYPE_ARP.
- rename ETHER_TYPE_VLAN as RTE_ETHER_TYPE_VLAN.
- rename ETHER_TYPE_RARP as RTE_ETHER_TYPE_RARP.
- rename ETHER_TYPE_QINQ as RTE_ETHER_TYPE_QINQ.
- rename ETHER_TYPE_ETAG as RTE_ETHER_TYPE_ETAG.
- rename ETHER_TYPE_1588 as RTE_ETHER_TYPE_1588.
- rename ETHER_TYPE_SLOW as RTE_ETHER_TYPE_SLOW.
- rename ETHER_TYPE_TEB as RTE_ETHER_TYPE_TEB.
- rename ETHER_TYPE_LLDP as RTE_ETHER_TYPE_LLDP.
- rename ETHER_TYPE_MPLS as RTE_ETHER_TYPE_MPLS.
- rename ETHER_TYPE_MPLSM as RTE_ETHER_TYPE_MPLSM.
- rename ETHER_VXLAN_HLEN as RTE_ETHER_VXLAN_HLEN.
- rename ETHER_ADDR_FMT_SIZE as RTE_ETHER_ADDR_FMT_SIZE.
- rename VXLAN_GPE_TYPE_IPV4 as RTE_VXLAN_GPE_TYPE_IPV4.
- rename VXLAN_GPE_TYPE_IPV6 as RTE_VXLAN_GPE_TYPE_IPV6.
- rename VXLAN_GPE_TYPE_ETH as RTE_VXLAN_GPE_TYPE_ETH.
- rename VXLAN_GPE_TYPE_NSH as RTE_VXLAN_GPE_TYPE_NSH.
- rename VXLAN_GPE_TYPE_MPLS as RTE_VXLAN_GPE_TYPE_MPLS.
- rename VXLAN_GPE_TYPE_GBP as RTE_VXLAN_GPE_TYPE_GBP.
- rename VXLAN_GPE_TYPE_VBNG as RTE_VXLAN_GPE_TYPE_VBNG.
- rename ETHER_VXLAN_GPE_HLEN as RTE_ETHER_VXLAN_GPE_HLEN.

Do not update the command line library to avoid adding a dependency to
librte_net.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-05-24 13:34:45 +02:00
Qi Zhang
1c5c7c91d3 net/i40e: fix Tx threshold setup
A Tx descriptor's DD status is not cleared by the NIC automatically
after packets have been transmitted; it stays set until software refills
a new packet in a later loop. So when tx_free_thresh + tx_rs_thresh >
nb_desc, it is possible that an outdated DD status is read at
tx_next_dd, and a segmentation fault then happens due to freeing a NULL
mbuf pointer.

The patch fixes this issue by:
1. trying to adapt tx_rs_thresh to an aggressive tx_free_thresh;
2. failing queue setup when tx_free_thresh + tx_rs_thresh > nb_desc
   (see the sketch below).
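
A sketch of the setup-time guard from point 2 above; the error handling is
simplified and the function name is illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Reject a Tx queue configuration whose thresholds could let an outdated
     * DD status be trusted: the two thresholds must fit within the ring. */
    static int
    check_tx_thresh(uint16_t nb_desc, uint16_t tx_free_thresh,
                    uint16_t tx_rs_thresh)
    {
        if ((uint32_t)tx_free_thresh + tx_rs_thresh > nb_desc) {
            printf("tx_free_thresh (%u) + tx_rs_thresh (%u) exceeds nb_desc (%u)\n",
                   tx_free_thresh, tx_rs_thresh, nb_desc);
            return -1;
        }
        return 0;
    }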

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2019-05-21 18:40:06 +02:00
Bruce Richardson
3f1b8bf913 net/i40e: fix dereference before null check in mbuf release
Coverity flags that the txq variable is used before it is checked for NULL.
Also fix a typo in the error message.

Coverity issue: 195023
Fixes: 24853544c8 ("net/i40e: fix mbuf free in vector Tx")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Rami Rosen <ramirose@gmail.com>
2019-04-23 00:15:10 +02:00
Qi Zhang
756cea41e9 net/i40e: fix scattered Rx enabling
There is no need to add the VLAN tag size to the max packet size, since
for i40e the queue's Rx max frame size (rxq->max_pkt_len) already
includes the VLAN header size.

Fixes: a3c83a2527 ("net/i40e: enable runtime queue setup")
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: c1715402df ("i40evf: fix jumbo frame support")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2019-04-12 11:02:02 +02:00
Rami Rosen
3fcde631cb net/i40e: fix config name in comment
This patch fixes I40E RxTx module to use the proper config setting,
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR.

Fixes: 9ed94e5bb0 ("i40e: add vector Rx")
Cc: stable@dpdk.org

Signed-off-by: Rami Rosen <ramirose@gmail.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-12-21 16:22:41 +01:00
Zhirun Yan
ef8b7c505f net/i40e: remove redundant reset of queue number
Before this patch, two functions call i40e_dev_free_queues to free the
queues. For rte_eth_dev_close() the queue-number reset is redundant
because it is a duplication. For rte_eth_dev_reset() it is redundant
because it is unnecessary: dev_configure is required after dev_reset and
will update the value correctly.

This patch removes the redundant code from i40e_dev_free_queues().

Fixes: 6b45371283 ("i40e: free queue memory when closing")
Cc: stable@dpdk.org

Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-12-13 23:06:37 +01:00
Didier Pallard
e51da4bbaf net/i40e: revert fix offload not supported mask
This reverts
commit 09a62d7569 ("net/i40e: fix offload not supported mask").

Contrary to what is said in the above patch's commit log,
I40E_TX_OFFLOAD_NOTSUP_MASK is the mask of Tx offload bits that are part
of PKT_TX_OFFLOAD_MASK but not included in I40E_TX_OFFLOAD_MASK.
The above patch erroneously includes all PKT_RX_OFFLOAD_ bits in
I40E_TX_OFFLOAD_NOTSUP_MASK, which is not what is expected.
Restore the initial xor that gives the expected result.
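
A tiny numeric sketch of the relationship, with toy 8-bit masks standing in
for the real 64-bit PKT_TX_OFFLOAD_MASK and I40E_TX_OFFLOAD_MASK (the driver
mask being a subset of the full mask):

    #include <stdint.h>

    static const uint8_t all_tx_offloads = 0x0F;  /* stand-in full Tx mask */
    static const uint8_t drv_tx_offloads = 0x05;  /* stand-in driver mask, subset */

    /* Because the driver mask is a subset of the full mask, the xor is exactly
     * "part of the full mask but not supported by the driver": */
    static const uint8_t notsup_xor    = 0x0F ^ 0x05;            /* 0x0A */
    static const uint8_t notsup_andnot = (uint8_t)(0x0F & ~0x05); /* 0x0A too */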

Fixes: 09a62d7569 ("net/i40e: fix offload not supported mask")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-12-13 18:17:42 +00:00
Beilei Xing
054d1be48c net/i40e: fix Rx instability with vector mode
Previously, vector Rx was unstable if the descriptor number was not a
power of 2: the process could hang and some Rx packets were unexpectedly
empty. That is because vector Rx mode assumes the Rx descriptor number
is a power of 2 when applying the bit mask.
This patch allows vector mode only when the number of Rx descriptors is
a power of 2.
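
A sketch of the gate, using the EAL helper rte_is_power_of_2(); the wrapper
name is made up.

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_common.h>

    /* Vector Rx relies on (ring_size - 1) working as a wrap-around bit mask,
     * so it is only safe when the descriptor count is a power of two. */
    static bool
    rx_vec_allowed(uint16_t nb_rx_desc)
    {
        return rte_is_power_of_2(nb_rx_desc);
    }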

Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Fixes: a3c83a2527 ("net/i40e: enable runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2018-11-05 16:50:14 +01:00
Xiaolong Ye
09a62d7569 net/i40e: fix offload not supported mask
Just as the name I40E_TX_OFFLOAD_NOTSUP_MASK indicates, it should be the
mask of unsupported features (either not in PKT_TX_OFFLOAD_MASK or in
I40E_TX_OFFLOAD_MASK); however, xor does not give the desired result
here. Assume bit 0 of PKT_TX_OFFLOAD_MASK and I40E_TX_OFFLOAD_MASK is 0,
meaning the corresponding feature is not supported on either side: then
bit 0 of I40E_TX_OFFLOAD_NOTSUP_MASK computed via xor is 0, implying the
feature is supported, which does not meet our expectation.

Correct it by a NOT-AND operation.

Fixes: 3f33e643e5 ("net/i40e: add Tx preparation")
Cc: stable@dpdk.org

Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-11-05 15:01:25 +01:00
Beilei Xing
e4845bb4a6 net/i40e: update Tx offload mask
The Tx offload mask was updated in the following commit:
commit 1037ed842c ("mbuf: fix Tx offload mask").

Currently, the newly added offload flags are not supported in the PMD,
and applications will fail when calling the PMD transmit prepare
function. This patch updates the PMD Tx offload mask.

Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-10-26 22:14:06 +02:00
Keith Wiles
81bede55e3 eal: add macro for attribute weak
eal: add shorthand __rte_weak macro
qat: update code to use __rte_weak macro
avf: update code to use __rte_weak macro
fm10k: update code to use __rte_weak macro
i40e: update code to use __rte_weak macro
ixgbe: update code to use __rte_weak macro
mlx5: update code to use __rte_weak macro
virtio: update code to use __rte_weak macro
acl: update code to use __rte_weak macro
bpf: update code to use __rte_weak macro
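
A sketch of the macro's use (the function is my own example, not one from the
tree): a weak default definition that an arch- or build-specific object with
the same symbol name overrides at link time.

    #include <rte_common.h>

    /* Generic fallback, marked weak so a specialized implementation with the
     * same name takes precedence when it is linked in. */
    __rte_weak int
    fast_copy(void *dst, const void *src, unsigned int len)
    {
        const unsigned char *s = src;
        unsigned char *d = dst;

        while (len--)
            *d++ = *s++;
        return 0;
    }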

Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2018-10-25 02:11:23 +02:00
Ferruh Yigit
b2fd027389 mbuf: clarify QinQ flag usage
Update the implementation so that when the PKT_RX_QINQ_STRIPPED mbuf
ol_flag is set by a PMD, PKT_RX_QINQ, PKT_RX_VLAN_STRIPPED and
PKT_RX_VLAN are also set.

Clarify in the mbuf documentation that when PKT_RX_QINQ is set,
PKT_RX_VLAN should also be set, so that applications can rely on the
PKT_RX_QINQ flag to access both mbuf.vlan_tci and mbuf.vlan_tci_outer.
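
A sketch of how an application can now read both tags, using the flag names
of that era; the helper itself is illustrative.

    #include <rte_mbuf.h>

    /* With the clarified semantics, PKT_RX_QINQ alone guarantees that both
     * VLAN TCI fields are valid. */
    static void
    read_vlan_tags(const struct rte_mbuf *m, uint16_t *inner, uint16_t *outer)
    {
        if (m->ol_flags & PKT_RX_QINQ) {
            *outer = m->vlan_tci_outer;
            *inner = m->vlan_tci;   /* PKT_RX_VLAN is guaranteed too */
        } else if (m->ol_flags & PKT_RX_VLAN) {
            *inner = m->vlan_tci;
            *outer = 0;
        } else {
            *inner = 0;
            *outer = 0;
        }
    }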

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-10-11 18:53:49 +02:00
Xiaoyun Li
c11f654042 net/i40e: add option to use latest vector path
For IA, the AVX2 vector path is only recommended on later platforms
(identified by AVX512 support, like SKL etc.). This is because
performance benchmarks show a downgrade when running the AVX2 vector
path on early platforms (BDW/HSW) in some cases, although we still
observe a perf gain with some real workloads.

So this patch introduces the new devarg use-latest-supported-vec to
force the driver to always select the latest supported vector path.
Apps are then able to take the AVX2 path on early platforms, and this
logic can be re-used if an AVX512 vector path is added in the future.

This patch only affects IA platforms. The selected vector path would be
as follows:
  Without devarg/devarg = 0:
  Machine	vPMD
  AVX512F	AVX2
  AVX2	SSE4.2
  SSE4.2	SSE4.2
  <SSE4.2	Not Supported

  With devarg = 1
  Machine	vPMD
  AVX512F	AVX2
  AVX2	AVX2
  SSE4.2	SSE4.2
  <SSE4.2	Not Supported

Other platforms can also apply the same logic if necessary in future.

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-09-28 01:41:01 +02:00
Ferruh Yigit
323e7b667f ethdev: make default behavior CRC strip on Rx
Removed DEV_RX_OFFLOAD_CRC_STRIP offload flag.
Without any specific Rx offload flag, default behavior by PMDs is to
strip CRC.

PMDs that support keeping CRC should advertise DEV_RX_OFFLOAD_KEEP_CRC
Rx offload capability.

Applications that require keeping the CRC should check the PMD
capability first and, if it is supported, enable this feature by setting
DEV_RX_OFFLOAD_KEEP_CRC in the Rx offload flags in
rte_eth_dev_configure().
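
A sketch of that application-side sequence, using the flag and API names of
that release; error handling is trimmed (rte_eth_dev_info_get() returned void
at the time).

    #include <rte_ethdev.h>

    /* Enable CRC keeping only if the port advertises the capability. */
    static void
    configure_keep_crc(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
            conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
    }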

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jan Remes <remes@netcope.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2018-09-14 20:08:41 +02:00
Yanglong Wu
014a48de60 net/i40e: fix maximum frame size check
Check the packet size according to whether TSO is enabled or not.

Fixes: bfeed0262b ("net/i40e: check illegal packets")

Signed-off-by: Yanglong Wu <yanglong.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2018-07-30 03:01:30 +02:00
Ferruh Yigit
74b7496b0c net/i40e: remove redundant queue id checks
Remove the queue id checks from the dev_ops
i40e_dev_[rx/tx]_queue_[start/stop] and
i40evf_dev_[rx/tx]_queue_[start/stop].

The queue id checks are already done by the ethdev APIs that call these
dev_ops.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-07-23 23:55:26 +02:00
Shaopeng He
e062bdc6cb net/i40e: fix Tx queue setup after stop
Currently, i40e_dev_tx_queue_setup_runtime checks for simple Tx and
treats the mbuf fast free offload as non-simple, whereas it is classified
as simple Tx in i40e_set_tx_function_flag. This inconsistent behavior
causes Tx queue setup to fail after the queue was stopped. This patch
fixes this bug.

Fixes: 399421100e ("net/i40e: fix missing mbuf fast free offload")
Cc: stable@dpdk.org

Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-07-23 23:55:26 +02:00
Ferruh Yigit
70815c9eca ethdev: add new offload flag to keep CRC
The DEV_RX_OFFLOAD_KEEP_CRC offload flag is added. PMDs that support
keeping the CRC should advertise this offload capability.

The DEV_RX_OFFLOAD_CRC_STRIP flag will remain for one more release; the
default behavior in PMDs is to keep the CRC until this flag is removed.

Until the DEV_RX_OFFLOAD_CRC_STRIP flag is removed:
- Setting both KEEP_CRC & CRC_STRIP is INVALID
- Setting only CRC_STRIP: the PMD should strip the CRC
- Setting only KEEP_CRC: the PMD should keep the CRC
- Setting neither: the PMD should keep the CRC

A helper function rte_eth_dev_is_keep_crc() has been added to make it
possible to change the no-flag behavior with minimal changes in PMDs.

PMDs that do not report the DEV_RX_OFFLOAD_KEEP_CRC offload can remove
the rte_eth_dev_is_keep_crc() checks in the next release; the related
code is commented to help with that maintenance task.

DEV_RX_OFFLOAD_CRC_STRIP has also been added to the virtual drivers:
since they don't use the CRC at all, virtual PMDs should not return an
error when an application requests this offload.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2018-07-03 01:35:58 +02:00
Yanglong Wu
bfeed0262b net/i40e: check illegal packets
Some illegal packets lead to a Tx/Rx hang that cannot recover
automatically. This patch checks for those illegal packets and protects
Tx/Rx from hanging.

Signed-off-by: Yanglong Wu <yanglong.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2018-07-03 01:35:58 +02:00
Qi Zhang
399421100e net/i40e: fix missing mbuf fast free offload
Expose the missing mbuf fast free capability since i40e does
support it.

Fixes: 7497d3e2f7 ("net/i40e: convert to new Tx offloads API")

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:31:53 +01:00
Wei Dai
a4996bd89c ethdev: new Rx/Tx offloads API
This patch checks whether a requested offload is valid or not.
Any requested offload must be supported in the device capabilities.
Any offload is disabled by default if it is not set in the parameter
dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() and
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If any offload is enabled in rte_eth_dev_configure() by the application,
it is enabled on all queues no matter whether it is per-queue or
per-port type and no matter whether it is set or cleared in
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If a per-queue offload hasn't been enabled in rte_eth_dev_configure(),
it can be enabled or disabled for an individual queue in
rte_eth_[rt]x_queue_setup().
A newly added offload is one which hasn't been enabled in
rte_eth_dev_configure() and is requested to be enabled in
rte_eth_[rt]x_queue_setup(); it must be of per-queue type,
otherwise an error log is triggered.
The underlying PMD must be aware that the offloads passed
to the PMD-specific queue_setup() function only carry those
newly added offloads of per-queue type.

This patch performs such checking in a common way in the rte_ethdev
layer to avoid duplicating the same checks in the underlying PMDs.

This patch assumes that all PMDs in 18.05-rc2 have already been
converted to the offload API defined in 17.11. It also assumes
that all PMDs can return correct offload capabilities
in rte_eth_dev_infos_get().

In the beginning of [rt]x_queue_setup() of the underlying PMD,
add offloads = [rt]xconf->offloads |
dev->data->dev_conf.[rt]xmode.offloads; to keep the same behavior as the
offload API defined in 17.11 and avoid breaking upper applications due
to the offload API change.
A PMD can use the fact that the input [rt]xconf->offloads only carry
the newly added per-queue offloads to do some optimization or code
change on top of this patch.
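
A sketch of the recommended snippet at the top of a PMD's queue setup, as
described above; the helper name is made up, and the structs are the public
ethdev configuration types.

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Merge the per-port offloads configured at dev_configure time with the
     * per-queue offloads passed to queue setup, so the rest of the setup code
     * sees the full 17.11-style offload set. */
    static uint64_t
    effective_rx_offloads(const struct rte_eth_rxconf *rx_conf,
                          const struct rte_eth_conf *dev_conf)
    {
        return rx_conf->offloads | dev_conf->rxmode.offloads;
    }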

Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:31:51 +01:00
Qi Zhang
aed545af1b net/i40e: fix Tx queue info get
Add missing Tx queue offload assignment in i40e_txq_info_get.

Fixes: 7497d3e2f7 ("net/i40e: convert to new Tx offloads API")

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:26:36 +01:00