95 Commits

Author SHA1 Message Date
Simei Su
f9c561ffbc net/ice: fix performance for Rx timestamp
In the Rx data path, the driver reads hardware registers per packet,
resulting in a big performance drop. This patch improves performance in
two ways:
(1) replace the per-packet hardware register read with a per-burst read.
(2) reduce the number of hardware register reads from 3 to 2 when the low
    32-bit time value is not close to overflow.

This patch also refines the "ice_timesync_read_rx_timestamp" and
"ice_timesync_read_tx_timestamp" APIs, in which
"ice_tstamp_convert_32b_64b" is also used.

Fixes: 953e74e6b73a ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c701 ("net/ice: support IEEE 1588 PTP")

Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-01 02:21:10 +01:00
Dapeng Yu
20b631efe7 net/ice: fix function pointer in multi-process
This patch stores the index of the selected Receive Flex Descriptor
profile ID and uses it to look up the function, instead of saving the
function pointer itself.

Otherwise the secondary process would run with a wrong function address
from the primary process.
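
For illustration, a minimal sketch of the multi-process-safe pattern (the
names below are hypothetical, not the driver's): only an index is kept in
shared data, and each process resolves the function address from its own
copy of a static table.

    #include <stdint.h>

    typedef void (*rxd_to_mbuf_fn)(const void *rxd, void *mb);

    static void rxd_profile_generic(const void *rxd, void *mb) { (void)rxd; (void)mb; }
    static void rxd_profile_comms(const void *rxd, void *mb)   { (void)rxd; (void)mb; }

    /* Per-process table: the addresses differ between primary and secondary. */
    static const rxd_to_mbuf_fn rxd_handlers[] = {
        rxd_profile_generic,
        rxd_profile_comms,
    };

    /* Lives in shared memory: storing the index is safe, storing a function
     * pointer is not. */
    struct shared_queue_cfg {
        uint32_t rxd_handler_idx;
    };

    static inline void
    extract_rxd_fields(const struct shared_queue_cfg *cfg, const void *rxd, void *mb)
    {
        rxd_handlers[cfg->rxd_handler_idx](rxd, mb);
    }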

Fixes: 7a340b0b4e03 ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-27 05:29:39 +02:00
Olivier Matz
daa02b5cdd mbuf: add namespace to offload flags
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
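
For example, application code migrates roughly like this (assuming the
renamed flags follow the RTE_MBUF_F_ pattern introduced here):

    #include <rte_mbuf.h>

    static inline void
    request_l3_l4_cksum(struct rte_mbuf *m)
    {
        /* Old names (still usable, but warn at compilation):
         *     m->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
         * New namespaced names: */
        m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM;
    }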

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-24 13:37:43 +02:00
Ferruh Yigit
295968d174 ethdev: add namespace
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The backward-compatibility macros can be removed in the next LTS.
Also update some struct names to have the 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
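
For example, typical application configuration migrates along these lines
(an illustrative subset of the renamed identifiers):

    #include <rte_ethdev.h>

    /* Was: ETH_MQ_RX_RSS, DEV_RX_OFFLOAD_CHECKSUM, ETH_MQ_TX_NONE. */
    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,
            .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
        },
        .txmode = {
            .mq_mode = RTE_ETH_MQ_TX_NONE,
        },
    };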

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-22 18:15:38 +02:00
Ferruh Yigit
ede6356582 drivers/net: fix removing jumbo offload flag
After the DEV_RX_OFFLOAD_JUMBO_FRAME flag was removed, drivers make jumbo
frame decisions based on MTU value checks, but some of the checks were
wrong by mistake, causing device initialization to fail. Fix them.

Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
2021-10-22 17:44:18 +02:00
Ferruh Yigit
b563c14212 ethdev: remove jumbo offload flag
Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.

Instead of drivers announcing this capability, the application can deduce
it by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.

And instead of the application setting this flag explicitly to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' to
'RTE_ETHER_MTU'.

Remove this additional configuration for simplification.
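
As an illustrative sketch of the driver-side deduction (simplified, not the
code of any particular PMD):

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ether.h>

    /* A driver no longer checks DEV_RX_OFFLOAD_JUMBO_FRAME; it can simply
     * compare the requested MTU against the standard Ethernet MTU. */
    static bool
    mtu_requires_jumbo(uint16_t mtu)
    {
        return mtu > RTE_ETHER_MTU;     /* RTE_ETHER_MTU == 1500 */
    }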

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2021-10-18 19:20:21 +02:00
Ferruh Yigit
1bb4a528c4 ethdev: fix max Rx packet length
There is confusion about setting the max Rx packet length; this patch aims
to clarify it.

The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct rte_eth_conf'.

The 'rte_eth_dev_set_mtu()' API can also be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.

These two APIs are related but work in a disconnected way: they store the
set values in different variables, which makes it hard to figure out which
one to use, and having two different methods for related functionality is
confusing for users.

Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet frame,
  while 'max_rx_pkt_len' is the size of the whole Ethernet frame. The
  difference is the Ethernet frame overhead, and this overhead may differ
  from device to device based on what the device supports, like VLAN and
  QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds additional confusion, and some APIs and PMDs already
  discard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory field,
  which adds configuration complexity for the application.

As a solution, both APIs take the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is replaced by 'mtu', and it is always valid, independent
of the jumbo frame setting.

For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function, and the
result should be stored in '(struct rte_eth_dev)->data->mtu'. After that
point both the application and the PMD use the MTU from this variable.

When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.

Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs use
the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to enable
scattered Rx. If scattered Rx is not supported by the device, an MTU bigger
than the Rx buffer size should fail.
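
As a worked example of the MTU/frame-size relationship described above (a
sketch using the standard rte_ether constants; the exact overhead is device
specific):

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ether.h>

    /* Frame size = MTU + L2 overhead. For plain Ethernet plus CRC:
     *     1500 + 14 + 4 = 1518 bytes (RTE_ETHER_MAX_LEN).
     * A device supporting two VLAN tags adds 2 * 4 more bytes of overhead. */
    static uint32_t
    mtu_to_frame_size(uint16_t mtu, bool dual_vlan)
    {
        uint32_t overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        if (dual_vlan)
            overhead += 2 * RTE_VLAN_HLEN;
        return mtu + overhead;
    }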

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-10-18 19:20:20 +02:00
Simei Su
250e2ed8d8 net/ice: fix build when Rx descriptor size is 16
The Timestamp Overlay feature is available only with 32B Flex Descriptors.
This patch adds a compile-time guard for builds that use 16B Flex
Descriptors.

Fixes: 953e74e6b73a ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c701 ("net/ice: support IEEE 1588 PTP")

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-13 12:33:17 +02:00
Konstantin Ananyev
8d7d4fcdca ethdev: change input parameters for Rx queue count
Currently the majority of fast-path ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends it would be
plausible to unify the parameter list of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the ethdev public inline
function rte_eth_rx_queue_count().
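
Sketched below (signatures abridged from memory, so treat them as an
approximation rather than the exact header contents); the public
rte_eth_rx_queue_count(port_id, queue_id) keeps its signature:

    /* Before, the driver callback received the device and a queue index:
     *
     *     typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
     *                                              uint16_t rx_queue_id);
     *
     * After this change it receives the internal queue object directly,
     * like the other fast-path ops: */
    typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);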

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
2021-10-13 22:14:58 +02:00
Yunjian Wang
d3778bf39a net/ice: fix memzone leak on queue re-configure
Normally, when closing the device, the queue memzone should be freed. But
the memzone is not freed when the device setup ops run like:

rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ice_rx_queue_release
rte_eth_dev_close
-->ice_dev_close
---->ice_free_queues
------>ice_rx_queue_release
      (not been called due to nb_rx_queues and nb_tx_queues are 0)

And when the queue number is reduced, the memzone of the larger-index
queue will be lost. This leads to a memory leak. So the memzone should be
released when the queue is released.
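
A simplified sketch of the idea (the structure and field names are
hypothetical): free the ring memzone together with the queue, so a later
reconfiguration to fewer queues cannot strand it.

    #include <rte_malloc.h>
    #include <rte_memzone.h>

    /* Hypothetical queue structure owning its HW ring memzone. */
    struct sketch_rx_queue {
        const struct rte_memzone *mz;   /* descriptor ring memory */
        /* ... other queue state ... */
    };

    static void
    sketch_rx_queue_release(struct sketch_rx_queue *q)
    {
        if (q == NULL)
            return;
        rte_memzone_free(q->mz);        /* free the ring, not only the struct */
        rte_free(q);                    /* assumes q came from rte_zmalloc() */
    }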

Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-07 13:38:16 +02:00
Xueming Li
7483341ae5 ethdev: change queue release callback
Currently, most ethdev callback APIs use a queue ID as the parameter, but
the Rx and Tx queue release callbacks use the queue object, which is what
the Rx and Tx burst data plane callbacks use.

To align with the other ethdev queue configuration callbacks (see the
sketch below):
- queue release callbacks are changed to use the queue ID
- all drivers are adapted
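
Sketch of the callback change (signatures abridged, as an approximation of
the ethdev driver header):

    /* Before, the release callback received only the queue object:
     *
     *     typedef void (*eth_queue_release_t)(void *queue);
     *
     * After this change it receives the device and the queue ID, in line
     * with the other queue configuration callbacks: */
    typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
                                        uint16_t queue_id);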

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-06 19:16:03 +02:00
Simei Su
646dcbe6c7 net/ice: support IEEE 1588 PTP
Add ice support for the new ethdev APIs to enable/disable and
read/write/adjust IEEE 1588 PTP timestamps. Currently, only the scalar path
supports 1588 PTP; the vector path doesn't.

The example command for running ptpclient is as below:
./build/examples/dpdk-ptpclient -c 1 -n 3 -- -T 0 -p 0x1

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-09-28 13:13:52 +02:00
Simei Su
953e74e6b7 net/ice: enable Rx timestamp on flex descriptor
Use dynamic mbuf fields to register the timestamp field and flag. The ice
hardware can dump the Rx timestamp value into a dynamic mbuf field via the
flex descriptor. This feature is turned on by the dev config
"enable-rx-timestamp". Currently, it's only supported in the scalar path.
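
On the application side, the registered field and flag can be retrieved
with the generic mbuf dynamic-field helpers, roughly like this (a sketch
assuming the rte_mbuf_dyn_rx_timestamp_register() helper from
rte_mbuf_dyn.h):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int timestamp_dynfield_offset = -1;
    static uint64_t timestamp_dynflag;

    /* Register (or look up) the Rx timestamp dynamic field and flag. */
    static int
    app_init_rx_timestamp(void)
    {
        return rte_mbuf_dyn_rx_timestamp_register(&timestamp_dynfield_offset,
                                                  &timestamp_dynflag);
    }

    /* Read the timestamp from a received mbuf, if the PMD filled it in. */
    static uint64_t
    app_get_rx_timestamp(struct rte_mbuf *m)
    {
        if ((m->ol_flags & timestamp_dynflag) == 0)
            return 0;
        return *RTE_MBUF_DYNFIELD(m, timestamp_dynfield_offset, uint64_t *);
    }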

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-09-28 05:33:41 +02:00
Anatoly Burakov
6afc4baf4f eal: use callbacks for power monitoring comparison
Previously, the semantics of power monitor were such that we were
checking current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.

This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define its own
comparison semantics and decide how to detect the need to abort entering
the power-optimized state.

Existing implementations are adjusted to follow the new semantics.

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
2021-07-09 21:13:13 +02:00
Wenzhuo Lu
214f452f7d net/ice: add AVX2 offload Rx
Add a specific path for Rx AVX2. This path supports HW offload features
such as checksum, VLAN stripping and RSS hash, and is chosen automatically
according to the configuration.

'inline' is used, so the duplicated code is generated by the compiler.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
2021-07-06 04:57:53 +02:00
Wenzhuo Lu
52ccdcf2fd net/ice: add AVX2 offload Tx
Add a specific path for Tx AVX2. This path supports HW offload features
such as checksum insertion and VLAN insertion, and is chosen automatically
according to the configuration.

'inline' is used, so the duplicated code is generated by the compiler.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
2021-07-06 04:57:33 +02:00
Tudor Cornea
58bb86cf13 net/ice: fix overflow in maximum packet length config
The len variable, used in the computation of max_pkt_len, could overflow
if it stores the result of the following computation:

ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len

Since the mbuf size can be defined with a large value (e.g. 13312) and
ICE_SUPPORT_CHAIN_NUM is defined as 5, the computation above could result
in a value bigger than MAX_USHORT.

The result is that jumbo frames will not work properly.
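
As a worked example: 5 * 13312 = 66560, which does not fit in 16 bits (max
65535) and silently wraps to 1024. A sketch of the kind of fix, widening
the intermediate type (not the exact patch):

    #include <stdint.h>
    #include <rte_common.h>

    #define ICE_SUPPORT_CHAIN_NUM 5     /* as stated above */

    static uint32_t
    compute_max_pkt_len(uint16_t rx_buf_len, uint32_t frame_size)
    {
        /* do the multiplication in 32 bits so it cannot wrap */
        uint32_t len = (uint32_t)ICE_SUPPORT_CHAIN_NUM * rx_buf_len;

        return RTE_MIN(len, frame_size);
    }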

Fixes: 1b009275e2c8 ("net/ice: add Rx queue init in DCF")
Cc: stable@dpdk.org

Signed-off-by: Tudor Cornea <tudor.cornea@keysight.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-07-04 16:57:12 +02:00
Qi Zhang
45f6a19f65 net/ice: fix data path in secondary process
The rte_eth_devices array is not in shared memory, so it should not be
referenced by ice_adapter, which is shared by the primary and secondary
processes. Any process that sets ice_adapter->eth_dev will corrupt another
process' context.

The patch removes the field "eth_dev" from ice_adapter. Now, when the data
paths access the rte_eth_dev_data instance, they use adapter->pf.dev_data
instead of adapter->eth_dev->data.

Fixes: f9cf4f864150 ("net/ice: support device initialization")
Cc: stable@dpdk.org

Reported-by: Yixue Wang <yixue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yixue Wang <yixue.wang@intel.com>
2021-06-10 12:04:16 +02:00
Qi Zhang
f8b3326f31 net/ice: fix data path selection in secondary process
The flags use_avx2 and use_avx512 are defined as local variables; they are
not visible to the secondary process, so the wrong data path is selected.
Fix the issue by moving them into struct ice_adapter.

Fixes: ae60d3c9b227 ("net/ice: support Rx AVX2 vector")
Fixes: 2d5f6953d56d ("net/ice: support vector AVX2 in Tx")
Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
Cc: stable@dpdk.org

Reported-by: Yixue Wang <yixue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yixue Wang <yixue.wang@intel.com>
2021-06-10 12:04:16 +02:00
Alvin Zhang
34ca5367d7 net/ice: fix Tx queue vector setup
If vector mode is not allowed for Tx, there is no need to perform
vector-related setup for the Tx queue.

The patch defers vector setup for the Tx queue to the point where vector
mode is confirmed to be allowed.

Fixes: 28f9002ab67f ("net/ice: add Tx AVX512 offload path")

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-05-10 04:58:55 +02:00
Leyi Rong
808a17b3c1 net/ice: add Rx AVX512 offload path
Split the AVX512 Rx data path into two: one is basic, the other supports
additional Rx offload features, including Rx checksum offload, Rx VLAN
offload and RSS offload.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
2021-04-16 12:44:27 +02:00
Leyi Rong
28f9002ab6 net/ice: add Tx AVX512 offload path
Add an alternative Tx data path for AVX512 which supports partial Tx
offload features, including Tx checksum offload and VLAN/QinQ insertion
offload.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
2021-04-16 12:43:49 +02:00
Qi Zhang
56175f74ea net/ice: refine debug build option
1. replace RTE_LIBRTE_ICE_DEBUG_RX with RTE_ETHDEV_DEBUG_RX.
2. replace RTE_LIBRTE_ICE_DEBUG_TX with RTE_ETHDEV_DEBUG_TX.
3. merge RTE_LIBRTE_ICE_DEBUG_TX_FREE and RTE_LIBRTE_ETHDEV_DEBUG
   into RTE_LIBRTE_DEBUG_TX

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-04-01 16:10:20 +02:00
Lance Richardson
e8a419d6de mbuf: rename outer IP checksum macro
Rename PKT_RX_EIP_CKSUM_BAD to PKT_RX_OUTER_IP_CKSUM_BAD and
deprecate the original name. The new name is better aligned
with existing PKT_RX_OUTER_* flags, which should help reduce
confusion about its use.

Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-03-02 10:57:28 +01:00
Bruce Richardson
df96fd0d73 ethdev: make driver-only headers private
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files are
for drivers only and should be private to DPDK and not installed.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
2021-01-29 20:59:09 +01:00
Anatoly Burakov
f400ea0b4c eal: rename power monitor condition member
The `data_sz` name is fine, but it looks out of place because nothing else
has a "data" prefix in that structure. Rename it to "size", and add more
clarity to the comments around each struct member.

Fixes: 6a17919b0e2a ("eal: change power intrinsics API")

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-01-29 15:29:48 +01:00
Steve Yang
14801aa3fb net/ice: fix jumbo frame flag condition
The jumbo frame check uses 'RTE_ETHER_MAX_LEN' as the boundary condition,
but the Ethernet overhead is larger than 18 bytes when dual VLAN tags are
supported. That causes the jumbo flag Rx offload to be set wrongly when the
MTU size is 'RTE_ETHER_MTU'.

This fix changes the boundary condition to 'RTE_ETHER_MTU' plus the
overhead, which may impact jumbo-frame-related cases.
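
Illustrative sketch of the boundary change (here "overhead" is the device's
full L2 overhead, which exceeds 18 bytes with dual VLAN tags):

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ether.h>

    /* Old check: frame_size > RTE_ETHER_MAX_LEN (1518), which wrongly flags
     * jumbo when MTU == RTE_ETHER_MTU (1500) but the overhead is > 18 bytes.
     * New check: compare against MTU plus the device-specific overhead. */
    static bool
    is_jumbo_frame(uint32_t frame_size, uint32_t dev_overhead)
    {
        return frame_size > RTE_ETHER_MTU + dev_overhead;
    }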

Fixes: 84dc7a95a2d3 ("net/ice: enable flow director engine")
Fixes: 1b009275e2c8 ("net/ice: add Rx queue init in DCF")
Fixes: ae2bdd0219cb ("net/ice: support MTU setting")
Fixes: 50370662b727 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org

Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-01-19 03:30:15 +01:00
Jeff Guo
1d37eac2f9 net/ice: add PTYPE mapping for eCPRI
Until the new eCPRI PTYPE is added into the RTE lib, just map eCPRI to the
PTYPE RTE_PTYPE_L4_UDP in the ice PMD.

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-01-19 03:30:14 +01:00
Liang Ma
dc8ac7980c net/ice: implement power management API
Implement support for the power management API by implementing a
`get_monitor_addr` function that will return an address of an RX ring's
status bit.

Signed-off-by: Liang Ma <liang.j.ma@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2021-01-19 00:00:30 +01:00
Murphy Yang
75c6287f25 net/ice: fix outer checksum flags
When tunneled packets are received, the testpmd output log shows that the
'ol_flags' value is always 'PKT_RX_OUTER_L4_CKSUM_UNKNOWN', but the
expected value is 'PKT_RX_OUTER_L4_CKSUM_GOOD' or
'PKT_RX_OUTER_L4_CKSUM_BAD'.

Add 'PKT_RX_OUTER_L4_CKSUM_GOOD' and 'PKT_RX_OUTER_L4_CKSUM_BAD' to
'flags' for the normal path, to 'l3_l4_flags_shuf' for the AVX2 and AVX512
vector paths, and to 'cksum_flags' for the SSE vector path, to ensure that
'ol_flags' matches the correct flags.

Fixes: dbf3c0e77a22 ("net/ice: handle Rx flex descriptor")
Fixes: 4ab7dbb0a0f6 ("net/ice: switch to Rx flexible descriptor in AVX path")
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Cc: stable@dpdk.org

Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-01-08 16:03:05 +01:00
Murphy Yang
2ed0117763 net/ice: fix outer UDP Tx checksum offload
If hardware outer UDP Tx checksum offload is enabled, it does not take
effect when an 'IPv6/UDP/VXLAN' packet is sent with a wrong outer UDP
checksum.

To make it take effect, set the 'L4T_CS' flag valid only when 'L4TUNT'
equals one and 'EIPT' is not zero. If the 'L4T_CS' flag is set, the
hardware can calculate the outer tunneling UDP checksum.

Fixes: bd70c451532c ("net/ice: support Tx checksum offload for tunnel")
Cc: stable@dpdk.org

Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Tested-by: Wei Xie <weix.xie@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-01-08 16:03:04 +01:00
Haiyue Wang
98a181ed86 net/ice: fix DCF crash on Rx
The initialization that selects the handler for extracting the scalar Rx
path FlexiMD fields into the mbuf is missing, which causes a segmentation
fault (core dumped).

Also add the missing support for handling RXDID 16, which has the RSS hash
value on Qword 1.

Fixes: 7a340b0b4e03 ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org

Reported-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:35:02 +01:00
Leyi Rong
7f85d5ebcf net/ice: add AVX512 vector path
Add AVX512 support for the ice PMD. This patch adds ice_rxtx_vec_avx512.c
to support the ice AVX512 vPMD.

Compared with the AVX2 vPMD, the main changes focus on the Rx path.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:26 +01:00
Ciara Power
c3d9ed69ff net/ice: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be satisfied
to ensure the max SIMD bitwidth allows the CPU-enabled path to be used.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-19 16:45:02 +02:00
Radu Nicolau
ad6f7399d2 net/ice: use write combining store for tail updates
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.
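
Roughly, the tail update switches from a regular MMIO write to a
write-combining store (a sketch assuming the rte_write32_wc() helper from
rte_io.h):

    #include <stdint.h>
    #include <rte_byteorder.h>
    #include <rte_io.h>

    static inline void
    queue_update_tail(volatile uint32_t *tail_reg, uint16_t tail)
    {
        /* Regular MMIO write:
         *     rte_write32(rte_cpu_to_le_32(tail), tail_reg);
         * Write-combining store, cheaper on platforms that support it: */
        rte_write32_wc(rte_cpu_to_le_32(tail), tail_reg);
    }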

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2020-10-13 14:42:02 +02:00
Haiyue Wang
7a340b0b4e net/ice: refactor Rx FlexiMD handling
The hardware supports many kinds of FlexiMDs set into the Rx descriptor,
and the FlexiMDs can have different offsets in the descriptor according to
the DDP package setting.

The FlexiMD type and offset are identified by the RXDID, which is used to
set up the queue.

To allow supporting different RXDIDs in the future, refactor the Rx FlexiMD
handling into functions mapped to the related RXDIDs.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
2020-09-30 19:19:09 +02:00
Junyu Jiang
12443386a0 net/ice: support flex Rx descriptor RxDID22
This patch supports RxDID #22 with the following changes:
- add structure and macro definitions for RxDID #22.
- support the RxDID #22 format in the normal path.
- change RSS hash parsing to use the RxDID #22 format in the AVX/SSE data
  paths.

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
2020-09-18 18:55:11 +02:00
Qi Zhang
170d34ac43 net/ice/base: replace single-element array hack
Convert the pre-C90-extension "C struct hack" method (using a single-
element array at the end of a structure for implementing variable-length
types) to the preferred use of C99 flexible array member.
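
For example, the conversion generally looks like this (an illustrative
struct, not one from the ice base code):

    #include <stdint.h>
    #include <stdlib.h>

    /* Pre-C90-extension "struct hack":
     *
     *     struct entry_list {
     *         uint16_t count;
     *         struct entry entries[1];   // really variable length
     *     };
     *
     * C99 flexible array member: */
    struct entry {
        uint32_t id;
    };

    struct entry_list {
        uint16_t count;
        struct entry entries[];            /* flexible array member */
    };

    static struct entry_list *
    entry_list_alloc(uint16_t count)
    {
        struct entry_list *l =
            malloc(sizeof(*l) + count * sizeof(l->entries[0]));

        if (l != NULL)
            l->count = count;
        return l;
    }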

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2020-09-18 18:55:09 +02:00
Junfeng Guo
6982f1848f net/ice: support auxiliary IP offset Rx descriptor
Add RXDID #25 to support the auxiliary IP offset Rx descriptor, including:
    FlexiMD.4: Outer/Single IPv4 header offset
    FlexiMD.5: Outer/Single IPv6 header offset
The valid IP offset is parsed into the mbuf via the flexible descriptor
section, selected through the devargs "proto_xtr" with
"proto_xtr=ip_offset".

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2020-09-18 18:55:06 +02:00
Haiyue Wang
4339ea2979 net/ice: revert fake TSO fixes
The two fixes do not address the real root cause of the MDD event; they
only mitigate the failure rate in different test modes, so revert them.

Fixes: 2a0c9ae4f646 ("net/ice: fix TCP checksum offload")
Fixes: 7365a3cee51f ("net/ice: calculate TCP header size for offload")
Cc: stable@dpdk.org

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-08-05 19:23:33 +02:00
Haiyue Wang
8a72edd9cb net/ice: fix Tx hang with TSO
The variables 'td_offset' and 'td_tag' should be reset to 0 for every
packet in a burst, otherwise the Tx descriptor fields will be set wrongly;
this causes the MDD event error, and Tx will hang.

Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-08-05 19:23:31 +02:00
Haiyue Wang
7365a3cee5 net/ice: calculate TCP header size for offload
The ice hardware needs the exact TCP header size, including options, for
TCP checksum offload, but according to the PKT_TX_TCP_CKSUM note, l4_len is
not required to be set, so the driver needs to calculate the TCP header
size if it is not set.
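
For example, the exact TCP header length (including options) can be derived
from the TCP data offset field when l4_len is zero (a simplified sketch,
not the driver code):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* The TCP data offset field counts 32-bit words, so the header length
     * in bytes is (data_off >> 4) * 4, which already includes any options. */
    static uint16_t
    tcp_hdr_len(const struct rte_mbuf *m, uint16_t l2_l3_len)
    {
        const struct rte_tcp_hdr *th;

        if (m->l4_len != 0)
            return m->l4_len;       /* the application already provided it */

        th = rte_pktmbuf_mtod_offset(m, const struct rte_tcp_hdr *, l2_l3_len);
        return (th->data_off >> 4) * 4;
    }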

Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2020-07-30 00:41:24 +02:00
Haiyue Wang
2a0c9ae4f6 net/ice: fix TCP checksum offload
The L4LEN field of the Descriptor Header Offset for TCP should be the
real length including the TCP options.

Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-07-30 00:41:23 +02:00
Renata Saiakhova
460d167958 drivers/net: delete HW rings while freeing queues
Delete memzones for HW rings in igb and ixgbe while freeing queues

Updated igb, ixgbe, i40e, ice & em drivers.

Signed-off-by: Renata Saiakhova <renata.saiakhova@ekinops.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-07-11 06:18:54 +02:00
Qi Zhang
135ccbc6a7 net/ice/base: avoid undefined behavior
When writing the driver's struct ice_tlan_ctx structure, do not write the
8-bit element int_q_state with the associated internal-to-hardware field,
which is 122 bits wide; otherwise the helper function ice_write_byte() will
invoke undefined behavior when setting the mask used for that write. This
should not cause any functional change and avoids the use of undefined
behavior. Also, update a comment to highlight that this structure element
is not written.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2020-06-16 19:21:07 +02:00
Thomas Monjalon
ce6427ddca replace cold attributes
The new macro __rte_cold, for compiler hinting,
is now used where appropriate for consistency.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
2020-04-16 18:30:58 +02:00
Thomas Monjalon
33011cb3df replace always-inline attributes
There is a macro __rte_always_inline, forcing functions to be inlined,
which is now used where appropriate for consistency.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-04-16 18:16:46 +02:00
Qi Zhang
edec6dd838 net/ice: remove redundant functions
Remove the function ice_clear_queues, since all equivalent code has
already been executed during ice_rx|tx_queue_stop.

Also, the functions ice_rx|tx_queue_release_mbufs simply wrapped a function
pointer call, which is not necessary, so remove them.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-03-18 10:21:42 +01:00
Qi Zhang
af3f83032b net/ice: remove bulk alloc option
Remove CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC with the following
considerations:

1. A default Rx path can always be selected by setting a proper
   rx_free_thresh value at runtime, see
   ice_check_rx_burst_bulk_alloc_preconditions.

2. It's not a big deal to always reserve more space for the desc ring.
   "ring_size = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);"

3. Fixes a potential invalid memory access in ice_reset_rx_queue.
   If CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is turned on while
   ice_check_rx_burst_bulk_alloc_preconditions returns failure, the
   code below has a problem.

   for (i = 0; i < ICE_RX_MAX_BURST; ++i)
   	rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;

Fixes: 50370662b727 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-03-18 10:21:41 +01:00
Sunil Pai G
fe38133c42 net/ice: fix unchecked Tx cleanup error
Coverity raises an unchecked-return-value warning for ice_xmit_cleanup,
while this cleanup is opportunistic and will not cause problems if it
fails. So instead of checking the return value of ice_xmit_cleanup and
returning in case of cleanup failure, directly cast the call to void to
make Coverity happy.

Coverity issue: 353623
Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Sunil Pai G <sunil.pai.g@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-02-14 12:42:13 +01:00