In the Rx data path, hardware registers are read per packet, resulting in
a big performance drop. This patch improves performance in two ways:
(1) replace the per-packet hardware register read with a per-burst read.
(2) reduce the number of hardware register reads from 3 to 2 when the low
value of the time is not close to overflow.
Meanwhile, this patch refines the "ice_timesync_read_rx_timestamp" and
"ice_timesync_read_tx_timestamp" APIs, in which
"ice_tstamp_convert_32b_64b" is also used.
Fixes: 953e74e6b73a ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c701 ("net/ice: support IEEE 1588 PTP")
Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch uses an index value to call the function, instead of a saved
function pointer, to record the selected Receive Flex Descriptor profile
ID. Otherwise the secondary process would run with a wrong function
address taken from the primary process.
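The idea, as a minimal sketch (type and function names here are
hypothetical, not the actual driver symbols): keep a constant per-process
handler table and store only the index in the shared queue structure, so
every process resolves the handler in its own address space:

    #include <stdint.h>

    struct rx_queue {
        uint32_t rxdid;    /* profile index, lives in shared memory */
    };

    typedef void (*rxd_to_pkt_fields_t)(struct rx_queue *rxq, void *mb,
                                        const void *rxdp);

    static void
    handle_rxdid_generic(struct rx_queue *rxq, void *mb, const void *rxdp)
    {
        (void)rxq; (void)mb; (void)rxdp;    /* placeholder handler */
    }

    /* per-process table: addresses are valid in every process */
    static const rxd_to_pkt_fields_t rxd_handlers[] = {
        [0] = handle_rxdid_generic,
    };

    static inline void
    extract_fields(struct rx_queue *rxq, void *mb, const void *rxdp)
    {
        /* dispatch by index, not by a pointer saved by the primary */
        rxd_handlers[rxq->rxdid](rxq, mb, rxdp);
    }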
Fixes: 7a340b0b4e03 ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
After the DEV_RX_OFFLOAD_JUMBO_FRAME flag was removed, drivers make jumbo
frame decisions based on MTU value checks, but some of the checks were
wrong by mistake, causing device initialization to fail; fix them.
Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce
it by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable
jumbo frames, the driver can deduce it by comparing the requested 'mtu'
to 'RTE_ETHER_MTU'.
Remove this additional configuration for simplification.
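A minimal sketch of the deductions described above (illustrative only,
not a specific PMD's code):

    /* application side: jumbo capability deduced from dev_info */
    int jumbo_supported = dev_info.max_rx_pktlen >
                          RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

    /* driver side: "jumbo enabled" deduced from the requested MTU */
    int jumbo_enabled = dev->data->mtu > RTE_ETHER_MTU;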
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
The 'rte_eth_dev_set_mtu()' API can also be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way: they store the
set values in different variables, which makes it hard to figure out
which one to use, and having two different methods for related
functionality is confusing for users.
Other issues causing confusion are:
* the maximum transmission unit (MTU) is the payload of the Ethernet
frame, while 'max_rx_pkt_len' is the size of the whole Ethernet frame.
The difference is the Ethernet frame overhead, which may differ from
device to device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
frames, which adds additional confusion, and some APIs and PMDs already
disregard this documented behavior.
* for the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as a parameter, and both save the
result in the same variable '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is replaced by 'mtu', and it is always valid,
independent of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point both the application and the PMD use the MTU from this
variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
Additional clarification is provided on the scattered Rx configuration,
in relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation. The Rx buffer is where Rx packets are stored; many PMDs use
the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
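A sketch of the resulting relationships (illustrative; the overhead
macro, buffer size and scatter-capability names are per-PMD assumptions):

    /* frame size on the wire vs MTU; overhead is device specific,
     * e.g. Ethernet header + CRC + double VLAN
     */
    uint32_t frame_size = dev->data->mtu + PMD_ETH_OVERHEAD;

    /* scattered Rx decision: compare against the Rx buffer size,
     * typically derived from the mbuf data room
     */
    if (frame_size > rx_buf_size) {
        if (!dev_supports_scatter)
            return -EINVAL;    /* MTU too big for a single buffer */
        dev->data->scattered_rx = 1;
    }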
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The Timestamp Overlay feature is available only with 32B Flex
Descriptors. This patch guards the timestamp handling with the compile
option for the 16B Flex Descriptor case.
Fixes: 953e74e6b73a ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c701 ("net/ice: support IEEE 1588 PTP")
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Currently the majority of fast-path ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it would be
preferable to unify the parameter list of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().
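Roughly, the internal callback prototype changes as below (a sketch; see
the ethdev headers for the exact definition):

    /* before: needed the device pointer and a queue index */
    typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
                                             uint16_t rx_queue_id);

    /* after: takes the internal Rx queue pointer directly */
    typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);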
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
Normally, when closing the device, the queue memzone should be freed.
But the memzone will not be freed when the device setup ops run like:
rte_eth_bond_slave_remove
-->__eth_bond_slave_remove_lock_free
---->slave_remove
------>rte_eth_dev_internal_reset
-------->rte_eth_dev_rx_queue_config
---------->eth_dev_rx_queue_config
------------>ice_rx_queue_release
rte_eth_dev_close
-->ice_dev_close
---->ice_free_queues
------>ice_rx_queue_release
(not called because nb_rx_queues and nb_tx_queues are 0)
And when the number of queues is changed to a smaller size, the memzone
of the larger queue index will be lost. This leads to a memory leak. So
we should release the memzone when releasing queues.
Fixes: 460d1679586e ("drivers/net: delete HW rings while freeing queues")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Currently, most ethdev callback APIs use a queue ID as parameter, but
the Rx and Tx queue release callbacks use the queue object, which is also
used by the Rx and Tx burst data plane callbacks.
To align with other eth device queue configuration callbacks:
- queue release callbacks are changed to use queue ID
- all drivers are adapted
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add ice support for new ethdev APIs to enable/disable and read/write/adjust
IEEE1588 PTP timestamps. Currently, only the scalar path supports 1588
PTP; the vector path doesn't.
The example command for running ptpclient is as below:
./build/examples/dpdk-ptpclient -c 1 -n 3 -- -T 0 -p 0x1
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Use the dynamic mbuf API to register the timestamp field and flag.
The ice hardware can dump the Rx timestamp value into the dynamic mbuf
field via the flex descriptor. This feature is turned on by the dev
config option "enable-rx-timestamp". Currently, it is only supported on
the scalar path.
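The registration goes through the mbuf dynamic field API; a minimal
sketch of how a driver can obtain the field offset and flag (error
handling trimmed, wrapper function names are illustrative):

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int ts_dynfield_offset = -1;
    static uint64_t ts_dynflag;

    /* register the Rx timestamp dynamic field and flag once at setup */
    static int
    rx_timestamp_setup(void)
    {
        return rte_mbuf_dyn_rx_timestamp_register(&ts_dynfield_offset,
                                                  &ts_dynflag);
    }

    /* per packet: store the value and mark the flag in ol_flags */
    static void
    rx_timestamp_fill(struct rte_mbuf *mb, rte_mbuf_timestamp_t ts)
    {
        *RTE_MBUF_DYNFIELD(mb, ts_dynfield_offset,
                           rte_mbuf_timestamp_t *) = ts;
        mb->ol_flags |= ts_dynflag;
    }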
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Previously, the semantics of power monitor were such that the current
value was checked against the expected value, and if they matched, the
sleep was aborted. This is somewhat inflexible, because it only allowed
checking for a specific value in a specific way.
This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define its
own comparison semantics and decide how to detect the need to abort
entering the power-optimized state.
Existing implementations are adjusted to follow the new semantics.
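A rough sketch of a PMD-side callback under these new semantics, assuming
the monitor condition carries a callback that receives the value read
from the monitored address and returns -1 to abort the sleep (the
opaque-argument layout and names are assumptions; see
rte_power_intrinsics.h for the exact types):

    /* abort the sleep once the DD bit of the watched Rx descriptor
     * status word becomes set
     */
    static int
    desc_done_callback(const uint64_t value, const uint64_t opaque[4])
    {
        const uint64_t dd_mask = opaque[0];    /* mask passed at setup */

        return (value & dd_mask) ? -1 : 0;     /* -1 aborts the sleep */
    }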
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Add a specific path for Rx AVX2.
This path supports the HW offload features, like checksum, VLAN
stripping and RSS hash, and is chosen automatically according to the
configuration.
'inline' is used, so the duplicated code is generated by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Add a specific path for Tx AVX2.
This path supports the HW offload features, like checksum insertion and
VLAN insertion, and is chosen automatically according to the
configuration.
'inline' is used, so the duplicated code is generated by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
The len variable, used in the computation of max_pkt_len, could overflow
if used to store the result of the following computation:
ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len
Since the mbuf size can be defined with a large value (e.g. 13312), and
ICE_SUPPORT_CHAIN_NUM is defined as 5, the computation above could
result in a value bigger than UINT16_MAX.
The result is that jumbo frames will not work properly.
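A sketch of the kind of fix (variable names are illustrative): widen the
computation so the intermediate result no longer truncates to 16 bits:

    /* before: 16-bit intermediate can wrap around */
    uint16_t len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;

    /* after: compute in 32 bits and compare/clamp afterwards */
    uint32_t len32 = (uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;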
Fixes: 1b009275e2c8 ("net/ice: add Rx queue init in DCF")
Cc: stable@dpdk.org
Signed-off-by: Tudor Cornea <tudor.cornea@keysight.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The rte_eth_devices array is not in shared memory; it should not be
referenced by ice_adapter, which is shared by the primary and secondary
processes. Any process that sets ice_adapter->eth_dev will corrupt
another process' context.
The patch removes the field "eth_dev" from ice_adapter.
Now, when the data paths need the rte_eth_dev_data instance, they should
use adapter->pf.dev_data instead of adapter->eth_dev->data.
Fixes: f9cf4f864150 ("net/ice: support device initialization")
Cc: stable@dpdk.org
Reported-by: Yixue Wang <yixue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yixue Wang <yixue.wang@intel.com>
The flags use_avx2 and use_avx512 are defined as local variables; they
are not visible to the secondary process, so the wrong data path may be
selected there. Fix the issue by moving them into struct ice_adapter.
Fixes: ae60d3c9b227 ("net/ice: support Rx AVX2 vector")
Fixes: 2d5f6953d56d ("net/ice: support vector AVX2 in Tx")
Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
Cc: stable@dpdk.org
Reported-by: Yixue Wang <yixue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yixue Wang <yixue.wang@intel.com>
If vector mode is not allowed for Tx, there is no need to perform
vector-related setup for the Tx queue.
The patch defers the Tx queue vector setup to the point where vector
mode is confirmed to be allowed.
Fixes: 28f9002ab67f ("net/ice: add Tx AVX512 offload path")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Split the AVX512 Rx data path into two: one is basic, and the other one
supports additional Rx offload features, including Rx checksum offload,
Rx VLAN offload and RSS offload.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
Add an alternative Tx data path for AVX512 which supports partial Tx
offload features, including Tx checksum offload and VLAN/QinQ insertion
offload.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
Rename PKT_RX_EIP_CKSUM_BAD to PKT_RX_OUTER_IP_CKSUM_BAD and
deprecate the original name. The new name is better aligned
with existing PKT_RX_OUTER_* flags, which should help reduce
confusion about its use.
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files
are for drivers only and should be private to DPDK, and not installed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
The `data_sz` name is fine, but it looks out of place because nothing
else has a "data" prefix in that structure. Rename it to "size", and add
more clarity to the comments around each struct member.
Fixes: 6a17919b0e2a ("eal: change power intrinsics API")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The jumbo frame check uses 'RTE_ETHER_MAX_LEN' as the boundary
condition, but the Ethernet overhead is larger than 18 bytes when dual
VLAN tags are supported. That causes the jumbo flag Rx offload to be
wrong when the MTU size is 'RTE_ETHER_MTU'.
This fix changes the boundary condition to 'RTE_ETHER_MTU' plus the
overhead, which may impact jumbo frame related cases.
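The idea, sketched (the overhead macro shown is illustrative): derive the
jumbo boundary from 'RTE_ETHER_MTU' plus the device's real overhead
instead of from 'RTE_ETHER_MAX_LEN':

    /* overhead with dual VLAN: 14 (Ethernet header) + 4 (CRC) + 2 * 4 (VLAN) */
    #define ETH_OVERHEAD \
        (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 2 * RTE_VLAN_HLEN)

    /* jumbo if the frame exceeds a standard-MTU frame for this device */
    if (frame_size > RTE_ETHER_MTU + ETH_OVERHEAD)
        rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;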
Fixes: 84dc7a95a2d3 ("net/ice: enable flow director engine")
Fixes: 1b009275e2c8 ("net/ice: add Rx queue init in DCF")
Fixes: ae2bdd0219cb ("net/ice: support MTU setting")
Fixes: 50370662b727 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Until the new eCPRI PTYPE is added into the RTE lib, just map eCPRI to
the RTE_PTYPE_L4_UDP PTYPE in the ice PMD.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Implement support for the power management API by implementing a
`get_monitor_addr` function that will return an address of an RX ring's
status bit.
Signed-off-by: Liang Ma <liang.j.ma@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When tunneled packets are received, the testpmd output log shows that
the 'ol_flags' value is always 'PKT_RX_OUTER_L4_CKSUM_UNKNOWN', but the
expected value is 'PKT_RX_OUTER_L4_CKSUM_GOOD' or
'PKT_RX_OUTER_L4_CKSUM_BAD'.
Add 'PKT_RX_OUTER_L4_CKSUM_GOOD' and 'PKT_RX_OUTER_L4_CKSUM_BAD' to
'flags' for the normal path, to 'l3_l4_flags_shuf' for the AVX2 and
AVX512 vector paths and to 'cksum_flags' for the SSE vector path, to
ensure that 'ol_flags' matches the correct flags.
Fixes: dbf3c0e77a22 ("net/ice: handle Rx flex descriptor")
Fixes: 4ab7dbb0a0f6 ("net/ice: switch to Rx flexible descriptor in AVX path")
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Cc: stable@dpdk.org
Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
If hardware outer UDP Tx checksum offload is enabled, it doesn't take
effect when an 'IPv6/UDP/VXLAN' packet is sent with a wrong outer UDP
checksum.
To make it take effect, set the 'L4T_CS' flag only when 'L4TUNT' equals
one and 'EIPT' is not zero. If the 'L4T_CS' flag is set, the hardware
calculates the outer tunneling UDP checksum.
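Sketched against the descriptor fields named above (the condition only;
macro and variable names are illustrative, not the real driver macros):

    /* enable outer UDP checksum insertion only for a UDP tunnel
     * (L4TUNT == 1) with an outer IP header present (EIPT != 0)
     */
    if (l4_tunnel_type == UDP_TUNNELING && outer_ip_type != 0)
        ctx_desc_flags |= L4T_CS_FLAG;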
Fixes: bd70c451532c ("net/ice: support Tx checksum offload for tunnel")
Cc: stable@dpdk.org
Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Tested-by: Wei Xie <weix.xie@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The initialization that selects the handler for extracting FlexiMD
fields into the mbuf on the scalar Rx path is missing, which causes a
segmentation fault (core dumped).
Also add the missing support to handle RXDID 16, which has the RSS hash
value in Qword 1.
Fixes: 7a340b0b4e03 ("net/ice: refactor Rx FlexiMD handling")
Cc: stable@dpdk.org
Reported-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add AVX512 support for the ice PMD. This patch adds
ice_rxtx_vec_avx512.c to support the ice AVX512 vPMD.
It aims to enable AVX512 on the ice vPMD; compared with the AVX2 vPMD,
the main changes are focused on the Rx path.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.
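For example, a 256-bit path is gated roughly like this (a sketch using
the EAL max-SIMD-bitwidth API; the use_avx2 variable is illustrative):

    #include <rte_cpuflags.h>
    #include <rte_vect.h>

    /* pick the AVX2 path only if both the CPU and the configured
     * max SIMD bitwidth allow 256-bit operation
     */
    if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) &&
        rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
        use_avx2 = true;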
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Performance improvement: use a write-combining store
instead of a regular MMIO write to update the queue tail
registers.
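Conceptually (a sketch; availability of the write-combining helper
depends on the platform and DPDK version, and the tail variable is
illustrative):

    #include <rte_io.h>

    /* before: regular MMIO write of the queue tail */
    rte_write32(rte_cpu_to_le_32(tail), rxq->qrx_tail);

    /* after: write-combining store, which falls back to a normal
     * write where WC is not supported
     */
    rte_write32_wc(rte_cpu_to_le_32(tail), rxq->qrx_tail);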
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The hardware supports many kinds of FlexiMDs in the Rx descriptor, and
the FlexiMDs can have different offsets in the descriptor according to
the DDP package setting.
The FlexiMD type and offset are identified by the RXDID, which is used
to set up the queue.
To make it easier to support different RXDIDs in the future, refactor
the Rx FlexiMD handling into functions mapped to the related RXDIDs.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
This patch supports RxDID #22 by the following changes:
- add structure and macro definition for RxDID #22.
- support RxDID #22 format in normal path.
- change RSS hash parsing from RxDID #22 in AVX/SSE data path.
Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Convert the pre-C90-extension "C struct hack" method (using a single-
element array at the end of a structure for implementing variable-length
types) to the preferred use of C99 flexible array member.
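For illustration, the pattern being converted looks like this (a generic
example, not a specific ice structure):

    #include <stdint.h>

    struct elem { uint32_t v; };

    /* before: "struct hack" with a one-element array at the end */
    struct elem_list_old {
        uint16_t count;
        struct elem entries[1];    /* actually variable length */
    };

    /* after: C99 flexible array member */
    struct elem_list_new {
        uint16_t count;
        struct elem entries[];     /* flexible array member */
    };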
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add RXDID #25 to support Auxiliary IP Offset Rx descriptor, including
FlexiMD.4: Outer/Single IPv4 Header offset
FlexiMD.5: Outer/Single IPv6 Header offset
The valid IP offset is parsed into the mbuf from the flexible
descriptor section, selected via the devargs "proto_xtr" with
"proto_xtr=ip_offset".
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
The two fixes are not the real root cause of the MDD event; they only
mitigate the failure rate in different test modes, so revert them.
Fixes: 2a0c9ae4f646 ("net/ice: fix TCP checksum offload")
Fixes: 7365a3cee51f ("net/ice: calculate TCP header size for offload")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The variables 'td_offset' and 'td_tag' should be reset to 0 for every
packet in the burst, otherwise the Tx descriptor fields will be set
wrongly, which causes the MDD event error and a Tx hang.
Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The ice hardware needs the exact TCP header size, including options, for
TCP checksum offload, but according to the PKT_TX_TCP_CKSUM note, l4_len
is not required to be set, so the driver needs to calculate the TCP
header size if it is not set.
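When l4_len is not set, the header length can be derived from the TCP
data offset field, roughly as follows (a sketch):

    #include <rte_tcp.h>

    /* TCP header length in bytes = data offset (upper 4 bits of
     * data_off) * 4, which includes any TCP options
     */
    static inline uint8_t
    tcp_hdr_len(const struct rte_tcp_hdr *th)
    {
        return (th->data_off & 0xf0) >> 2;
    }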
Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The L4LEN field of the Descriptor Header Offset for TCP should be the
real length including the TCP options.
Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When writing the driver's struct ice_tlan_ctx structure, do not write
the 8-bit element int_q_state with the associated internal-to-hardware
field, which is 122 bits wide, otherwise the helper function
ice_write_byte() will use undefined behavior when setting the mask used
for that write.
This should not cause any functional change and will avoid use of
undefined behavior. Also, update a comment to highlight this structure
element is not written.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The new macro __rte_cold, for compiler hinting,
is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
There is a macro __rte_always_inline, forcing functions to be inlined,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Remove the function ice_clear_queues, since all equivalent code has
already been executed during ice_rx|tx_queue_stop.
The functions ice_rx|tx_queue_release_mbufs simply wrapped a function
pointer call, which is not necessary, so remove them as well.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Remove CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC with the following
considerations:
1. A default Rx path can always be selected by setting a proper
rx_free_thresh value at runtime, see
ice_check_rx_burst_bulk_alloc_preconditions.
2. It's not a big deal to always reserve more space for the descriptor ring.
"ring_size = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);"
3. It fixes a potential invalid memory access in ice_reset_rx_queue.
If CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is turned on while
ice_check_rx_burst_bulk_alloc_preconditions returns failure, the code
below will have a problem.
for (i = 0; i < ICE_RX_MAX_BURST; ++i)
rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
Fixes: 50370662b727 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Coverity raises an unchecked return value warning for ice_xmit_cleanup,
while this cleanup is opportunistic and will not cause problems if it
fails. So instead of checking the return value of ice_xmit_cleanup and
returning in case of cleanup failure, cast the call to void to make
Coverity happy.
Coverity issue: 353623
Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Sunil Pai G <sunil.pai.g@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>