The driver is incorrectly setting the RSS field in the last mbuf in
the packet chain instead of the first. Moreover, the last mbuf might
have already been freed if it only contained the Ethernet CRC.
Also, fix the call to i40e_rxd_build_fdir to store the fdir flags in
the first mbuf of the chain instead of the last.
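For illustration, a minimal sketch of the intended pattern (rss_from_desc is a placeholder for the hash read from the RX descriptor, not the driver's actual variable):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Hedged sketch: first_seg is the head of the mbuf chain handed to the
     * application; the hash must be stored there, not in the last segment,
     * which may already have been freed if it only carried the CRC. */
    static inline void
    set_rss_hash(struct rte_mbuf *first_seg, uint32_t rss_from_desc)
    {
        first_seg->hash.rss = rss_from_desc;
        first_seg->ol_flags |= PKT_RX_RSS_HASH;
    }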
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: 5a21d9715f ("i40e: report flow director matching")
Signed-off-by: Dumitru Ceara <dumitru.ceara@gmail.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch fixes the 'Dereference before null check' issues reported by
Coverity, either by deleting unnecessary null checks or by moving the
null checks before the offending use of the pointer.
Coverity issue: 13298, 13299, 13294, 13301, 119267
Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Following the discussions from:
http://dpdk.org/ml/archives/dev/2015-July/021721.html
http://dpdk.org/ml/archives/dev/2016-April/038143.html
The value of these flags is 0, making them useless. Today, no example
application checks them on Rx, and only a few drivers set them and
silently give the wrong packets to the application, which should not happen.
This patch removes the unused flags from rte_mbuf and their use in the
drivers. The i40e and fm10k are kept as they are today and should be
fixed to drop bad packets. The enic driver is managed by its maintainer
in another patch.
Fixes: c22265f6 ("mbuf: add new packet flags for i40e")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
NSH packets can be recognized by the Intel X710/XL710 series.
This patch enables the new packet type.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.
Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:
PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
is enabled in the RX configuration of the PMD.
For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.
This patch also updates the drivers. For PKT_RX_VLAN_PKT:
- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
had the same meaning as PKT_RX_VLAN_STRIPPED, only a minor update is
required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.
For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.
[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html
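For reference, a minimal sketch of how an application would consume the new flag (m is assumed to be a received mbuf):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Hedged sketch: if the PMD stripped the vlan header, the tci is in
     * m->vlan_tci and PKT_RX_VLAN_STRIPPED is set; otherwise any vlan tag
     * is still inline in the packet data. */
    static inline int
    get_stripped_vlan_tci(const struct rte_mbuf *m, uint16_t *tci)
    {
        if (m->ol_flags & PKT_RX_VLAN_STRIPPED) {
            *tci = m->vlan_tci;
            return 1;
        }
        return 0;
    }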
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Many drivers provide their own implementation of rte_mbuf_raw_alloc(),
duplicating the code. Introduce a new public function in rte_mbuf to
allocate a raw mbuf (uninitialized).
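A short usage sketch of the new helper (mp is assumed to be an already created pktmbuf pool; error handling trimmed):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hedged sketch: rte_mbuf_raw_alloc() returns an mbuf straight from the
     * pool without resetting its fields, so the caller (typically a PMD RX
     * path) must initialize whatever it needs before use. */
    static struct rte_mbuf *
    grab_raw_mbuf(struct rte_mempool *mp)
    {
        struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

        if (m == NULL)
            return NULL;
        /* caller sets data_off, data_len, etc. */
        return m;
    }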
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Previously, for tunnel packets such as VXLAN/NVGRE, the vlan tags of
the inner header would be stripped without the vlan info being written
to the descriptor, which is not the expected behaviour.
This patch fixes it by changing the hardware configuration to leave
the inner packet alone.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Issue:
when using the following CLI sequence in testpmd to enable the ipv6 TSO
feature (with --txqflags=0 on the testpmd command line):
set verbose 1
csum set ip hw 0
csum set udp hw 0
csum set tcp hw 0
csum set sctp hw 0
csum set outer-ip hw 0
csum parse_tunnel on 0
tso set 800 0
set fwd csum
start
The result is not what we want: the ipv6 packets sent from IXIA can be
received by i40e, but cannot be forwarded to another port.
The root cause is that when the HW performs TSO offload it does not rely
only on the context descriptor for the MSS and TSO payload size; it also
needs to know whether the packet is ipv4 or ipv6, and i40e_txd_enable_checksum
is used to generate the related fields of the data descriptor.
But the PMD will not call i40e_txd_enable_checksum if only the TSO offload
flag is set. The reason ipv4 works fine for TSO in testpmd csum mode is that
the csum engine sets the IP checksum flag when the packet is ipv4 and TSO is
enabled, and that flag causes i40e_txd_enable_checksum to be invoked; for
ipv6 no such flag is set. In both cases the TSO flag is set, so the TSO flag
should be what triggers i40e_txd_enable_checksum.
The right logic is to enable checksum offload for both ipv4 and ipv6 when
the TSO flag is set.
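A hedged sketch of the corrected trigger condition (standard mbuf TX offload flags; the descriptor programming itself is omitted):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Hedged sketch: build the checksum/offload fields of the data
     * descriptor when either a checksum flag or the TSO flag is present,
     * so ipv6 TSO (which sets no IP checksum flag) is covered too. */
    static inline int
    needs_offload_ctx(uint64_t ol_flags)
    {
        return (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK |
                            PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG)) != 0;
    }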
Fixes: e3f0151f ("i40e: enable Tx checksum only for offloaded packets")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Issue:
When CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=n in the config file, there
will be a build error:
'i40e_recv_pkts_bulk_alloc' undeclared
Currently the DPDK i40e PMD uses the preprocessor to choose whether or not
to define the bulk receive functions, but the selection of the RX function
depends only on a C variable. This inconsistency leads to the build error
above, because the bulk receive function may be referenced without being
defined.
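One hedged way to restore consistency is to always provide the symbol, for example with a stub when the option is off (a sketch, not necessarily the exact fix applied):

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_mbuf.h>

    #ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
    uint16_t i40e_recv_pkts_bulk_alloc(void *rx_queue,
                                       struct rte_mbuf **rx_pkts,
                                       uint16_t nb_pkts);
    #else
    /* Hedged sketch: stub so the RX-function selection code always
     * compiles; the run-time selection never picks it in this case. */
    static uint16_t
    i40e_recv_pkts_bulk_alloc(void *rx_queue __rte_unused,
                              struct rte_mbuf **rx_pkts __rte_unused,
                              uint16_t nb_pkts __rte_unused)
    {
        return 0;
    }
    #endif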
Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Add a new API rte_eth_dev_get_supported_ptypes to query what packet types
can be filled by a given device. The device should already be started, or
its PMD RX burst function already decided, since the supported packet types
may vary depending on the RX function.
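A usage sketch of the new API (assuming port_id refers to a started port; the two arguments after the mask are the output array and its size):

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    /* Hedged sketch: query which L3 packet types the port can report in
     * mbuf->packet_type. A negative return value means the query failed. */
    static int
    print_supported_l3_ptypes(uint8_t port_id)
    {
        uint32_t ptypes[16];
        int i, num;

        num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L3_MASK,
                                               ptypes, RTE_DIM(ptypes));
        if (num < 0)
            return num;
        for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++)
            printf("supported ptype 0x%08x\n", ptypes[i]);
        return 0;
    }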
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Add new device IDs for backplane and QSFP+ adapters, and delete the
deprecated one for backplane.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The no-refcount path was being taken without the application opting
in to it.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Reported-by: Mike Stolarchuk <mike.stolarchuk@bigswitch.com>
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The array 'ptype_table' was defined with a depth of UINT8_MAX, which is
255, while the lookup index can range from 0 to 255. The issue is fixed
by expanding the array by one element.
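In other words, the table must cover all 256 possible 8-bit indices, along these lines (a sketch, not the exact declaration in the driver):

    #include <stdint.h>

    /* Hedged sketch: the hardware ptype index is 8 bits wide, so valid
     * indices run from 0 to 255 and the table needs UINT8_MAX + 1 entries. */
    static uint32_t ptype_table[UINT8_MAX + 1];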
Fixes: 9571ea0284 ("i40e: replace some offload flags with unified packet type")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Macros RTE_MBUF_DATA_DMA_ADDR and RTE_MBUF_DATA_DMA_ADDR_DEFAULT
are defined in each PMD driver file. Convert macros to inline
functions and move them to common lib/librte_mbuf/rte_mbuf.h file.
PMD drivers include the rte_mbuf.h file directly or indirectly, hence no
additional header file inclusion is necessary.
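A hedged sketch of the shape of the resulting inline helpers (local names used here; the bodies follow the macro definitions they replace):

    #include <rte_mbuf.h>

    /* Hedged sketch: the data DMA address is the buffer physical address
     * plus either the current data offset or the default headroom. */
    static inline phys_addr_t
    mbuf_data_dma_addr(const struct rte_mbuf *mb)
    {
        return mb->buf_physaddr + mb->data_off;
    }

    static inline phys_addr_t
    mbuf_data_dma_addr_default(const struct rte_mbuf *mb)
    {
        return mb->buf_physaddr + RTE_PKTMBUF_HEADROOM;
    }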
Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
fix the error reported by checkpatch:
"ERROR: return is not a function, parentheses are not required"
remove parentheses in return statements like:
"return (logical expressions)"
remove parentheses when returning a function call like:
"return (rte_mempool_lookup(...))"
Fixes: 6307b909b8 ("lib: remove extra parenthesis after return")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
This is one of those trivial things git and other tools complain
about.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
There is a new function in the EAL API for internal use.
It has neither a proper prefix nor a .map export:
libethdev.so: undefined reference to `is_xen_dom0_supported'
Fixes: 719dbebceb ("xen: allow determining DOM0 at runtime")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Following the same approach taken with the dev_started field
in the rte_eth_dev_data structure, this patch adds two new fields
to it, the rx_queue_state and tx_queue_state arrays, which track
which queues have been started and which have not.
This is important to avoid starting or stopping a queue twice,
which would result in undefined behaviour
(and may cause RX/TX disruption).
Note that only the PMDs which have queue_start/stop functions
have been changed to update these fields, as those functions
check the queue state before switching it.
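A hedged sketch of the idea (simplified structure and constants; the real rte_eth_dev_data has many more members):

    #include <stdint.h>

    #define QUEUE_STATE_STOPPED 0   /* hedged: illustrative constants */
    #define QUEUE_STATE_STARTED 1
    #define MAX_QUEUES 1024

    struct dev_data_sketch {
        uint8_t dev_started;
        uint8_t rx_queue_state[MAX_QUEUES]; /* per-queue started/stopped */
        uint8_t tx_queue_state[MAX_QUEUES];
    };

    /* Hedged sketch: the ethdev layer can now refuse to start a queue twice. */
    static int
    rx_queue_start_checked(struct dev_data_sketch *data, uint16_t qid)
    {
        if (data->rx_queue_state[qid] == QUEUE_STATE_STARTED)
            return 0; /* already started, nothing to do */
        /* ... PMD-specific start ... */
        data->rx_queue_state[qid] = QUEUE_STATE_STARTED;
        return 0;
    }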
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
According to XL710 datasheet:
RX QLEN restrictions: When the PXE_MODE flag in the GLLAN_RCTL_0
register is cleared, the QLEN must be whole number of 32
descriptors.
TX QLEN restrictions: When the PXE_MODE flag in the GLLAN_RCTL_0
register is cleared, the QLEN must be whole number of 32
descriptors.
So make sure that for both RX and TX queues number of HW descriptors is
a multiple of 32.
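A hedged sketch of the check (32 is the alignment stated by the datasheet; whether to reject or round down non-conforming values is a driver policy choice):

    #include <stdint.h>

    #define RING_DESC_ALIGN 32  /* QLEN must be a whole multiple of 32 */

    /* Hedged sketch: validate a requested ring size before it is
     * programmed into QLEN. */
    static int
    check_ring_size(uint16_t nb_desc)
    {
        if (nb_desc == 0 || (nb_desc % RING_DESC_ALIGN) != 0)
            return -1; /* reject, or round down to a multiple of 32 */
        return 0;
    }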
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch enables DCB feature on Intel XL710/X710 NICs. It includes:
- Receive queue classification based on traffic class
- Round Robin ETS schedule (rx and tx)
- Priority flow control
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
To support FVL, the PMD can select which RX and TX functions should be
used according to the queue configuration.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
Support multiple segments per packet for the case where the received
packets exceed one buffer size.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
The way to increase the performance of the vPMD TX is to use a faster
mbuf release method compared to the scalar TX.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
The vPMD RX function uses multi-buffer and SSE instructions to
accelerate RX, but packet type parsing is not yet supported by the vPMD
RX because it would decrease performance heavily.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
The extended unified packet type is now part of the standard ABI.
As mbuf struct is changed, the mbuf library version is incremented.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
When i40e_xmit_pkts() is called, HW offload is often in use, so it is
better to remove the 'unlikely' hint from the checksum offload check.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Marvin Liu <yong.liu@intel.com>
Fixes an issue where ieee1588 timestamping doesn't work for the i40e
PMD when RTE_NEXT_ABI is enabled.
Also refactors the repeated ieee1588 flag checking and setting
code into a function.
Test report: http://dpdk.org/ml/archives/dev/2015-August/022496.html
Reported-by: Huilong Xu <huilongx.xu@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Huilong Xu <huilongx.xu@intel.com>
The header buffer address for header split will be filled with the
physical address for DMA, which is actually not needed at all, as
header split hasn't been supported. The hardware requires that the
least significant bit of the header address, which is the 'Descriptor
Done' (DD) bit on write-back, be set to 0 by the driver.
The issue is that if the user reserves an odd number of bytes between
the mbuf header and the data buffer, the physical address to be filled
in the descriptor happens to be odd. That means the DD bit would be
set to non-zero by the driver, which results in descriptor done being
reported wrongly.
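A hedged, simplified sketch of the resulting RX descriptor refill (field names mirror the i40e 'read' format, but the snippet is illustrative, not the exact driver code):

    #include <stdint.h>

    /* Hedged, simplified view of the RX descriptor read format. */
    struct rx_desc_read_sketch {
        uint64_t pkt_addr;  /* data buffer DMA address */
        uint64_t hdr_addr;  /* header buffer DMA address (unused) */
    };

    /* Hedged sketch: program only the packet buffer address and keep
     * hdr_addr at 0, since its bit 0 doubles as the Descriptor Done bit
     * on write-back and header split is not supported anyway. */
    static void
    refill_rx_desc(struct rx_desc_read_sketch *rxd, uint64_t data_buf_dma)
    {
        rxd->pkt_addr = data_buf_dma;
        rxd->hdr_addr = 0;
    }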
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Add i40e_dev_free_queues() function and call it from close() functions.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
There is a segmentation fault if rxq is NULL.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
If a descriptor being handled by the device driver is a context
descriptor, its type value will be 0x1.
When the logical not operator '!' is used for the conditional check, an
expression value of zero makes the device driver consider the
transaction for this descriptor complete, even though its DD field is
still 0x1, which means the NIC has not finished the operation on this
descriptor.
Check the DD status against 0xF to avoid the above issue.
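A hedged sketch of the corrected check (constant names are illustrative; the descriptor type lives in the low 4 bits and only the value 0xF means the descriptor is done):

    #include <stdint.h>

    #define TXD_DTYPE_MASK      0xFULL /* low 4 bits: descriptor type */
    #define TXD_DTYPE_DESC_DONE 0xFULL /* 0xF: hardware is done with it */

    /* Hedged sketch: a context descriptor carries type 0x1, which a
     * plain '!' test on the type field mishandles; comparing against
     * 0xF (descriptor done) is unambiguous. */
    static int
    tx_desc_done(uint64_t cmd_type_offset_bsz)
    {
        return (cmd_type_offset_bsz & TXD_DTYPE_MASK) == TXD_DTYPE_DESC_DONE;
    }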
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: 05999aab4c ("i40e: add or delete flow director")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Add ixgbe support for new ethdev APIs to enable and read IEEE1588/
802.1AS PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
It configures specific registers to enable double vlan stripping
on RX side and insertion on TX side.
The RX descriptors will be parsed, and the vlan tags and flags will be
saved to the corresponding mbuf fields if a vlan tag is detected.
The TX descriptors will be configured according to the settings in the
mbufs, to trigger hardware insertion of double vlan tags for each
packet sent out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The parameter tx_free_thresh is not consistent between the drivers:
some use it as rte_eth_tx_burst() requires, some release buffers when
the number of free descriptors drops below this value.
Let's use it as most fast-path code does, which is the latter, and update
comments throughout the code to reflect that.
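A hedged sketch of the agreed semantics inside a TX burst function (field names are illustrative):

    #include <stdint.h>

    /* Hedged sketch of the TX queue bookkeeping relevant to tx_free_thresh. */
    struct txq_sketch {
        uint16_t nb_tx_free;     /* descriptors currently free */
        uint16_t tx_free_thresh; /* as documented for rte_eth_tx_burst() */
    };

    void free_done_tx_mbufs(struct txq_sketch *txq); /* defined elsewhere */

    /* Hedged sketch: release transmitted mbufs once the number of free
     * descriptors drops below tx_free_thresh. */
    static void
    maybe_free_tx_mbufs(struct txq_sketch *txq)
    {
        if (txq->nb_tx_free < txq->tx_free_thresh)
            free_done_tx_mbufs(txq);
    }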
Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The driver did not check the configured maximum packet length, so the
scattered receive function would not be selected at all even when jumbo
frames are to be received. The fix is to select the correct RX function
according to the configuration.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Move i40e PMD to drivers/net directory.
As part of the move, rename the "i40e" directory, containing the "base
driver" code, from "i40e" to "base".
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>