Rename PKT_RX_EIP_CKSUM_BAD to PKT_RX_OUTER_IP_CKSUM_BAD and
deprecate the original name. The new name is better aligned
with existing PKT_RX_OUTER_* flags, which should help reduce
confusion about its use.
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
rte_gso_segment decreased the refcnt of pkt by one, but this is
wrong when pkt is an external mbuf: pkt won't be freed because of
the incorrect refcnt, and as a result the application can't
allocate mbufs from the mempool because the mempool runs out of
them.
The correct approach is for the application to call
rte_pktmbuf_free explicitly after calling rte_gso_segment to free
pkt. rte_gso_segment must not handle it; this is the
responsibility of the application.
This commit changes the functional behavior and return value of
rte_gso_segment, so the application must take the appropriate
action according to the return value: "ret < 0" means it should
free and drop 'pkt'; "ret == 0" means 'pkt' isn't GSOed but can be
transmitted as a normal packet; "ret > 0" means 'pkt' has been
GSOed into two or more segments and "pkts_out" should be used to
transmit them. The application must free 'pkt' after calling
rte_gso_segment whenever the return value isn't 0.
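A minimal calling sketch of the new contract (a configured
'struct rte_gso_ctx gso_ctx', the port/queue values and the burst
size are illustrative assumptions; handling of unsent mbufs is
omitted):

    struct rte_mbuf *pkts_out[32];	/* burst size is illustrative */
    int ret;

    ret = rte_gso_segment(pkt, &gso_ctx, pkts_out, RTE_DIM(pkts_out));
    if (ret > 0) {
        /* segmented: transmit the segments, then free the original */
        rte_eth_tx_burst(port_id, queue_id, pkts_out, (uint16_t)ret);
        rte_pktmbuf_free(pkt);
    } else if (ret == 0) {
        /* not GSOed: transmit pkt as a normal packet */
        rte_eth_tx_burst(port_id, queue_id, &pkt, 1);
    } else {
        /* error: drop and free pkt */
        rte_pktmbuf_free(pkt);
    }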
Fixes: 119583797b ("gso: support TCP/IPv4 GSO")
Cc: stable@dpdk.org
Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add a function to calculate the length of an IPv4 header as suggested
on the mailing list [1]. Call it where appropriate.
[1] https://mails.dpdk.org/archives/dev/2020-October/184471.html
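A minimal sketch of what such a helper can look like (the IHL mask
and multiplier constants follow rte_ip.h conventions and are shown
here as an assumption):

    static inline uint8_t
    rte_ipv4_hdr_len(const struct rte_ipv4_hdr *ipv4_hdr)
    {
        /* IHL is the low nibble of version_ihl, in 32-bit words */
        return (uint8_t)((ipv4_hdr->version_ihl & RTE_IPV4_HDR_IHL_MASK) *
               RTE_IPV4_IHL_MULTIPLIER);
    }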
Suggested-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Michael Pfeiffer <michael.pfeiffer@tu-ilmenau.de>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This is a cleanup commit.
It assembles all tunnel outer updates into one function call to avoid
code duplication.
It defines RTE_VXLAN_GPE_DEFAULT_PORT (4790) in accordance with all
other tunnel protocol definitions.
It replaces all numeric values 4790 with their corresponding definition
RTE_VXLAN_GPE_DEFAULT_PORT.
It updates the 'csum parse-tunnel' documentation.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
GENEVE is a widely used tunneling protocol in modern virtualized
networks. testpmd already supports parsing of several tunneling
protocols, including VXLAN, VXLAN-GPE and GRE. This commit adds GENEVE
parsing of inner protocols (IPv4-0x0800, IPv6-0x86dd, Ethernet-0x6558)
based on IETF draft-ietf-nvo3-geneve-09. GENEVE is considered more
flexible than the other protocols: in terms of protocol format, the
GENEVE header carries variable-length options, as opposed to the other
tunneling protocols, which have a fixed header size.
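An illustrative sketch of mapping the GENEVE protocol field to the
supported inner types (the helper and field names are assumptions,
not the actual testpmd code):

    /* Return the inner ethertype carried in the GENEVE 'proto' field
     * (network byte order); 0 when the inner protocol is unsupported. */
    static uint16_t
    geneve_inner_ethertype(rte_be16_t proto)
    {
        uint16_t p = rte_be_to_cpu_16(proto);

        switch (p) {
        case RTE_ETHER_TYPE_IPV4:	/* 0x0800 */
        case RTE_ETHER_TYPE_IPV6:	/* 0x86dd */
        case RTE_ETHER_TYPE_TEB:	/* 0x6558, inner Ethernet frame */
            return p;
        default:
            return 0;
        }
    }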
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The number of empty polls provides information about available
CPU headroom in the presence of continuous polling.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Phil Yang <phil.yang@arm.com>
Tested-by: Ali Alnubani <alialnu@mellanox.com>
When the packet has QinQ VLAN headers, parse_ethernet is capable of
parsing only the first VLAN.
Add parsing for QinQ VLAN headers in the packet.
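A rough sketch of the extra step, as a hypothetical helper (not the
actual testpmd code):

    /* Skip up to two VLAN tags starting at 'vlan_hdr' and return the
     * inner ethertype in host byte order. */
    static uint16_t
    skip_qinq(uint16_t eth_type, const struct rte_vlan_hdr *vlan_hdr)
    {
        if (eth_type == RTE_ETHER_TYPE_QINQ) {
            eth_type = rte_be_to_cpu_16(vlan_hdr->eth_proto); /* outer tag */
            if (eth_type == RTE_ETHER_TYPE_VLAN) {
                vlan_hdr++;
                eth_type = rte_be_to_cpu_16(vlan_hdr->eth_proto); /* inner */
            }
        }
        return eth_type;
    }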
Fixes: 51f694dd40 ("app/testpmd: rework checksum forward engine")
Cc: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
There is a common macro __rte_packed for packing structs,
which is now used where appropriate for consistency.
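For illustration, usage looks like this (the struct itself is a
made-up example):

    struct example_tunnel_hdr {
        uint8_t flags;
        uint8_t reserved;
        uint16_t proto;
    } __rte_packed;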
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
This patch fixes the Tx checksum value problem when TSO is enabled for
tunnel packets, because the outer UDP checksum calculation depends on
the TSO configuration.
Fixes: 0f62d63593 ("app/testpmd: support tunneled TSO in checksum engine")
Cc: stable@dpdk.org
Signed-off-by: Peng Huang <peng.huang@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
The VXLAN related definitions and structures are moved from
rte_ether.h to a new header file: rte_vxlan.h.
Also introduce a new macro for the VXLAN default UDP port:
RTE_VXLAN_DEFAULT_PORT
Signed-off-by: Flavia Musatescu <flavia.musatescu@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Raslan Darawsheh <rasland@mellanox.com>
Enable testpmd to forward GTP packets in csum fwd mode.
A GTP header structure (without optional fields and extension header)
is defined in the new rte_gtp.h.
A parser function is added to testpmd. GTPU and GTPC packets are both
supported, identified by their respective UDP destination ports and GTP
message types.
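An illustrative check for recognizing GTP traffic by its UDP
destination port (2152 for GTPU, 2123 for GTPC; the helper itself is
hypothetical):

    /* Return non-zero when the UDP destination port indicates GTP. */
    static int
    is_gtp_udp_port(const struct rte_udp_hdr *udp_hdr)
    {
        uint16_t dst_port = rte_be_to_cpu_16(udp_hdr->dst_port);

        return dst_port == 2152 /* GTPU */ || dst_port == 2123 /* GTPC */;
    }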
Signed-off-by: Ting Xu <ting.xu@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Since we change these macros, we might as well avoid triggering complaints
from checkpatch because of mixed case.
old=RTE_IPv4
new=RTE_IPV4
git grep -lw $old | xargs sed -i -e "s/\<$old\>/$new/g"
old=RTE_ETHER_TYPE_IPv4
new=RTE_ETHER_TYPE_IPV4
git grep -lw $old | xargs sed -i -e "s/\<$old\>/$new/g"
old=RTE_ETHER_TYPE_IPv6
new=RTE_ETHER_TYPE_IPV6
git grep -lw $old | xargs sed -i -e "s/\<$old\>/$new/g"
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Add 'RTE_' prefix to defines:
- rename ETHER_ADDR_LEN as RTE_ETHER_ADDR_LEN.
- rename ETHER_TYPE_LEN as RTE_ETHER_TYPE_LEN.
- rename ETHER_CRC_LEN as RTE_ETHER_CRC_LEN.
- rename ETHER_HDR_LEN as RTE_ETHER_HDR_LEN.
- rename ETHER_MIN_LEN as RTE_ETHER_MIN_LEN.
- rename ETHER_MAX_LEN as RTE_ETHER_MAX_LEN.
- rename ETHER_MTU as RTE_ETHER_MTU.
- rename ETHER_MAX_VLAN_FRAME_LEN as RTE_ETHER_MAX_VLAN_FRAME_LEN.
- rename ETHER_MAX_VLAN_ID as RTE_ETHER_MAX_VLAN_ID.
- rename ETHER_MAX_JUMBO_FRAME_LEN as RTE_ETHER_MAX_JUMBO_FRAME_LEN.
- rename ETHER_MIN_MTU as RTE_ETHER_MIN_MTU.
- rename ETHER_LOCAL_ADMIN_ADDR as RTE_ETHER_LOCAL_ADMIN_ADDR.
- rename ETHER_GROUP_ADDR as RTE_ETHER_GROUP_ADDR.
- rename ETHER_TYPE_IPv4 as RTE_ETHER_TYPE_IPv4.
- rename ETHER_TYPE_IPv6 as RTE_ETHER_TYPE_IPv6.
- rename ETHER_TYPE_ARP as RTE_ETHER_TYPE_ARP.
- rename ETHER_TYPE_VLAN as RTE_ETHER_TYPE_VLAN.
- rename ETHER_TYPE_RARP as RTE_ETHER_TYPE_RARP.
- rename ETHER_TYPE_QINQ as RTE_ETHER_TYPE_QINQ.
- rename ETHER_TYPE_ETAG as RTE_ETHER_TYPE_ETAG.
- rename ETHER_TYPE_1588 as RTE_ETHER_TYPE_1588.
- rename ETHER_TYPE_SLOW as RTE_ETHER_TYPE_SLOW.
- rename ETHER_TYPE_TEB as RTE_ETHER_TYPE_TEB.
- rename ETHER_TYPE_LLDP as RTE_ETHER_TYPE_LLDP.
- rename ETHER_TYPE_MPLS as RTE_ETHER_TYPE_MPLS.
- rename ETHER_TYPE_MPLSM as RTE_ETHER_TYPE_MPLSM.
- rename ETHER_VXLAN_HLEN as RTE_ETHER_VXLAN_HLEN.
- rename ETHER_ADDR_FMT_SIZE as RTE_ETHER_ADDR_FMT_SIZE.
- rename VXLAN_GPE_TYPE_IPV4 as RTE_VXLAN_GPE_TYPE_IPV4.
- rename VXLAN_GPE_TYPE_IPV6 as RTE_VXLAN_GPE_TYPE_IPV6.
- rename VXLAN_GPE_TYPE_ETH as RTE_VXLAN_GPE_TYPE_ETH.
- rename VXLAN_GPE_TYPE_NSH as RTE_VXLAN_GPE_TYPE_NSH.
- rename VXLAN_GPE_TYPE_MPLS as RTE_VXLAN_GPE_TYPE_MPLS.
- rename VXLAN_GPE_TYPE_GBP as RTE_VXLAN_GPE_TYPE_GBP.
- rename VXLAN_GPE_TYPE_VBNG as RTE_VXLAN_GPE_TYPE_VBNG.
- rename ETHER_VXLAN_GPE_HLEN as RTE_ETHER_VXLAN_GPE_HLEN.
Do not update the command line library to avoid adding a dependency on
librte_net.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add 'rte_' prefix to structures:
- rename struct ether_addr as struct rte_ether_addr.
- rename struct ether_hdr as struct rte_ether_hdr.
- rename struct vlan_hdr as struct rte_vlan_hdr.
- rename struct vxlan_hdr as struct rte_vxlan_hdr.
- rename struct vxlan_gpe_hdr as struct rte_vxlan_gpe_hdr.
Do not update the command line library to avoid adding a dependency on
librte_net.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
testpmd only sets the L4 length in the case of TCP packets.
Some PMDs, like tap, rely on mbuf metadata to calculate the checksum.
This sets the L4 length for UDP packets the same as for TCP.
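A minimal sketch, as a hypothetical helper (the real change lives in
the csum parse path):

    /* Expose the UDP header length to the PMD, as is done for TCP. */
    static void
    set_udp_l4_len(struct rte_mbuf *m)
    {
        m->l4_len = sizeof(struct rte_udp_hdr);
    }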
Fixes: 160c3dc945 ("app/testpmd: introduce IP parsing functions in csum fwd engine")
CC: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Collect and print the statistics for PKT_RX_EL4_CKSUM_BAD
errors.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Add outer UDP Tx HW checksum support to the csum forward engine
when the device supports DEV_TX_OFFLOAD_OUTER_UDP_CKSUM.
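An illustrative sketch of the Tx-side decision (surrounding variables
are assumptions):

    if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
        ol_flags |= PKT_TX_OUTER_UDP_CKSUM;	/* request HW outer UDP csum */
    /* otherwise the outer UDP checksum is computed in software */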
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
This patch enables GSO for UDP/IPv4 packets. Oversized UDP/IPv4
packets transmitted over a GSO-enabled port will undergo segmentation.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Tested-by: Yuwei Zhang <yuwei1.zhang@intel.com>
This patch accommodates an experimental feature of mbuf: external
buffer attachment. If an mbuf is attached to an external buffer, its
ol_flags will have EXT_ATTACHED_MBUF set. Without enabling/using the
feature, everything remains the same.
If a PMD delivers Rx packets with non-direct mbufs, ol_flags should not
be overwritten. For the mlx5 PMD, if Multi-Packet RQ is enabled, Rx
packets can be carried by externally attached mbufs.
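A minimal sketch of the idea (the 'csum_flags' accumulation is
illustrative):

    /* keep the external-attachment bit instead of clearing all flags */
    uint64_t tx_ol_flags = m->ol_flags & EXT_ATTACHED_MBUF;
    uint64_t csum_flags = 0;	/* filled in by the engine (illustrative) */

    m->ol_flags = tx_ol_flags | csum_flags;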
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch adds GRE checksum and sequence extension support, in
addition to key extension support, to the csum forwarding engine.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add VXLAN-GPE support to csum forwarding engine and rte flow.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The tx_ol_flags field was used in order to control the different
Tx offloads set. After the conversion to the new Ethdev Tx offloads API
it is not needed anymore as the offloads configuration is stored in
ethdev structs.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Instead of using USERx logs, register a dynamic log type, as introduced in
commit c1b5fa94a4 ("eal: support dynamic log types").
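For illustration, registering a dynamic log type can look like this
(the log type name and variable are made-up examples):

    static int example_logtype;

    static void
    example_log_init(void)
    {
        example_logtype = rte_log_register("user.example");
        if (example_logtype >= 0)
            rte_log_set_level(example_logtype, RTE_LOG_NOTICE);
    }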
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The memzone header is often included without good reason.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds GSO support to the csum forwarding engine. Oversized
packets transmitted over a GSO-enabled port will undergo segmentation
(with the exception of packet-types unsupported by the GSO library).
GSO support is disabled by default.
GSO support may be toggled on a per-port basis, using the command:
"set port <port_id> gso on|off"
The maximum packet length (including the packet header and payload) for
GSO segments may be set with the command:
"set gso segsz <length>"
Show GSO configuration for a given port with the command:
"show port <port_id> gso"
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The GRO library provides two modes to reassemble packets. Currently,
the csum forwarding engine supports the lightweight mode to reassemble
TCP/IPv4 packets. This patch introduces the heavyweight mode for
TCP/IPv4 GRO in the csum forwarding engine.
With the command "set port <port_id> gro on|off", users can enable
TCP/IPv4 GRO for a given port. With the command "set gro flush <cycles>",
users can determine when the GROed TCP/IPv4 packets are flushed from
reassembly tables. With the command "show port <port_id> gro", users can
display GRO configuration.
The GRO library doesn't re-calculate checksums for merged packets. If
users want the merged packets to have correct IP and TCP checksums,
please select HW IP checksum calculation and HW TCP checksum
calculation for the port to which the merged packets are transmitted.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
This patch enables the TCP/IPv4 GRO library in the csum forwarding
engine. By default, GRO is turned off. Users can use the command "gro
(on|off) (port_id)" to enable or disable GRO for a given port. If GRO
is enabled on a port, it is performed on all TCP/IPv4 packets received
from that port. Besides, users can set the maximum flow number and the
packet number per flow with the command "gro set (max_flow_num)
(max_item_num_per_flow) (port_id)".
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
When an IPv6 packet is a tunnel packet, the "PKT_TX_OUTER_IPV6" flag
must be set so that the correct mbuf metadata is prepared for Tx
forwarding.
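An illustrative fragment of the marking (the ethertype variable is an
assumption):

    if (outer_ethertype == RTE_ETHER_TYPE_IPV6)
        ol_flags |= PKT_TX_OUTER_IPV6;	/* outer header is IPv6 */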
Fixes: 2b76648872 ("net/e1000: add Tx preparation")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Change the size of m->port and m->nb_segs to 16 bits. It is now
possible to reference a port identifier larger than 256 and to have an
mbuf chain longer than 256 segments.
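The widened fields, sketched for illustration (comments show the
previous types):

    /* in struct rte_mbuf */
    uint16_t nb_segs;	/* number of segments; was uint8_t */
    uint16_t port;	/* input port; was uint8_t */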
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Since all current drivers support the Tx preparation API, it is used
in the csum forwarding engine by default for all drivers.
Adding this additional step to the csum engine costs about a 3-4%
performance drop on my setup with the ixgbe driver. It is caused mostly
by the need to re-access and modify the packet data.
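The calling sequence, sketched for illustration (the packet array and
port/queue variables are assumptions):

    uint16_t nb_prep, nb_tx;

    nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);
    nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);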
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add basic management functions for the generic flow API (validate, create,
destroy, flush, query and list). Flow rule objects and properties are
arranged in lists associated with each port.
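A minimal sketch of the management calls (the attr/pattern/actions
setup is omitted and assumed):

    struct rte_flow_error error;
    struct rte_flow *flow = NULL;

    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
    /* ... */
    if (flow != NULL)
        rte_flow_destroy(port_id, flow, &error);
    rte_flow_flush(port_id, &error);	/* destroy all rules on the port */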
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
In the csumonly engine, display the LRO segment size if the LRO flag
is set.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Following discussions in [1] and [2], introduce a new bit to
describe the Rx checksum status in mbuf.
Before this patch, only one flag was available:
PKT_RX_L4_CKSUM_BAD: L4 cksum of RX pkt. is not OK.
And same for L3:
PKT_RX_IP_CKSUM_BAD: IP cksum of RX pkt. is not OK.
This had 2 issues:
- it was not possible to differentiate "checksum good" from
"checksum unknown".
- it was not possible for a virtual driver to say "the checksum
in packet may be wrong, but data integrity is valid".
This patch tries to solve this issue by having 4 states (2 bits)
for the IP and L4 Rx checksums. New values are:
- PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
-> the application should verify the checksum by sw
- PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
-> the application can drop the packet without additional check
- PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
-> the application can accept the packet without verifying the
checksum by sw
- PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
data, but the integrity of the L4 data is verified.
-> the application can process the packet but must not verify the
checksum by sw. It has to take care to recalculate the cksum
if the packet is transmitted (either by sw or using tx offload)
And same for L3 (replace L4 by IP in description above).
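An application-side sketch of consuming the new 2-bit status
(PKT_RX_L4_CKSUM_MASK is assumed to be the companion mask covering
both bits):

    switch (m->ol_flags & PKT_RX_L4_CKSUM_MASK) {
    case PKT_RX_L4_CKSUM_UNKNOWN:
        /* verify the L4 checksum in software */
        break;
    case PKT_RX_L4_CKSUM_BAD:
        /* drop without further checks */
        break;
    case PKT_RX_L4_CKSUM_GOOD:
        /* accept without software verification */
        break;
    case PKT_RX_L4_CKSUM_NONE:
        /* data is valid; recompute the checksum before retransmitting */
        break;
    }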
This commit tries to be compatible with existing applications that
only check the existing flag (CKSUM_BAD).
[1] http://dpdk.org/ml/archives/dev/2016-May/039920.html
[2] http://dpdk.org/ml/archives/dev/2016-June/040007.html
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The commit that disabled TSO for small packets was broken during the
rebase. The problem is that the IP checksum is not calculated in
software if:
- TX IP checksum is disabled
- TSO is enabled
- the current packet is smaller than tso segment size
When checking whether the PKT_TX_IP_CKSUM flag should be set (in the
TSO case), use the local tso_segsz variable, which is set to 0 when the
packet is too small to require TSO. Therefore the IP checksum will be
correctly calculated in software.
Moreover, we should not use the tunnel segment size for non-tunnel TSO,
otherwise TSO stays disabled for all packets.
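A sketch of the corrected decision (variable names follow the csum
engine but are shown as assumptions; the HW IP checksum offload branch
is omitted):

    if (tso_segsz != 0) {
        /* TSO requested: HW fills in the IP checksum */
        ol_flags |= PKT_TX_IP_CKSUM;
    } else {
        /* small packet, no TSO: compute the IP checksum in software */
        ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
    }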
Fixes: 97c21329d4 ("app/testpmd: do not use TSO for small packets")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
When TSO is not requested, hide the segment size.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Asking for TSO (TCP Segmentation Offload) on packets that are already
smaller than (headers + MSS) does not work, for instance on ixgbe.
Fix the csumonly engine to only set the TSO flag when a segmentation
offload is really required, i.e. when the packet is large enough.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This information is useful when debugging, especially with
bidirectional traffic.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The csum forward engine was updated to change the IP addresses in the
packet data in
commit 51f694dd40 ("app/testpmd: rework checksum forward engine")
This was done to ensure that the checksum is correctly reprocessed when
using hardware checksum offload. But the functions
process_inner_cksums() and process_outer_cksums() already reset the
checksum field to 0, so this is not necessary.
Moreover, this makes the engine more complex than needed and prevents
it from being easily used to forward traffic (like iperf), since it
modifies the packets.
This patch drops this behavior.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>