Since we change these macros, we might as well avoid triggering complaints
from checkpatch because of mixed case.
old=RTE_IPv4
new=RTE_IPV4
git grep -lw $old | xargs sed -i -e "s/\<$old\>/$new/g"
old=RTE_ETHER_TYPE_IPv4
new=RTE_ETHER_TYPE_IPV4
git grep -lw $old | xargs sed -i -e "s/\<$old\>/$new/g"
old=RTE_ETHER_TYPE_IPv6
new=RTE_ETHER_TYPE_IPV6
git grep -lw $old | xargs sed -i -e "s/\<$old\>/$new/g"
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Add 'RTE_' prefix to defines:
- rename ETHER_ADDR_LEN as RTE_ETHER_ADDR_LEN.
- rename ETHER_TYPE_LEN as RTE_ETHER_TYPE_LEN.
- rename ETHER_CRC_LEN as RTE_ETHER_CRC_LEN.
- rename ETHER_HDR_LEN as RTE_ETHER_HDR_LEN.
- rename ETHER_MIN_LEN as RTE_ETHER_MIN_LEN.
- rename ETHER_MAX_LEN as RTE_ETHER_MAX_LEN.
- rename ETHER_MTU as RTE_ETHER_MTU.
- rename ETHER_MAX_VLAN_FRAME_LEN as RTE_ETHER_MAX_VLAN_FRAME_LEN.
- rename ETHER_MAX_VLAN_ID as RTE_ETHER_MAX_VLAN_ID.
- rename ETHER_MAX_JUMBO_FRAME_LEN as RTE_ETHER_MAX_JUMBO_FRAME_LEN.
- rename ETHER_MIN_MTU as RTE_ETHER_MIN_MTU.
- rename ETHER_LOCAL_ADMIN_ADDR as RTE_ETHER_LOCAL_ADMIN_ADDR.
- rename ETHER_GROUP_ADDR as RTE_ETHER_GROUP_ADDR.
- rename ETHER_TYPE_IPv4 as RTE_ETHER_TYPE_IPv4.
- rename ETHER_TYPE_IPv6 as RTE_ETHER_TYPE_IPv6.
- rename ETHER_TYPE_ARP as RTE_ETHER_TYPE_ARP.
- rename ETHER_TYPE_VLAN as RTE_ETHER_TYPE_VLAN.
- rename ETHER_TYPE_RARP as RTE_ETHER_TYPE_RARP.
- rename ETHER_TYPE_QINQ as RTE_ETHER_TYPE_QINQ.
- rename ETHER_TYPE_ETAG as RTE_ETHER_TYPE_ETAG.
- rename ETHER_TYPE_1588 as RTE_ETHER_TYPE_1588.
- rename ETHER_TYPE_SLOW as RTE_ETHER_TYPE_SLOW.
- rename ETHER_TYPE_TEB as RTE_ETHER_TYPE_TEB.
- rename ETHER_TYPE_LLDP as RTE_ETHER_TYPE_LLDP.
- rename ETHER_TYPE_MPLS as RTE_ETHER_TYPE_MPLS.
- rename ETHER_TYPE_MPLSM as RTE_ETHER_TYPE_MPLSM.
- rename ETHER_VXLAN_HLEN as RTE_ETHER_VXLAN_HLEN.
- rename ETHER_ADDR_FMT_SIZE as RTE_ETHER_ADDR_FMT_SIZE.
- rename VXLAN_GPE_TYPE_IPV4 as RTE_VXLAN_GPE_TYPE_IPV4.
- rename VXLAN_GPE_TYPE_IPV6 as RTE_VXLAN_GPE_TYPE_IPV6.
- rename VXLAN_GPE_TYPE_ETH as RTE_VXLAN_GPE_TYPE_ETH.
- rename VXLAN_GPE_TYPE_NSH as RTE_VXLAN_GPE_TYPE_NSH.
- rename VXLAN_GPE_TYPE_MPLS as RTE_VXLAN_GPE_TYPE_MPLS.
- rename VXLAN_GPE_TYPE_GBP as RTE_VXLAN_GPE_TYPE_GBP.
- rename VXLAN_GPE_TYPE_VBNG as RTE_VXLAN_GPE_TYPE_VBNG.
- rename ETHER_VXLAN_GPE_HLEN as RTE_ETHER_VXLAN_GPE_HLEN.
Do not update the command line library to avoid adding a dependency on
librte_net.
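To illustrate the scope of the change, a minimal sketch of a call site
after the rename (the helper below is hypothetical, not part of the
patch):

    #include <rte_ether.h>

    /* Hypothetical helper using the renamed RTE_ constants. */
    static inline int frame_is_jumbo(uint32_t frame_len)
    {
        return frame_len > RTE_ETHER_MAX_LEN &&
               frame_len <= RTE_ETHER_MAX_JUMBO_FRAME_LEN;
    }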
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add 'rte_' prefix to structures:
- rename struct ether_addr as struct rte_ether_addr.
- rename struct ether_hdr as struct rte_ether_hdr.
- rename struct vlan_hdr as struct rte_vlan_hdr.
- rename struct vxlan_hdr as struct rte_vxlan_hdr.
- rename struct vxlan_gpe_hdr as struct rte_vxlan_gpe_hdr.
Do not update the command line library to avoid adding a dependency on
librte_net.
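A minimal sketch of a call site after the rename (hypothetical helper;
the structure layouts themselves are unchanged):

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Only the type name changes: ether_hdr becomes rte_ether_hdr. */
    static inline struct rte_ether_hdr *
    pkt_eth_hdr(struct rte_mbuf *m)
    {
        return rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    }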
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
testpmd only sets the L4 len for TCP packets.
Some PMDs, like tap, rely on mbuf metadata to calculate the checksum.
This change sets the L4 len for UDP packets the same way as for TCP.
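A minimal sketch of the idea (helper and field names are illustrative,
not the exact testpmd code):

    #include <netinet/in.h>
    #include <rte_mbuf.h>
    #include <rte_udp.h>

    /* Give UDP the same L4 length metadata as TCP so a PMD computing
     * checksums from the mbuf (e.g. tap) has a complete description. */
    static void set_udp_l4_len(struct rte_mbuf *m, uint8_t l4_proto)
    {
        if (l4_proto == IPPROTO_UDP)
            m->l4_len = sizeof(struct rte_udp_hdr);
    }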
Fixes: 160c3dc9458c ("app/testpmd: introduce IP parsing functions in csum fwd engine")
CC: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Collect and print the statistics for PKT_RX_OUTER_L4_CKSUM_BAD
errors.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Add outer UDP Tx HW checksum support to the csum forward engine
when the device supports DEV_TX_OFFLOAD_OUTER_UDP_CKSUM.
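A hedged sketch of the idea (helper and variable names are
illustrative): the offload is requested only when the port advertises
the capability.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_udp.h>

    /* Delegate the outer UDP checksum to hardware when supported;
     * otherwise it stays a software job. */
    static void
    request_outer_udp_cksum(struct rte_mbuf *m,
                            struct rte_udp_hdr *outer_udp,
                            uint64_t tx_offloads)
    {
        if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
            outer_udp->dgram_cksum = 0;
            m->ol_flags |= PKT_TX_OUTER_UDP_CKSUM;
        }
    }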
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
This patch enables GSO for UDP/IPv4 packets. Oversized UDP/IPv4
packets transmitted over a GSO-enabled port will undergo segmentation.
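In terms of the GSO library this amounts to also requesting the UDP
type in the GSO context; a hedged sketch (the mempools and the real
segment size are configured elsewhere in testpmd):

    #include <rte_ethdev.h>
    #include <rte_gso.h>

    /* UDP/IPv4 GSO is selected through gso_types, next to TCP. */
    struct rte_gso_ctx gso_ctx = {
        .gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_UDP_TSO,
        .gso_size = 1400, /* example segment size */
    };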
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Tested-by: Yuwei Zhang <yuwei1.zhang@intel.com>
This patch is to accommodate an experimental feature of mbuf - external
buffer attachment. If an mbuf is attached to an external buffer, its
ol_flags will have EXT_ATTACHED_MBUF set. Without enabling/using the
feature, everything remains the same.
If PMD delivers Rx packets with non-direct mbuf, ol_flags should not be
overwritten. For mlx5 PMD, if Multi-Packet RQ is enabled, Rx packets could
be carried with externally attached mbufs.
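A hedged sketch of the idea in the forwarding loop (not the exact
diff): keep the attachment flag instead of overwriting ol_flags
wholesale.

    #include <rte_mbuf.h>

    /* Preserve EXT_ATTACHED_MBUF delivered by the PMD, drop the rest,
     * then add the Tx flags computed by the engine. */
    static inline void
    reset_tx_flags(struct rte_mbuf *m, uint64_t new_tx_flags)
    {
        m->ol_flags &= EXT_ATTACHED_MBUF;
        m->ol_flags |= new_tx_flags;
    }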
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch adds GRE checksum and sequence extension support, in addition
to the existing key extension, to the csum forwarding engine.
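For reference, the GRE flag bits involved and how each optional field
grows the header that the parser must skip (constant and helper names
are illustrative; flag values follow RFC 2784/2890 and assume the flags
word was already converted to host byte order):

    #include <stdint.h>

    #define GRE_CHECKSUM_PRESENT 0x8000
    #define GRE_KEY_PRESENT      0x2000
    #define GRE_SEQUENCE_PRESENT 0x1000
    #define GRE_EXT_LEN          4 /* each extension adds 4 bytes */

    static uint8_t gre_hdr_len(uint16_t flags)
    {
        uint8_t len = 4; /* base GRE header */

        if (flags & GRE_CHECKSUM_PRESENT)
            len += GRE_EXT_LEN;
        if (flags & GRE_KEY_PRESENT)
            len += GRE_EXT_LEN;
        if (flags & GRE_SEQUENCE_PRESENT)
            len += GRE_EXT_LEN;
        return len;
    }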
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add VXLAN-GPE support to csum forwarding engine and rte flow.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The tx_ol_flags field was used to control which Tx offloads were set.
After the conversion to the new ethdev Tx offloads API it is not needed
anymore, as the offload configuration is stored in the ethdev structs.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Instead of using USERx logs, register a dynamic log type, as introduced in
commit c1b5fa94a46f ("eal: support dynamic log types").
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The memzone header is often included without good reason.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds GSO support to the csum forwarding engine. Oversized
packets transmitted over a GSO-enabled port will undergo segmentation
(with the exception of packet-types unsupported by the GSO library).
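A minimal sketch of how the engine can apply GSO before transmit
(assuming an already configured rte_gso_ctx; the burst-array size and
helper name are illustrative):

    #include <rte_ethdev.h>
    #include <rte_gso.h>

    #define GSO_MAX_SEGS 64

    static void
    tx_with_gso(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *pkt,
                struct rte_gso_ctx *gso_ctx)
    {
        struct rte_mbuf *segs[GSO_MAX_SEGS];
        int nb_segs;

        /* Split an oversized packet into MSS-sized segments; packets
         * that do not need segmentation are returned as-is. */
        nb_segs = rte_gso_segment(pkt, gso_ctx, segs, GSO_MAX_SEGS);
        if (nb_segs > 0)
            rte_eth_tx_burst(port_id, queue_id, segs, nb_segs);
    }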
GSO support is disabled by default.
GSO support may be toggled on a per-port basis, using the command:
"set port <port_id> gso on|off"
The maximum packet length (including the packet header and payload) for
GSO segments may be set with the command:
"set gso segsz <length>"
Show GSO configuration for a given port with the command:
"show port <port_id> gso"
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The GRO library provides two modes to reassemble packets. Currently, the
csum forwarding engine supports the lightweight mode for reassembling
TCP/IPv4 packets. This patch introduces the heavyweight mode for
TCP/IPv4 GRO in the csum forwarding engine.
With the command "set port <port_id> gro on|off", users can enable
TCP/IPv4 GRO for a given port. With the command "set gro flush <cycles>",
users can determine when the GROed TCP/IPv4 packets are flushed from
reassembly tables. With the command "show port <port_id> gro", users can
display GRO configuration.
The GRO library doesn't re-calculate checksums for merged packets. If
users want the merged packets to have correct IP and TCP checksums,
please select HW IP checksum calculation and HW TCP checksum calculation
for the port to which the merged packets are transmitted.
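A hedged sketch of the heavyweight flow, close to what the engine does
(array sizes and variable names are illustrative):

    #include <rte_gro.h>
    #include <rte_lcore.h>

    #define MAX_PKT_BURST 32

    /* Called once: create the per-port reassembly tables. */
    static void *gro_ctx_setup(uint16_t max_flows, uint16_t max_items)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_TCP_IPV4,
            .max_flow_num = max_flows,
            .max_item_per_flow = max_items,
            .socket_id = rte_socket_id(),
        };

        return rte_gro_ctx_create(&param);
    }

    /* Called per burst: merge into the tables, then flush packets that
     * have aged past flush_cycles. */
    static uint16_t
    gro_heavyweight(void *gro_ctx, struct rte_mbuf **pkts, uint16_t nb_rx,
                    uint64_t flush_cycles)
    {
        nb_rx = rte_gro_reassemble(pkts, nb_rx, gro_ctx);
        nb_rx += rte_gro_timeout_flush(gro_ctx, flush_cycles,
                                       RTE_GRO_TCP_IPV4, &pkts[nb_rx],
                                       MAX_PKT_BURST - nb_rx);
        return nb_rx;
    }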
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
This patch enables the TCP/IPv4 GRO library in the csum forwarding
engine. By default, GRO is turned off. Users can use the command
"gro (on|off) (port_id)" to enable or disable GRO for a given port. If
GRO is enabled on a port, all TCP/IPv4 packets received from that port
undergo GRO. Besides, users can set the maximum flow number and the
maximum packet number per flow with the command
"gro set (max_flow_num) (max_item_num_per_flow) (port_id)".
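The lightweight mode boils down to one call per Rx burst; a hedged
sketch with illustrative names:

    #include <rte_gro.h>

    /* Merge TCP/IPv4 packets within a single burst; nothing is kept
     * across bursts in this mode. */
    static uint16_t
    gro_lightweight(struct rte_mbuf **pkts, uint16_t nb_rx,
                    uint16_t max_flow_num, uint16_t max_item_num_per_flow)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_TCP_IPV4,
            .max_flow_num = max_flow_num,
            .max_item_per_flow = max_item_num_per_flow,
        };

        return rte_gro_reassemble_burst(pkts, nb_rx, &param);
    }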
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
When an IPv6 packet is a tunnel packet, the "PKT_TX_OUTER_IPV6" flag
must be set so that the correct mbuf metadata is prepared for Tx
forwarding.
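A hedged sketch of the flag selection (flag names as in rte_mbuf.h,
helper name illustrative):

    #include <stdbool.h>
    #include <rte_mbuf.h>

    /* Mark the outer L3 family so Tx preparation builds the right
     * metadata for the tunneled packet. */
    static void set_outer_l3_flag(struct rte_mbuf *m, bool outer_is_ipv6)
    {
        m->ol_flags |= outer_is_ipv6 ? PKT_TX_OUTER_IPV6
                                     : PKT_TX_OUTER_IPV4;
    }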
Fixes: 2b76648872c9 ("net/e1000: add Tx preparation")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Change the size of m->port and m->nb_segs to 16 bits. It is now possible
to reference a port identifier larger than 256 and have a mbuf chain
larger than 256 segments.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Since all current drivers support the Tx preparation API, it is used
in the csum forwarding engine by default for all drivers.
Adding this additional step to the csum engine costs about a 3-4%
performance drop on my setup with the ixgbe driver. It is caused mostly
by the need to re-access and modify the packet data.
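A minimal sketch of the added step (helper name illustrative):

    #include <rte_ethdev.h>

    /* Validate and fix up offload metadata, then transmit what passed.
     * Packets from index nb_prep on failed preparation (rte_errno tells
     * why); a real engine would drop or fix them before retrying. */
    static uint16_t
    prepare_and_send(uint16_t port_id, uint16_t queue_id,
                     struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id,
                                              pkts, nb_pkts);

        return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
    }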
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add basic management functions for the generic flow API (validate, create,
destroy, flush, query and list). Flow rule objects and properties are
arranged in lists associated with each port.
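A hedged sketch of the call sequence behind those commands (the attr,
pattern and actions arrays are assumed to be built by the command
parser):

    #include <rte_flow.h>

    static struct rte_flow *
    create_rule(uint16_t port_id, const struct rte_flow_attr *attr,
                const struct rte_flow_item pattern[],
                const struct rte_flow_action actions[])
    {
        struct rte_flow_error err;

        /* "flow validate" maps roughly to rte_flow_validate()... */
        if (rte_flow_validate(port_id, attr, pattern, actions, &err) != 0)
            return NULL;

        /* ...and "flow create" to rte_flow_create(); the returned
         * handle is what "flow destroy" later passes to
         * rte_flow_destroy(). */
        return rte_flow_create(port_id, attr, pattern, actions, &err);
    }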
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
In the csumonly engine, display the LRO segment size if the LRO flag
is set.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Following discussions in [1] and [2], introduce a new bit to
describe the Rx checksum status in mbuf.
Before this patch, only one flag was available:
PKT_RX_L4_CKSUM_BAD: L4 cksum of RX pkt. is not OK.
And same for L3:
PKT_RX_IP_CKSUM_BAD: IP cksum of RX pkt. is not OK.
This had 2 issues:
- it was not possible to differentiate "checksum good" from
"checksum unknown".
- it was not possible for a virtual driver to say "the checksum
in packet may be wrong, but data integrity is valid".
This patch tries to solve these issues by having 4 states (2 bits)
for the IP and L4 Rx checksums. New values are:
- PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
-> the application should verify the checksum by sw
- PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
-> the application can drop the packet without additional check
- PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
-> the application can accept the packet without verifying the
checksum by sw
- PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
data, but the integrity of the L4 data is verified.
-> the application can process the packet but must not verify the
checksum by sw. It has to take care to recalculate the cksum
if the packet is transmitted (either by sw or using tx offload)
And same for L3 (replace L4 by IP in description above).
This commit tries to be compatible with existing applications that
only check the existing flag (CKSUM_BAD).
[1] http://dpdk.org/ml/archives/dev/2016-May/039920.html
[2] http://dpdk.org/ml/archives/dev/2016-June/040007.html
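A hedged sketch of how an application can read the new 2-bit status
(handler bodies are placeholders):

    #include <rte_mbuf.h>

    static void handle_l4_cksum_status(struct rte_mbuf *m)
    {
        switch (m->ol_flags & PKT_RX_L4_CKSUM_MASK) {
        case PKT_RX_L4_CKSUM_BAD:
            rte_pktmbuf_free(m); /* known bad: drop, no sw check */
            break;
        case PKT_RX_L4_CKSUM_UNKNOWN:
            /* no information: verify the checksum in software */
            break;
        case PKT_RX_L4_CKSUM_NONE:
            /* data is valid but the cksum field is not: process the
             * packet, recompute the cksum before re-sending */
            break;
        default: /* PKT_RX_L4_CKSUM_GOOD: accept as-is */
            break;
        }
    }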
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The commit that disabled TSO for small packets was broken during the
rebase. The problem is the IP checksum is not calculated in software if:
- TX IP checksum is disabled
- TSO is enabled
- the current packet is smaller than tso segment size
When checking if the PKT_TX_IP_CKSUM flag should be set (in case
of tso), use the local tso_segsz variable, which is set to 0 when the
packet is too small to require tso. Therefore the IP checksum will be
correctly calculated in software.
Moreover, we should not use tunnel segment size for non-tunnel tso, else
TSO will stay disabled for all packets.
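A hedged sketch of the corrected decision (current type and flag names
are used for illustration; the patch itself predates some of them):

    #include <rte_ethdev.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* Rely on the local tso_segsz, which is 0 when the packet is too
     * small for TSO, so the software fallback is not skipped. */
    static uint64_t
    ip_cksum_flags(struct rte_ipv4_hdr *ipv4_hdr, uint16_t tso_segsz,
                   uint64_t tx_offloads)
    {
        ipv4_hdr->hdr_checksum = 0;
        if (tso_segsz)
            return PKT_TX_IP_CKSUM; /* hw recomputes it per segment */
        if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
            return PKT_TX_IP_CKSUM;
        ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
        return 0;
    }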
Fixes: 97c21329d42b ("app/testpmd: do not use TSO for small packets")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
When TSO is not asked, hide the segment size.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Asking for TSO (TCP Segmentation Offload) on packets that are already
smaller than (headers + MSS) does not work, for instance on ixgbe.
Fix the csumonly engine to only set the TSO flag when a segmentation
offload is really required, i.e. when packet is large enough.
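In code terms the condition is roughly (a hedged sketch using the mbuf
metadata fields):

    #include <rte_mbuf.h>

    /* Request TSO only when the payload actually exceeds one MSS. */
    static void maybe_enable_tso(struct rte_mbuf *m, uint16_t tso_segsz)
    {
        if (tso_segsz > 0 &&
            m->pkt_len > m->l2_len + m->l3_len + m->l4_len + tso_segsz) {
            m->ol_flags |= PKT_TX_TCP_SEG;
            m->tso_segsz = tso_segsz;
        }
    }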
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This information is useful when debugging, especially with
bidirectional traffic.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The csum forward engine was updated to change the IP addresses in the
packet data in
commit 51f694dd40f5 ("app/testpmd: rework checksum forward engine")
This was done to ensure that the checksum is correctly reprocessed when
using hardware checksum offload. But the functions
process_inner_cksums() and process_outer_cksums() already reset the
checksum field to 0, so this is not necessary.
Moreover, this makes the engine more complex than needed, and prevents
it from being easily used to forward traffic (like iperf) since it
modifies the packets.
This patch drops this behavior.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Use the functions introduced in the previous commit to dump the offload
flags.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add a new command "tunnel_tso set <tso_segsz> <port>" to enable
segmentation offload and set MSS to tso_segsz. Another command,
"tunnel_tso show <port>" is added to show tunneled packet MSS.
Result 0 means tunnel_tso is disabled.
The original commands, "tso set <tso_segsz> <port>" and "tso show
<port>", are only responsible for non-tunneled packets, while the new
commands are for tunneled packets.
The following conditions are needed to make it work:
a. tunnel TSO is supported by the NIC;
b. "csum parse_tunnel" must be set so that tunneled packets are
recognized;
c. for tunneled packets whose outer L3 is IPv4, "csum set outer-ip"
must be set to hw, because after TSO the total_len of the outer IP
header changes, so a checksum calculated by software would be wrong;
this is not necessary for IPv6 tunneled packets because there is no
checksum field to fill (see the sketch below).
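Condition (c) translates into mbuf metadata roughly as follows (a
hedged sketch with current flag and type names; a VXLAN tunnel is
assumed for the example):

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* Tunnel TSO over an IPv4 outer header: the outer IP checksum must
     * be done in hw because total_length changes for every segment. */
    static void
    request_tunnel_tso(struct rte_mbuf *m, uint16_t tunnel_tso_segsz)
    {
        m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_TUNNEL_VXLAN |
                       PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM;
        m->tso_segsz = tunnel_tso_segsz;
        m->outer_l2_len = sizeof(struct rte_ether_hdr);
        m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
    }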
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds a retry option in testpmd to prevent most packet losses.
It can be enabled by "set fwd <mode> retry". All modes except rxonly
support this option.
Adding a retry mechanism expands test case coverage to support scenarios
where packet loss affects test results.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Prefetch the next packet's data address in advance in the forwarding
loop for better performance.
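A hedged sketch of the pattern in the forwarding loop (the per-packet
work is a placeholder):

    #include <rte_branch_prediction.h>
    #include <rte_mbuf.h>
    #include <rte_prefetch.h>

    static void fwd_burst(struct rte_mbuf **pkts, uint16_t nb_rx)
    {
        uint16_t i;

        for (i = 0; i < nb_rx; i++) {
            /* Warm the cache with the next packet's data while the
             * current one is being processed. */
            if (likely(i < nb_rx - 1))
                rte_prefetch0(rte_pktmbuf_mtod(pkts[i + 1], void *));
            /* per-packet processing of pkts[i] goes here */
        }
    }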
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
For CSUM forwarding mode, add the ability to copy and split the outgoing
packet into a new mbuf that consists of multiple segments.
For TXONLY and CSUM forwarding modes, add the ability to make the number
of segments in the outgoing packet vary on a per-packet basis.
The number of segments and the size of each segment are controlled by
the 'set txpkts' command.
The split policy is controlled by the 'set txsplit' command.
Possible values are: on | off | rand.
That allows increasing test coverage for Tx PMD code paths.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
For some Ethernet switches, like Intel RRC, all the packets forwarded
out by DPDK will be dropped on the switch side, so the packet generator
will never receive them.
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The extended unified packet type is now part of the standard ABI.
As mbuf struct is changed, the mbuf library version is incremented.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Not packing this causes -Wcast-align breakage on machines that are
strict on alignment. This patch fixes this bug.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fix trailing whitespace, space before tab and empty lines at end of file.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
[Thomas: fix indent and alignment in test_acl.h and test_sched.c]
Enhance the csum fwd engine based on the current Tx checksum framework
in order to test Tx checksum offload for NVGRE packets.
It includes:
- IPv4 and IPv6 packets
- outer L3, inner L3 and L4 checksum offload on the Tx side.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
The l4_len also has to be copied into the mbuf when we are offloading
the outer IP checksum. Currently, TSO + outer checksum is not supported
by any driver, but it will soon be supported by i40e.
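A hedged sketch of the metadata copied into the mbuf on this path
(the parsed-header struct and its field names are illustrative):

    /* With outer IP checksum offload (and soon TSO on top of it), the
     * inner l4_len must be carried in the mbuf as well. */
    m->outer_l2_len = info.outer_l2_len;
    m->outer_l3_len = info.outer_l3_len;
    m->l2_len = info.l2_len;
    m->l3_len = info.l3_len;
    m->l4_len = info.l4_len; /* previously missing on this path */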
Reported-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>