It was impossible to include both netinet/in.h and rte_ip.h
because the IP protocols were redefined in rte_ip.h.
The duplicate definitions are removed because they are useless.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Ivan Boule <ivan.boule@6wind.com>
Since commit 8a387fa85f02 ("ethdev: more RSS flags") in DPDK 1.7,
the number of RSS flags has increased.
According to the definition of rss_hf in rte_eth_rss_conf, it must be of
uint64_t type. Using uint16_t truncates the value and causes incorrect
output. This patch fixes the issue.
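As an illustration (a sketch, not the exact patch; the helper name is
illustrative and the port_id type follows the ethdev API of this release),
keeping rss_hf in a 64-bit variable preserves the flag bits above bit 15
that a uint16_t copy would silently drop:

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_ethdev.h>

  /* Sketch: rss_hf must be handled as uint64_t; a uint16_t copy would
   * truncate the RSS flags added since DPDK 1.7. */
  static void
  show_port_rss_hf(uint8_t port_id)
  {
          struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };

          if (rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf) == 0)
                  printf("port %u rss_hf: 0x%" PRIx64 "\n",
                         (unsigned)port_id, rss_conf.rss_hf);
  }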
Signed-off-by: Jia Yu <jyu@vmware.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
CACHE_LINE_SIZE is a macro defined in machine/param.h in FreeBSD and
conflicts with the DPDK macro of the same name.
Add an RTE_ prefix to avoid the conflict.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
This patch fixes compilation problems on the IBM Power architecture and
turns on the test-pmd build option in the configuration file. It is
essentially a big-endian compilation fix.
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: David Marchand <david.marchand@6wind.com>
If the user specifies 'set verbose 1' on the testpmd command line,
the csum forward engine will dump some information about received
and transmitted packets, especially which flags are set and what
values are assigned to l2_len, l3_len, l4_len and tso_segsz.
This can help someone implementing TSO or hardware checksum offload to
understand how to configure the mbufs.
Example of output for one packet:
--------------
rx: l2_len=14 ethertype=800 l3_len=20 l4_proto=6 l4_len=20
tx: m->l2_len=14 m->l3_len=20 m->l4_len=20
tx: m->tso_segsz=800
tx: flags=PKT_TX_IP_CKSUM PKT_TX_TCP_SEG
--------------
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Add two new commands in testpmd:
- tso set <segsize> <portid>
- tso show <portid>
These commands can be used to enable TSO when transmitting TCP packets in
the csum forward engine. Ex:
set fwd csum
tx_checksum set ip hw 0
tso set 800 0
start
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Some of the NICs supported by DPDK can accelerate TCP traffic by using
segmentation offload. The application prepares a packet with a valid TCP
header, up to 64KB in size, and delegates the segmentation to the NIC.
Implement the generic part of TCP segmentation offload in rte_mbuf. It
introduces 2 new fields in rte_mbuf: l4_len (length of L4 header in bytes)
and tso_segsz (MSS of packets).
To delegate the TCP segmentation to the hardware, the user has to
(a minimal sketch follows this list):
- set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
PKT_TX_TCP_CKSUM)
- set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- set PKT_TX_IP_CKSUM if it's IPv4, and set the IP checksum to 0 in
the packet
- fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
- calculate the pseudo header checksum without taking ip_len into account,
and set it in the TCP header, for instance by using
rte_ipv4_phdr_cksum(ip_hdr, ol_flags)
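A minimal sketch of the sequence above for a contiguous IPv4/TCP packet
(the helper name prepare_tso() and the fixed header layout are
illustrative assumptions):

  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_tcp.h>
  #include <rte_mbuf.h>

  /* Sketch: prepare a contiguous IPv4/TCP mbuf for hardware TSO. */
  static void
  prepare_tso(struct rte_mbuf *m, uint16_t mss)
  {
          struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
          struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
          struct tcp_hdr *tcp = (struct tcp_hdr *)((char *)ip + sizeof(*ip));

          m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
          m->l2_len = sizeof(*eth);
          m->l3_len = sizeof(*ip);
          m->l4_len = (tcp->data_off & 0xf0) >> 2; /* TCP header length */
          m->tso_segsz = mss;

          ip->hdr_checksum = 0;            /* hardware will fill it in */
          /* pseudo header checksum that does not include ip_len */
          tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
  }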
The API is inspired by the ixgbe hardware (the next commit adds the
support for ixgbe), but it seems generic enough to be used for other
hw/drivers in the future.
This commit also reworks the way l2_len and l3_len are used in the igb
and ixgbe drivers, as the combined l2_l3_len field is no longer available
in the mbuf.
Signed-off-by: Mirek Walukiewicz <miroslaw.walukiewicz@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Introduce new functions to calculate checksums. These new functions
are derived from the ones provided in csumonly.c, slightly reworked.
There is still some room for future optimization of these functions
(maybe SSE/AVX, ...).
This API will be modified in the next commits by the introduction of
TSO that requires a different pseudo header checksum to be set in the
packet.
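For example, the helpers can be used along these lines (a sketch that
assumes a contiguous IPv4/UDP packet with no IP options):

  #include <rte_ip.h>
  #include <rte_udp.h>

  /* Sketch: compute the IP and UDP checksums in software with the
   * new rte_ip.h helpers. */
  static void
  sw_cksum_ipv4_udp(struct ipv4_hdr *ip, struct udp_hdr *udp)
  {
          ip->hdr_checksum = 0;
          ip->hdr_checksum = rte_ipv4_cksum(ip);

          udp->dgram_cksum = 0;
          /* covers the pseudo header plus UDP header and payload */
          udp->dgram_cksum = rte_ipv4_udptcp_cksum(ip, udp);
  }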
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The csum forward engine was becoming too complex to use and extend
(the next commits add support for TSO):
- no explanation about what the code does
- code is not factorized, lots of code duplicated, especially between
ipv4/ipv6
- user command line api: use of bitmasks that need to be calculated by
the user
- the user flags don't have the same semantics:
- for legacy IP/UDP/TCP/SCTP, they select software or hardware checksum
- for others (vxlan), they select between hardware checksum and no
checksum
- the code relies too much on flags set by the driver without a software
alternative (ex: PKT_RX_TUNNEL_IPV4_HDR). It is nice to be able to
compare a software implementation with the hardware offload.
This commit tries to fix these issues, and provide a simple definition
of what is done by the forward engine:
* Receive a burst of packets, and for supported packet types:
* - modify the IPs
* - reprocess the checksum in SW or HW, depending on testpmd command line
* configuration
* Then packets are transmitted on the output port.
*
* Supported packets are:
* Ether / (vlan) / IP|IP6 / UDP|TCP|SCTP .
* Ether / (vlan) / IP|IP6 / UDP / VxLAN / Ether / IP|IP6 / UDP|TCP|SCTP
*
* The network parser assumes that the packet is contiguous, which may
* not be the case in real life.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
In testpmd, the rte_port->tx_ol_flags field was used in 2 incompatible
ways:
- sometimes used with testpmd specific flags (0xff for checksums, and
bit 11 for vlan)
- sometimes assigned to m->ol_flags directly, which is wrong in the case
of checksum flags
This commit replaces the hardcoded values with named definitions, which
are not compatible with mbuf flags. The testpmd forward engines are
fixed to use the flags properly.
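A sketch of the idea, with illustrative names and values (the actual
definitions live in testpmd and may differ): the request bits are
deliberately distinct from the mbuf PKT_TX_* flags and are translated by
the forward engines.

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Illustrative testpmd-local offload request bits, not interchangeable
   * with mbuf ol_flags values. */
  #define TESTPMD_TX_OFFLOAD_IP_CKSUM    0x0001
  #define TESTPMD_TX_OFFLOAD_UDP_CKSUM   0x0002
  #define TESTPMD_TX_OFFLOAD_TCP_CKSUM   0x0004
  #define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0008

  /* Sketch: a forward engine turns request bits into real mbuf flags
   * (the L4 checksum flags are mutually exclusive). */
  static uint64_t
  tx_request_to_mbuf_flags(uint16_t tx_ol_flags)
  {
          uint64_t ol_flags = 0;

          if (tx_ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM)
                  ol_flags |= PKT_TX_IP_CKSUM;
          if (tx_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM)
                  ol_flags |= PKT_TX_UDP_CKSUM;
          else if (tx_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM)
                  ol_flags |= PKT_TX_TCP_CKSUM;
          return ol_flags;
  }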
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In test-pmd (rxonly.c), the code is able to dump the list of ol_flags.
The issue is that the list of flags in the application has to be
synchronized with the flags defined in rte_mbuf.h.
This patch introduces 2 new functions, rte_get_rx_ol_flag_name()
and rte_get_tx_ol_flag_name(), that return the name of a flag from
its mask. It also fixes rxonly.c to use these new functions and to
display the proper flags.
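Usage is straightforward; a minimal sketch that dumps the RX offload flag
names of a received mbuf (the helper name is illustrative):

  #include <stdio.h>
  #include <rte_mbuf.h>

  /* Sketch: print the name of every RX offload flag set in an mbuf. */
  static void
  dump_rx_ol_flags(const struct rte_mbuf *m)
  {
          unsigned bit;

          for (bit = 0; bit < 64; bit++) {
                  uint64_t mask = 1ULL << bit;
                  const char *name;

                  if ((m->ol_flags & mask) == 0)
                          continue;
                  name = rte_get_rx_ol_flag_name(mask);
                  printf("  %s\n", name != NULL ? name : "UNKNOWN");
          }
  }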
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
A test command is added to configure flexible payload.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
A test command is added to configure flexible mask.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
A test command is added to flush the flow director table.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Extended fdir info is printed in the rxonly fwd engine on an fdir match.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Commands are added to test adding or deleting flow director filters.
10 flow types in flow director are supported: ipv4, ipv4-frag, tcpv4, udpv4, sctpv4, ipv6, ipv6-frag, tcpv6, udpv6, sctpv6
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As the 40G NIC supports redirection table sizes (128/512/64 entries)
different from that (128 entries) of the 1G and 10G NICs, support for
multiple redirection table sizes is needed.
It includes:
* Redefine 'struct rte_eth_rss_reta' in ethdev.
- To 'struct rte_eth_rss_reta_entry64', which contains 64
entries and a 64-bit mask.
- An array of the new structure can be used for any number of
redirection table entries, as long as the number is a multiple
of 64. This is quite flexible for future expansion of the
redirection table (a usage sketch follows this list).
* Redefinition of relevant interfaces in ethdev.
- Interface of reta update has been redefined with new parameters.
- Interface of reta query has been redefined with new parameters.
* Rework of 1G PMD in igb.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 10G PMD in ixgbe.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 40G PMD (PF only) in i40e.
- reta update has been reworked.
- reta query has been reworked.
* Implement relevant commands in testpmd.
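A sketch of how an application might fill the new structure for a
128-entry table (the helper name and the queue spreading are illustrative
assumptions):

  #include <string.h>
  #include <rte_ethdev.h>

  /* Sketch: program a 128-entry redirection table, spreading the entries
   * across nb_rx_queues RX queues, using the new 64-entry groups. */
  static int
  setup_reta_128(uint8_t port_id, uint16_t nb_rx_queues)
  {
          struct rte_eth_rss_reta_entry64 reta_conf[2];
          uint16_t i, idx, shift;

          memset(reta_conf, 0, sizeof(reta_conf));
          for (i = 0; i < 128; i++) {
                  idx = i / RTE_RETA_GROUP_SIZE;
                  shift = i % RTE_RETA_GROUP_SIZE;
                  reta_conf[idx].mask |= 1ULL << shift;
                  reta_conf[idx].reta[shift] = i % nb_rx_queues;
          }
          return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
  }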
Test report: http://dpdk.org/ml/archives/dev/2014-November/008362.html
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Erlu Chen <erlu.chen@intel.com>
set link-up and set link-down were not included
in the help command.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
These references to drivers break the layering isolation between
application and drivers.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Add a test command in testpmd to test VF MAC filter feature.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add test cases in testpmd to test VxLAN Tx checksum offload, which include:
- IPv4 and IPv6 packets
- outer L3, inner L3 and L4 checksum offload on the Tx side.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add the "tunnel_filter" command in testpmd to test the API of VxLAN
packet filter.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add two commands to test VXLAN packet identification.
The test steps are as follows:
1> use the commands to add/delete the VxLAN UDP port.
2> use rxonly mode to receive VxLAN packets.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Print an error message to the user when trying to start/stop an rx/tx queue
if the function is not supported by the PMD. The patch does not check if
the return value is -EINVAL because testpmd is already validating the port and
queue id.
Signed-off-by: Nicolás Pernas Maradei <nico@emutex.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
To improve performance by using bulk alloc or vectored RX routines, we
need to set the rx free threshold (rxfreet) value to 32, so make this the
testpmd default.
Thirty-two is the minimum setting needed to enable either the
bulk alloc or vector RX routines inside the ixgbe driver, so it's
best made the default for that reason. Please see
"check_rx_burst_bulk_alloc_preconditions()" in ixgbe_rxtx.c, and
RX function assignment logic in "ixgbe_dev_rx_queue_setup()" in
the same file.
The difference in IO performance for testpmd, when run without any
optional parameters on 10G NICs using the ixgbe driver, can be
significant: approximately 25% or more.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add a new token to the "show port" command to dump the extended statistics
of a device. It validates the new xstats framework added in the previous
commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The offload flags field (ol_flags) was 16-bits and had no further room
for expansion. This patch increases the field size to 64-bits, using up
the remaining reserved space in the single-cache-line mbuf.
NOTE: none of the values for existing flags have been changed, i.e. no
new numbers have been explicitly reserved between existing flag
definitions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The mbuf structure already contains a pointer to the beginning of the
buffer (m->buf_addr). There is no need to spend another 8 bytes on a
second pointer to the beginning of the data.
A 16-bit unsigned integer is enough, as we know that an mbuf is never
longer than 64KB. This modification saves 6 bytes in the structure.
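In other words (a sketch, not the exact patch), the data start is now
derived from the buffer address and an offset, which is essentially what
rte_pktmbuf_mtod() expands to:

  #include <rte_mbuf.h>

  /* Sketch: the start of the packet data is computed from the buffer
   * address plus a 16-bit offset instead of being stored as a pointer. */
  static inline char *
  mbuf_data_start(struct rte_mbuf *m)
  {
          return (char *)m->buf_addr + m->data_off;
  }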
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated to apply to latest on mainline.
* Disabled vector PMD in the config as it relies heavily on the mbuf layout.
It will be re-enabled in a subsequent commit once the vPMD has been
reworked to take account of the mbuf changes.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The vlan_macip structure combined a vlan tag id with l2 and l3 headers
lengths for tracking offloads. However, this structure was only used as
a unit by the e1000 and ixgbe drivers, not generally.
This patch removes the structure from the mbuf header and places the
fields into the mbuf structure directly at the required point, without
any net effect on the structure layout. This allows us to treat the vlan
tags and header length fields as separate for future mbuf changes. The
drivers which were written to use the combined structure still do so,
using a driver-local definition of it.
Reduce the perf regression caused by splitting the vlan_macip field. This is
done by providing a single uint16_t value to allow writing/clearing
the l2 and l3 lengths together. There is still a small perf hit to the
slow path TX due to the reads from vlan_tci and l2/l3 lengths being
separated. (<5% in my tests with testpmd with no extra params).
Unfortunately, this cannot be eliminated without restoring the vlan
tags and l2/l3 lengths as a combined 32-bit field. This would prevent
us from ever looking to move those fields about and is an artificial tie
that applies only for performance in igb and ixgbe drivers. Therefore,
this patch keeps the vlan_tci field separate from the lengths as the
best solution going forward.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The rte_pktmbuf structure was initially included in the rte_mbuf
structure. This was needed when there were 2 types of mbuf (ctrl and
packet). As the control mbuf has been removed, we can merge the
rte_pktmbuf into the rte_mbuf structure.
Advantages of doing this:
- the access to mbuf fields is easier (ex: m->data instead of m->pkt.data)
- it makes the structure more consistent: for instance, there was no reason
to have the ol_flags field in rte_mbuf
- it will allow a deeper reorganization of the rte_mbuf structure in the
next commits, allowing several bytes to be gained in it
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
[Bruce: updated for latest code and new example apps]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The initial role of rte_ctrlmbuf is to carry generic messages (data
pointer + data length), but it's not used by the DPDK or its applications.
Keeping it implies:
- losing 1 byte in the rte_mbuf structure
- having some dead code in rte_mbuf.[ch]
This patch removes this feature. Thanks to it, it is now possible to
simplify the rte_mbuf structure by merging the rte_pktmbuf structure
into it. This is done in the next commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated patch to HEAD.
* Modified patch to retain the old function names for ctrl mbufs as
macros. This helps with app compatibility, and allows the concept
of a control mbuf to be reintroduced via a single-bit flag in
a future change.
* Updated the packet framework ip_pipeline example application to
work following this change.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
This crash was believed to be fixed by commit 5886ae07d211e4b5e49806dd183812,
but the actual issue is that the core ID provided to rte_lcore_to_socket_id()
is wrong. It must be looked up in fwd_lcores_cpuids[].
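In other words, the lookup becomes something like this (a sketch that
relies on the testpmd globals named above; the helper itself is
illustrative):

  #include <rte_lcore.h>

  /* Sketch: lc_id indexes the forwarding-lcore list, it is not a core ID,
   * so it must be translated through fwd_lcores_cpuids[] first. */
  static unsigned
  fwd_lcore_socket_id(unsigned lc_id)
  {
          return rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id]);
  }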
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
No need to test a build option multiple times in a Makefile.
Besides, such an option is needed by the associated app, so move it to the
top of the Makefile.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
The API provides functions to start/stop specific RX/TX queues (see 0748be2).
This change adds commands in testpmd to start/stop specific RX/TX queues.
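The commands follow the usual testpmd syntax; illustrative usage:

  port 0 rxq 1 stop
  port 0 rxq 1 start
  port 0 txq 1 stop
  port 0 txq 1 start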
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Reviewed-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Changchun Ouyang <changchun.ouyang@intel.com>
Reviewed-by: Huawei Xie <huawei.xie@intel.com>
We might want to change only one parameter rather than having to set all
possible parameters, so add "partial" commands.
These commands only change the specified parameter.
To avoid duplicating code all around, a single parser is kept. This parser
uses the .data parameter to select the right behavior.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Move parser after declarations to prepare rework in next commit.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Following commit 2d95b84aaacb3d2d0bd70367c0530d15e0cbb14e, rte_eth_fc_conf
struct contains an autoneg field that must be set by callers.
Add this parameter to testpmd.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The function rte_snprintf serves no useful purpose. It is the
same as snprintf() for all valid inputs. Deprecate it and
replace all uses in current code.
Leave the tests for the deprecated function in place.
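The replacement is mechanical; a sketch with an illustrative helper and
format string:

  #include <stdio.h>

  /* Sketch: rte_snprintf() calls are replaced one for one by snprintf(). */
  static void
  format_name(char *buf, size_t len, unsigned id)
  {
          /* was: rte_snprintf(buf, len, "object_%u", id); */
          snprintf(buf, len, "object_%u", id);
  }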
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
We usually use testpmd forwarding to demonstrate IO forwarding throughput.
For best throughput, specific parameters have to be passed to testpmd.
To make it easier to run, set them as default values.
Such parameters are the mbuf mempool cache size and the RX/TX threshold
registers.
MBCACHE: 250
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Yong Liu <yong.liu@intel.com>
Tested-by: Zhaochen Zhan <zhaochen.zhan@intel.com>
The vPMD RX routine doesn't accept a burst size of less than 32.
The vPMD is set to =y by default, while the default testpmd burst size
is 16, which causes nothing to be received if the burst size is not
assigned correctly.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Yong Liu <yong.liu@intel.com>
Tested-by: Zhaochen Zhan <zhaochen.zhan@intel.com>
i40e supports configurable 16-byte and 32-byte RX descriptors.
The driver type and the configuration need to be checked to determine
whether 16-byte or 32-byte RX descriptors are in use, so that the
different RX descriptor sizes can be read and displayed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>