Protocol header sequence checking is supported in the ethdev library, so
the application does not need to do it again.
Coverity issue: 381396
Fixes: 52e2e7edcf ("app/testpmd: add protocol-based buffer split")
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Some PMDs (e.g. hns3) can detect hardware or firmware errors. One error
recovery mode is to report the RTE_ETH_EVENT_INTR_RESET event and wait
for the application to invoke rte_eth_dev_reset() to recover the port.
However, this mode has the following weaknesses:
1) Due to different hardware and software designs, the recovery process of
some NIC ports requires multiple handshakes with the firmware and the PF
(when the port is a VF). It takes a long time to complete the entire
operation for one port. If multiple ports (for example, multiple VFs of a
PF) are reset at the same time, other VFs may fail to be reset, because
the reset processing is serial and the previous VFs must be processed
before the subsequent ones.
2) The impact on the application is significant: it must stop the working
queues, stop calling the Rx and Tx functions, call rte_eth_dev_reset(),
and then set everything up again.
This patch introduces a proactive error handling mode in which the PMD
tries to recover from the errors itself. In this process, the PMD sets the
data path pointers to dummy functions (which prevents crashes) and makes
sure the control path operations fail with return code -EBUSY.
Because the PMD recovers automatically, the application only notices that
the data flow is interrupted for a while and that the control APIs return
an error during this period.
To let the application sense the error happening and recovering, three
events are introduced:
1) RTE_ETH_EVENT_ERR_RECOVERING: notifies the application that an error
was detected and recovery is starting. Upon receiving the event, the
application should not invoke any control path APIs until it receives the
RTE_ETH_EVENT_RECOVERY_SUCCESS or RTE_ETH_EVENT_RECOVERY_FAILED event.
2) RTE_ETH_EVENT_RECOVERY_SUCCESS: notifies the application that recovery
from the error succeeded; the PMD has already re-configured the port, and
the effect is the same as a restart operation.
3) RTE_ETH_EVENT_RECOVERY_FAILED: notifies the application that recovery
from the error failed and the port is no longer usable. The application
should close the port, as sketched below.
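Below is a minimal sketch of how an application might subscribe to these
events; the callback name and the printed messages are illustrative, while
the event names and rte_eth_dev_callback_register() come from the ethdev
API.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
recovery_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                  void *cb_arg, void *ret_param)
{
    RTE_SET_USED(cb_arg);
    RTE_SET_USED(ret_param);
    switch (event) {
    case RTE_ETH_EVENT_ERR_RECOVERING:
        /* Stop issuing control path operations until recovery ends. */
        printf("port %u: error detected, recovery started\n", port_id);
        break;
    case RTE_ETH_EVENT_RECOVERY_SUCCESS:
        /* Port was re-configured by the PMD, resume normal usage. */
        printf("port %u: recovery succeeded\n", port_id);
        break;
    case RTE_ETH_EVENT_RECOVERY_FAILED:
        /* Port is not usable anymore, schedule a port close. */
        printf("port %u: recovery failed\n", port_id);
        break;
    default:
        break;
    }
    return 0;
}

static void
register_recovery_events(uint16_t port_id)
{
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_ERR_RECOVERING,
                                  recovery_event_cb, NULL);
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RECOVERY_SUCCESS,
                                  recovery_event_cb, NULL);
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RECOVERY_FAILED,
                                  recovery_event_cb, NULL);
}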
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Add command line parameter:
--rxhdrs=eth[,ipv4]
Sets the protocol_hdr of segments used to scatter packets on receive if
the split feature is engaged. It affects only the queues configured with
the BUFFER_SPLIT flag.
Add interactive mode command:
testpmd> set rxhdrs eth,ipv4,ipv4-udp
(the protocol sequence must be valid)
The protocol split feature is off by default. To enable protocol split,
you need:
1. Start testpmd with multiple mempools. E.g. --mbuf-size=2048,2048
2. Configure Rx queue with rx_offload buffer split on.
3. Set the protocol type of buffer split. E.g. set rxhdrs eth,eth-ipv4
(default protocols of testpmd : eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|
ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|grenat|inner-eth|
inner-ipv4|inner-ipv6|inner-ipv4-tcp|inner-ipv6-tcp|
inner-ipv4-udp|inner-ipv6-udp|inner-ipv4-sctp|inner-ipv6-sctp)
The above protocols can be configured in testpmd, but the configuration
only takes effect when it is supported by the specific PMD.
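For illustration only, a possible session combining the three steps above
(the runtime command used to enable the buffer_split Rx offload is an
assumption and may differ):
dpdk-testpmd -- -i --mbuf-size=2048,2048
testpmd> port stop all
testpmd> port config 0 rx_offload buffer_split on
testpmd> set rxhdrs eth,eth-ipv4
testpmd> port start all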
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
This patch extends the hairpin-mode command line option of the testpmd
application with the ability to configure whether an Rx/Tx hairpin queue
should use locked device memory or RTE memory.
For the purposes of this configuration, the following bits of the 32-bit
hairpin-mode value are reserved:
- Bit 8 - If set, then the force_memory flag will be set for the hairpin
Rx queue.
- Bit 9 - If set, then the force_memory flag will be set for the hairpin
Tx queue.
- Bits 12-15 - Memory options for hairpin Rx queue:
- Bit 12 - If set, then use_locked_device_memory will be set.
- Bit 13 - If set, then use_rte_memory will be set.
- Bit 14 - Reserved for future use.
- Bit 15 - Reserved for future use.
- Bits 16-19 - Memory options for hairpin Tx queue:
- Bit 16 - If set, then use_locked_device_memory will be set.
- Bit 17 - If set, then use_rte_memory will be set.
- Bit 18 - Reserved for future use.
- Bit 19 - Reserved for future use.
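As an illustrative combination (not from the original patch), forcing
locked device memory for both hairpin Rx and Tx queues sets bits 8, 9, 12
and 16: 0x100 + 0x200 + 0x1000 + 0x10000 = 0x11300, e.g.:
dpdk-testpmd -- -i --hairpinq=1 --hairpin-mode=0x11300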
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Remove deprecated fdir_conf from device configuration.
Assume that mode is equal to RTE_FDIR_MODE_NONE.
Add an internal Flow Director configuration copy in the ixgbe and txgbe
device private data, since flow API support requires it. Initialize the
mode to the first flow rule mode on rule validation or creation.
Since the Flow Director configuration data types are still used by some
drivers internally, move them from the public API to the ethdev driver
internal API.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Dongdong Liu <liudongdong3@huawei.com>
Those commands date back to the early stages of DPDK when only PCI
devices were supported.
At the time, developers may have used those commands to help in
debugging their buggy^Wwork in progress drivers.
Removing them, we can drop the dependency on the PCI bus and library and
make testpmd bus agnostic.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds help messages for multi-process.
--num-procs=N: set the total number of multi-process instances.
--proc-id=id: set the id of the current process from the multi-process
instances (0 <= id < num-procs).
Fixes: a550baf24a ("app/testpmd: support multi-process")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Add the macro MIN_TOTAL_NUM_MBUFS (1024) to indicate the minimum value
that total-num-mbufs must be bigger than.
Fixes: c87988187f ("app/testpmd: add --total-num-mbufs option")
Cc: stable@dpdk.org
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Acked-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Remove the unnecessary rte_atomic.h included in app modules.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
The 'proc_id' must be less than 'num_procs'; if not, exit testpmd and
show an error message.
Fixes: a550baf24a ("app/testpmd: support multi-process")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
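A few representative renames illustrating the pattern (old name on the
left, new name on the right):
ETH_MQ_RX_RSS             -> RTE_ETH_MQ_RX_RSS
ETH_RSS_IP                -> RTE_ETH_RSS_IP
ETH_LINK_FULL_DUPLEX      -> RTE_ETH_LINK_FULL_DUPLEX
DEV_RX_OFFLOAD_VLAN_STRIP -> RTE_ETH_RX_OFFLOAD_VLAN_STRIP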
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Adds "--rxq-share=X" parameter to enable shared RxQ.
Rx queue is shared if device supports, otherwise fallback to standard
RxQ.
Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
implies all ports join share group 1. Queue ID is mapped equally with
shared Rx queue ID.
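For example, a possible invocation that groups shared Rx queues per two
ports (values are illustrative):
dpdk-testpmd -- -i --rxq=4 --txq=4 --rxq-share=2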
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way; they store the
set values in different variables, which makes it hard to figure out
which one to use, and having two different methods for a related
functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
frame, while 'max_rx_pkt_len' is the size of the Ethernet frame. The
difference is the Ethernet frame overhead, which may differ from device
to device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
frames, which adds additional confusion, and some APIs and PMDs already
discard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as a parameter and save the result
in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is replaced by 'mtu', and it is always valid,
independent of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point both the application and the PMD use the MTU from this
variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
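A minimal sketch of the resulting usage, assuming one Rx/Tx queue and
default settings elsewhere; only rte_eth_dev_configure(), the rxmode.mtu
field and rte_eth_dev_set_mtu() are taken from the described API:

#include <string.h>
#include <rte_ethdev.h>

static int
configure_port_mtu(uint16_t port_id)
{
    struct rte_eth_conf conf;
    int ret;

    memset(&conf, 0, sizeof(conf));
    conf.rxmode.mtu = 9000; /* MTU requested at configure time */
    ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret != 0)
        return ret;
    /* The MTU can also be changed later; both paths update data->mtu. */
    return rte_eth_dev_set_mtu(port_id, 1500);
}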
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Add the 'display-xstats' option, to be used together with Rx/Tx
statistics (i.e. the 'stats-period' option or the 'show port stats'
interactive command), to display a specified list of extended statistics.
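For example, showing two extended statistics together with the periodic
basic statistics (the xstats names are assumed to exist on the port):
dpdk-testpmd -- --stats-period=1 --display-xstats=rx_errors,tx_errors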
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.
The flag MEMPOOL_F_NON_IO, added in this release, is simply renamed to
have the RTE_ prefix.
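A couple of representative renames following the same pattern
(illustrative, not an exhaustive list):
MEMPOOL_F_SP_PUT         -> RTE_MEMPOOL_F_SP_PUT
MEMPOOL_F_SC_GET         -> RTE_MEMPOOL_F_SC_GET
MEMPOOL_F_NO_IOVA_CONTIG -> RTE_MEMPOOL_F_NO_IOVA_CONTIG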
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
This patch adds multi-process support for testpmd.
For example the following commands run two testpmd
processes:
* the primary process:
./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=0
* the secondary process:
./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=1
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Make the number of flows in flowgen configurable by setting the
--flowgen-flows=N parameter.
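For example (values are illustrative):
dpdk-testpmd -- --forward-mode=flowgen --flowgen-flows=2048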
Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Running with stdout suppressed or redirected for further processing
is very confusing in the case of errors. Fix it by logging errors and
warnings to stderr.
Since the lines with log messages are touched anyway, concatenate split
format strings to make them easier to find with grep.
Fix the indentation of format string arguments.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
- Make printf format OS independent
- Replace htons with RTE_BE16
- Replace POSIX specific inet_aton with OS independent inet_pton
- Replace sleep with rte_delay_us_sleep
- Replace random with rte_rand
- #ifndef mman related code for now
- Fix header inclusion
- Include rte_os_shim.h in testpmd.h
- Remove redundant headers
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
The options help text included an incomplete and redundant summary of the
options before explaining each one. The summary is dropped.
The details of the option --hairpin-mode had an extra space,
breaking the alignment with the next line.
There were some mismatches between options in the usage text
sed -rn 's/.*\(" *--([a-z-]*)[=: ].*/\1/p' app/test-pmd/parameters.c
and the options declared in lgopts array
sed -rn 's/.*\{.*"(.*)",.*,.*,.*},.*/\1/p' app/test-pmd/parameters.c
The misses were:
--no-numa
--enable-scatter
--tx-ip
--tx-udp
--noisy-lkup-num-reads-writes
The option --ports was not implemented.
Fixes: 01817b10d2 ("app/testpmd: change hairpin queues setup")
Fixes: 3c156061b9 ("app/testpmd: add noisy neighbour forwarding mode")
Fixes: bf5b2126bf ("app/testpmd: add ability to set Tx IP and UDP parameters")
Fixes: 0499793854 ("app/testpmd: add scatter enabling option")
Fixes: 999b2ee0fe ("app/testpmd: enable NUMA support by default")
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Bing Zhao <bingz@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Some applications were printing useless messages with rte_exit()
after showing the help. Using exit() is enough in this case.
Some applications were using a redundant printf or fprintf() before
calling rte_exit(). The messages are unified in a single rte_exit().
Some rte_exit() calls were missing a line feed or returning a wrong code.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Add support for forced Ethernet speed setting.
Currently testpmd tries to configure the Ethernet port in autoneg mode.
It is not possible to set the Ethernet port to a specific speed while
starting testpmd. In some cases, the capability to configure a forced
speed for the Ethernet port during initialization may be necessary. This
patch adds that support.
The patch assumes a full-duplex setting and does not attempt to change
it, so speeds like 10M and 100M are not configurable using this method.
The command line to configure a forced speed of 10G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 10000
The command line to configure a forced speed of 50G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 50000
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When testing high performance numbers, it is often CPU performance that
limits the maximum values the device can reach (both in pps and in Gbps).
Here, instead of recreating each packet separately, we use a clones
counter to resend the same mbuf to the wire multiple times.
PMDs handle that transparently thanks to reference counting inside the
mbuf. This helps reach the maximum PPS with small packet sizes.
Some data from our 2 port x 50G device. Using 2*6 tx queues, 64b packets,
PowerEdge R7525, AMD EPYC 7452:
./build/app/dpdk-testpmd -l 32-63 -- --forward-mode=flowgen \
--rxq=6 --txq=6 --disable-crc-strip --burst=512 \
--flowgen-clones=0 --txd=4096 --stats-period=1 --txpkts=64
Gives ~46MPPS TX output:
Tx-pps: 22926849 Tx-bps: 11738590176
Tx-pps: 23642629 Tx-bps: 12105024112
Setting flowgen-clones to 512 pushes TX almost to our device's physical
limit (68MPPS) using the same 2*6 queues (cores):
Tx-pps: 34357556 Tx-bps: 17591073696
Tx-pps: 34353211 Tx-bps: 17588802640
Doing similar measurements per core, I see that one core can do 6.9MPPS
(without clones) vs 11MPPS (with clones).
Verified on Marvell qede and atlantic PMDs.
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When the max Rx packet length is smaller than the sum of the MTU size and
the Ethernet overhead size, it should be enlarged, otherwise VLAN packets
will be dropped.
Remove the rx_offloads assignment for jumbo frames during command-line
parsing, and set the correct jumbo frame flag within 'init_config()' if
the MTU size is larger than the default value 'RTE_ETHER_MTU'.
Fixes: 384161e006 ("app/testpmd: adjust on the fly VLAN configuration")
Fixes: 35b2d13fd6 ("net: add rte prefix to ether defines")
Fixes: ce17eddefc ("ethdev: introduce Rx queue offloads API")
Fixes: 150c9ac2df ("app/testpmd: update Rx offload after setting MTU")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently, the queue stats mapping has the following problems:
1) Many PMD drivers don't support queue stats mapping. But there is no
failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once queue mapping is set, unrelated and unmapped queues are also
displayed.
3) The configuration result does not take effect or can not be queried
in real time.
4) The mapping arrays, "tx_queue_stats_mappings_array" &
"rx_queue_stats_mappings_array" are global and their sizes are based
on fixed max port and queue size assumptions.
5) The record structures, 'map_port_queue_stats_mapping_registers()' and
its sub-functions are redundant for the majority of drivers.
6) The display of the queue stats and queue stats mapping is mixed
together.
Since xstats is used to obtain queue statistics, we have made the
following simplifications and adjustments:
1) If the PMD requires and supports queue stats mapping, configure the
driver in real time by calling the ethdev API after executing the command
"set stat_qmap rx/tx ...". If not, the command is not accepted (see the
example after this list).
2) Based on the above adjustments, these record structures,
'map_port_queue_stats_mapping_registers()' and its sub functions can
be removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping"
parameters, and 'parse_queue_stats_mapping_config()' can be removed
too.
3) Remove the display of the queue stats mapping in 'fwd_stats_display()'
& 'nic_stats_display()', and obtain the queue stats through xstats. Since
the record structures are removed, 'nic_stats_mapping_display()' can be
deleted.
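For example, with a PMD that supports queue stats mapping, the expected
flow looks like the following (queue and mapping values are illustrative;
the mapped counters are then read through xstats):
testpmd> set stat_qmap rx 0 2 2
testpmd> show port xstats 0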
Fixes: 4dccdc789b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b6 ("app/testpmd: various updates")
Fixes: ed30d9b691 ("app/testpmd: add stats per queue")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Replace master lcore with main lcore and
replace slave lcore with worker lcore.
Keep the old functions and macros but mark them as deprecated
for this release.
The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
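For example, both spellings of the EAL option (core numbers are
illustrative); the first one prints a deprecation warning:
dpdk-testpmd -l 1-3 --master-lcore 1 -- -i
dpdk-testpmd -l 1-3 --main-lcore 1 -- -i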
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Use the newer macros defined by meson in all DPDK source code, to ensure
there are no errors when the old non-standard macros are removed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Add command line parameter:
--rxoffs=X[,Y]
Sets the offsets of packet segments from the beginning of the receiving
buffer if the split feature is engaged. It affects only the queues
configured with split offloads (currently only BUFFER_SPLIT is
supported).
Add interactive mode command, providing the same:
testpmd> set rxoffs (x[,y]*)
Where x[,y]* represents a CSV list of values, without white space.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add command line parameter:
--rxpkts=X[,Y]
Sets the length of the segments to scatter packets into on receive if the
split feature is engaged. It affects only the queues configured with
split offloads (currently only BUFFER_SPLIT is supported).
Add interactive mode command:
testpmd> set rxpkts (x[,y]*)
Where x[,y]* represents a CSV list of values, without white space.
Optionally, multiple memory pools can be specified with the --mbuf-size
command line parameter, and the mbufs to receive will be allocated
sequentially from these extra memory pools.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The command line parameter --mbuf-size is updated so that it can handle
multiple values like the following:
--mbuf-size=2176,512,768,4096
specifying the creation of extra memory pools with the requested mbuf
data buffer sizes. If some buffer split feature is engaged, the extra
memory pools can be used to configure the Rx queues with
rte_eth_dev_rx_queue_setup_ex().
The extra pools are created with requested sizes, and pool names
are assigned with appended index: mbuf_pool_socket_%socket_%index.
Index zero is used to specify the first mandatory pool to maintain
compatibility with existing code.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A new parameter `hairpin-mode` is introduced to the testpmd command
line. Bitmask value is used to provide a more flexible configuration.
This parameter should be used when `hairpinq` is specified in the
command line.
Bit 0 in the LSB indicates the hairpin will use the loop mode. The
previous port Rx queue will be connected to the current port Tx
queue.
Bit 1 in the LSB indicates the hairpin will use pair port mode. The
even index port will be paired with the next odd index port. If the
total number of the probed ports is odd, then the last one will be
paired to itself.
If this byte is zero, then each port will be paired to itself.
Bit 0 takes a higher priority in the checking.
Bit 4 in the second byte indicates whether the hairpin will use the
explicit Tx flow mode.
E.g. in the command line: "--hairpinq=2 --hairpin-mode=0x11"
If not set, the default value zero will be used and the behavior will be
aligned with the previous single-port mode. If the ports belong to
different vendors' NICs, it is suggested to use the `self` hairpin mode
only.
Since hairpin configures the hardware resources, the port mask of the
packet forwarding engine will not be used here.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
IANA has assigned port 6081 as the fixed well-known destination port for
GENEVE. Nevertheless draft-ietf-nvo3-geneve-09 recommends that
implementations make this configurable. This commit enables specifying
any positive UDP destination port number for GENEVE protocol parsing.
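For illustration, assuming the testpmd option added here is named
--geneve-parsed-port, parsing GENEVE on UDP port 9081 would look like:
dpdk-testpmd -- -i --geneve-parsed-port=9081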
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support to set the RSS level from the ethdev config.
level-default requests the default behavior.
level-outer requests RSS to be performed on the outermost packet
encapsulation level.
level-inner requests RSS to be performed on the specified inner packet
encapsulation level, from outermost to innermost.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The bitrate library in DPDK is actually in a "bitratestats" directory,
so that is used by meson for the macro and library name.
Therefore, we need to update references to RTE_LIBRTE_BITRATE to
RTE_LIBRTE_BITRATESTATS in testpmd to have it found. Rather than
supporting both defines, since make is being removed, we can just
replace all instances of the former define with the latter.
To ensure testpmd links ok when this is done, we also need to add
bitratestats to the list of library dependencies.
Fixes: 5b9656b157 ("lib: build with meson")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The new 5-tuple swap engine swaps:
source and destination MAC addresses,
source and destination addresses in IPv4/IPv6,
source and destination ports in UDP/TCP.
The forwarding engine parses each layer and swaps it, and stops when the
next layer doesn't match.
The mentioned headers of ICMP/ARP/Multicast
packets will be swapped as well according to
matching layers.
usage: --forward-mode=5tswap
Signed-off-by: Shiri Kuzin <shirik@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
A new cmdline option `--rx-mq-mode` is added in order to make it possible
to check whether the PMD handles the mq mode correctly or not.
The reason is that some NICs need different settings based on the Rx mq
mode, i.e. RSS or not.
With this support in testpmd, the above scenario can be tested easily.
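For example, assuming the option takes a hexadecimal bitmask limiting the
allowed Rx multi-queue modes, RSS can be excluded even with multiple
queues:
dpdk-testpmd -- -i --rxq=4 --txq=4 --rx-mq-mode=0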
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Currently, there is no way to check the aging event or to get the
currently aged flows in testpmd. This patch includes those
implementations:
- Add a new item "flow_aged" to the current print event command arguments.
- Add a new command to list all aged flows; a parameter can also be set
to destroy them (see the example below).
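For example (the port number is illustrative), aged flows can be listed
and then destroyed with:
testpmd> flow aged 0
testpmd> flow aged 0 destroy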
Signed-off-by: Dong Zhou <dongz@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The EAL options and app-specific options are separated
with double dashes.
The help of testpmd, test-acl and pdump was missing the dashes after the
EAL options.
Note: testpmd was completely missing the EAL options.
Fixes: af75078fec ("first public release")
Fixes: 26c057ab6c ("acl: new test-acl application")
Fixes: b2854d5317 ("app/pdump: support multi-core capture")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In the current version, we are setting the ports using a portmask. With a
portmask, we can use only up to 64 ports. The new portlist option enables
the user to use more than 64 ports.
Now we can specify the ports in two different ways:
- Using portmask (-p [0x]nnn): mask must be in hex format
- Using portlist in the following format
--portlist <p1>[-p2][,p3[-p4],...]
--portmask 0x2 is same as --portlist 1
--portmask 0x3 is same as --portlist 0-1
Signed-off-by: Hariprasad Govindharajan <hariprasad.govindharajan@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
A new mbuf pool type is added to testpmd. To engage the mbuf pool with
externally attached data buffers, the parameter "--mp-alloc=xbuf" should
be specified on the testpmd command line.
The objective of this patch is just to test whether an mbuf pool with
externally attached data buffers works OK. The memory for the data
buffers is allocated from DPDK memory, so this is not "true" external
memory from some physical device (which is supposed to be the most common
use case for this kind of mbuf pool).
The user should be aware that not all drivers support mbufs with the
EXT_ATTACHED_BUF flag set in a newly allocated mbuf (many PMDs just
overwrite the ol_flags field and the flag value gets lost).
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
We currently do not check that a non-option string has been passed to
testpmd.
Example:
$ ./master/app/testpmd --no-huge -m 512 --vdev net_null0 \
--vdev net_null1 -- -i nb-cores=2 --total-num-mbuf 2048
[...]
testpmd> show config fwd
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Here nb-cores=2 is just ignored, while the (probably sleepy) user did not
notice this.
Validate that all strings passed to testpmd are part of a known option.
After this patch:
$ ./master/app/testpmd --no-huge -m 512 --vdev net_null0 \
--vdev net_null1 -- -i nb-cores=2 --total-num-mbuf 2048
[...]
Invalid parameter: nb-cores=2
EAL: Error - exiting with code: 1
Cause: Command line incorrect
While at it, when passing an unknown option, print the string that gets
refused by getopt_long to help the user.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch implements use of the API for the LRO aggregated packet
max size.
It adds command-line and runtime commands to configure this value, and
adds an option to show the supported value.
Documentation is updated accordingly.
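For illustration, assuming the command-line option added here is named
--max-lro-pkt-size, limiting LRO aggregated packets to 8192 bytes would
look like:
dpdk-testpmd -- -i --max-lro-pkt-size=8192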
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This commit introduces hairpin queues to testpmd.
A hairpin queue is configured using --hairpinq=<n>.
The hairpin queues add n queue objects to both the total number of Tx
queues and Rx queues.
The connection between the queues is 1 to 1: the first Rx hairpin queue
will be connected to the first Tx hairpin queue.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The testpmd --help option did not show all possible choices for port
topology previously. The loop topology option is now added.
Fixes: 3e2006d618 ("app/testpmd: add loopback topology")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>