Commit Graph

242 Commits

Huisong Li
e46372d7b0 app/testpmd: fix port status of bonding slave device
Starting or stopping a bonded port also starts or stops all active slaves
under the bonded port. If this port is a bonded device, we need to modify
the port status of all slaves.

Fixes: 0e545d3047 ("app/testpmd: check stopping port is not in bonding")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
2022-05-19 09:10:23 +02:00
Alexander Kozyrev
d906fff518 app/testpmd: add async indirect actions operations
Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing these operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9
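
A minimal C sketch of the corresponding call, assuming the flow
queues were set up with rte_flow_configure() beforehand (an indirect
counter is used instead of RSS for brevity; error handling omitted):

  #include <rte_flow.h>

  static struct rte_flow_action_handle *
  enqueue_indirect_create(uint16_t port_id, uint32_t queue_id)
  {
      /* postpone = 1 defers the operation until a push. */
      const struct rte_flow_op_attr op_attr = { .postpone = 1 };
      const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
      const struct rte_flow_action action = {
          .type = RTE_FLOW_ACTION_TYPE_COUNT,
      };
      struct rte_flow_error error;

      return rte_flow_async_action_handle_create(port_id, queue_id,
              &op_attr, &conf, &action, NULL /* user_data */, &error);
  }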

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
f9bf7dff5d app/testpmd: add flow queue pull operation
Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling the results of
enqueued operations.
Usage example: flow pull 0 queue 0
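
A minimal C sketch of the underlying call (the result-array size is
an arbitrary choice here):

  #include <rte_common.h>
  #include <rte_flow.h>

  /* Drain up to 32 completed operations from flow queue 0. */
  static int
  pull_results(uint16_t port_id)
  {
      struct rte_flow_op_result results[32];
      struct rte_flow_error error;
      int n, i;

      n = rte_flow_pull(port_id, 0 /* queue */, results,
                        RTE_DIM(results), &error);
      for (i = 0; i < n; i++)
          if (results[i].status != RTE_FLOW_OP_SUCCESS)
              return -1; /* results[i].user_data identifies the op */
      return n;
  }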

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
9cbbee1451 app/testpmd: add flow queue push operation
Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing enqueued operations
to the hardware.
Usage example: flow queue 0 push 0
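
A minimal C sketch of the underlying call:

  #include <rte_flow.h>

  /* Flush all postponed operations on flow queue 0 to the hardware. */
  static int
  push_queue(uint16_t port_id)
  {
      struct rte_flow_error error;

      return rte_flow_push(port_id, 0 /* queue */, &error);
  }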

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
ecdc927b99 app/testpmd: add async flow create/destroy operations
Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0
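
The API referenced here landed in rte_flow.h under the
rte_flow_async_* names; a minimal C sketch (template table and
templates created beforehand, error handling omitted):

  #include <rte_flow.h>

  /* Enqueue creation of a drop-all rule into a template table. */
  static struct rte_flow *
  enqueue_flow_create(uint16_t port_id,
                      struct rte_flow_template_table *table)
  {
      const struct rte_flow_op_attr op_attr = { .postpone = 0 };
      const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      const struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_DROP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };
      struct rte_flow_error error;

      return rte_flow_async_create(port_id, 0 /* queue */, &op_attr,
              table, pattern, 0 /* pattern template index */,
              actions, 0 /* actions template index */,
              NULL /* user_data */, &error);
  }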

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
c4b3887334 app/testpmd: add flow table management
Add testpmd support for the rte_flow_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6
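
In the released rte_flow.h this is rte_flow_template_table_create();
a minimal sketch mirroring the example above (templates created
beforehand):

  #include <rte_flow.h>

  static struct rte_flow_template_table *
  create_table(uint16_t port_id,
               struct rte_flow_pattern_template *pt,
               struct rte_flow_actions_template *at)
  {
      const struct rte_flow_template_table_attr attr = {
          .flow_attr = { .group = 9, .priority = 4, .ingress = 1 },
          .nb_flows = 64, /* "rules_number 64" above */
      };
      struct rte_flow_error error;

      return rte_flow_template_table_create(port_id, &attr,
                                            &pt, 1, &at, 1, &error);
  }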

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
04cc665fab app/testpmd: add flow template management
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2
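
A minimal C sketch of the two creation calls, with names from
rte_flow.h (a plain Ethernet pattern and a drop action, in the
spirit of the example above; error handling omitted):

  #include <rte_flow.h>

  static void
  create_templates(uint16_t port_id)
  {
      const struct rte_flow_pattern_template_attr pattr = {
          .ingress = 1,
      };
      const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      const struct rte_flow_actions_template_attr aattr = {
          .ingress = 1,
      };
      const struct rte_flow_action drop[] = {
          { .type = RTE_FLOW_ACTION_TYPE_DROP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };
      struct rte_flow_error error;

      /* The mask array reuses the actions: all fields are relevant. */
      rte_flow_pattern_template_create(port_id, &pattr, pattern, &error);
      rte_flow_actions_template_create(port_id, &aattr, drop, drop,
                                       &error);
  }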

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
9ad3a41ab2 app/testpmd: add flow engine configuration
Add testpmd support for the rte_flow_configure API.
Provide the command line interface for the flow engine configuration.
Usage example: flow configure 0 queues_number 8 queues_size 256

Also support the rte_flow_info_get API to query available resources:
Usage example: flow info 0
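
A minimal C sketch mirroring both commands above (port attributes
left at defaults; error handling omitted):

  #include <rte_flow.h>

  /* "flow configure 0 queues_number 8 queues_size 256" equivalent. */
  static int
  configure_flow_engine(uint16_t port_id)
  {
      struct rte_flow_port_info port_info;
      struct rte_flow_queue_info queue_info;
      const struct rte_flow_port_attr port_attr = { 0 };
      const struct rte_flow_queue_attr qattr = { .size = 256 };
      const struct rte_flow_queue_attr *queue_attr[8];
      struct rte_flow_error error;
      int i;

      /* "flow info 0": query the available resources first. */
      if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
          return -1;
      for (i = 0; i < 8; i++)
          queue_attr[i] = &qattr;
      return rte_flow_configure(port_id, &port_attr, 8, queue_attr,
                                &error);
  }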

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Viacheslav Ovsiienko
9fac5ca8ed app/testpmd: fix Tx scheduling interval
The first "set txtimes" command parameter specifies the time
interval between scheduled send bursts for single queue. This
interval should be the same for all the forwarding ports.
It requires to maintain the timing related variables on per
queue basis instead of per core, as currently implemented.
This resulted in wrong burst intervals if two or more cores
were generating the scheduled traffic for two or more ports
in txonly mode.

This patch moves the timing variable to the fstream structure.
Only txonly forwarding mode with enabled send scheduling is
affected.

Fixes: 4940344dab ("app/testpmd: add Tx scheduling command")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2022-01-14 18:02:30 +01:00
Ivan Malov
2490bb8971 app/testpmd: fix flow transfer proxy port handling
The current approach detects the proxy port on each port (re-)plug and
may spam the log with error messages if the PMD does not support flows.
As testpmd is a debug tool, it must not do such implicit port handling.
Instead, the new API should be called only when the user requests that.

Revoke the existing code. Implement an explicit command-line primitive
to let the user find the proxy port themselves. Provide relevant hints.

Fixes: 1179f05cc9 ("ethdev: query proxy port to manage transfer flows")

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-11-17 11:26:27 +01:00
David Marchand
6970401e97 build: make GRO/GSO libraries optional
GRO and GSO integration in testpmd is relatively self-contained and
easy to extract.
Those libraries can be made optional as they provide standalone
features.
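
Dependent code can then be guarded with the build-time macros meson
generates for enabled libraries (assuming the usual RTE_LIB_<NAME>
naming); a sketch:

  #ifdef RTE_LIB_GRO
  #include <rte_gro.h>
  #endif

  static int
  gro_available(void)
  {
  #ifdef RTE_LIB_GRO
      return 1;  /* GRO-specific setup may run */
  #else
      return 0;  /* library disabled at build time */
  #endif
  }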

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-11-17 12:48:22 +01:00
Ferruh Yigit
295968d174 ethdev: add namespace
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
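
For illustration, a typical rename in application code looks like
this (the old names remain available as compatibility macros for
now):

  #include <rte_ethdev.h>

  static void
  fill_conf(struct rte_eth_conf *conf)
  {
      /* Previously: ETH_MQ_RX_RSS and ETH_RSS_IP. */
      conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
      conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
  }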

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-22 18:15:38 +02:00
Eli Britstein
6a8b64fd5e app/testpmd: fix packet burst spreading stats
The Rx/Tx functions (rte_eth_rx_burst/rte_eth_tx_burst) take an
'nb_pkts' argument, which specifies the maximum number of packets to
receive/transmit. The actual burst size can be 0..nb_pkts, meaning
nb_pkts+1 possible values. Testpmd can provide statistics of the
burst sizes ('set record-burst-stats on') by incrementing an array
cell of index <burst-size>. This array is mistakenly sized
[MAX_PKT_BURST], so receiving a burst of the maximum size causes an
out-of-bounds write. Enlarge the spread stats array by one cell to
fix it, as sketched below.
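
A sketch of the fix; the struct and field names follow testpmd but
are illustrative here:

  #include <stdint.h>

  #define MAX_PKT_BURST 512

  struct pkt_burst_stats {
      /* One cell per possible burst size 0..MAX_PKT_BURST. */
      unsigned int pkt_burst_spread[MAX_PKT_BURST + 1];
  };

  static inline void
  record_burst(struct pkt_burst_stats *stats, uint16_t nb_rx)
  {
      /* nb_rx may legitimately equal MAX_PKT_BURST. */
      stats->pkt_burst_spread[nb_rx]++;
  }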

Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-22 04:23:15 +02:00
Xueming Li
5984037501 app/testpmd: add forwarding engine for shared Rx queue
To support shared Rx queues, this patch introduces a dedicated
forwarding engine. The engine groups received packets into
sub-groups by mbuf->port, updates stream statistics and simply frees
the packets.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
2021-10-22 00:09:19 +02:00
Xueming Li
6574483365 app/testpmd: force shared Rx queue polled on same core
A shared Rx queue must be polled on the same core. This patch checks
this constraint and stops forwarding if a shared RxQ is scheduled on
multiple cores.

It's suggested to use the same number of Rx queues and polling cores.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-10-22 00:09:15 +02:00
Xueming Li
f4d178c13b app/testpmd: add parameter for shared Rx queue
Adds "--rxq-share=X" parameter to enable shared RxQ.

The Rx queue is shared if the device supports it; otherwise it falls
back to a standard RxQ.

Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
which implies that all ports join share group 1. The queue ID is
mapped equally with the shared Rx queue ID.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-22 00:09:07 +02:00
Gregory Etelson
59f3a8acbc app/testpmd: add flex item commands
Network port hardware ships with a fixed number of supported
network protocols. If an application must work with a protocol that
is not included in the port hardware by default, it can try to add
the new protocol to the port hardware.

The flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and to
offload flows that match the header elements.

An application must complete the following tasks to create a flow
rule that matches a custom header:

1. Create a flex item object in port hardware.
The application must provide the custom header configuration to the
PMD. The PMD uses that configuration to create a flex item object in
port hardware.

2. Create flex patterns to match. A flex pattern has spec and mask
components, like a regular flow item. Combined together, spec and mask
can target a unique data sequence or a number of data sequences in the
custom header.
Flex patterns of the same flex item can have different lengths.
A flex pattern is identified by a unique handler value.

3. Create a flow rule with a flex flow item that references a flex
pattern.

Testpmd flex CLI commands are:

testpmd> flow flex_item create <port> <flex_id> <filename>

testpmd> set flex_pattern <pattern_id> \
         spec <spec data> mask <mask data>

testpmd> set flex_pattern <pattern_id> is <spec_data>

testpmd> flow create <port> ... \
/ flex item is <flex_id> pattern is <pattern_id> / ...

The patch works with the jansson library API.
A new optional dependency on the jansson library is added for
testpmd. If jansson is not detected, the flex item functionality
is disabled.
The jansson development files must be present:
jansson.pc, jansson.h, libjansson.[a,so]

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-20 19:00:26 +02:00
Gregory Etelson
2566c33c71 app/testpmd: add flow command parsing routine
testpmd flow creation is constructed from these procedures:
  1. receive a string with the flow rule description;
  2. parse the input string and build the flow parameters: port_id
     value, flow attributes, items array, actions array;
  3. create a flow rule from the flow rule parameters.

Flow rule creation procedures are built as a pipeline. A new
procedure starts immediately after its predecessor completes
successfully. Because of this, there were no dedicated routines
providing the intermediate results of steps 1-3 above.

The patch adds a `flow_parse()` function call. It parses the input
string and provides the caller with the parsed data. This is a
preparation step for introducing flex item command processing.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-20 18:58:56 +02:00
Ferruh Yigit
b563c14212 ethdev: remove jumbo offload flag
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.

Instead of drivers announcing this capability, applications can
deduce it by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.

And instead of the application setting this flag explicitly to
enable jumbo frames, the driver can deduce it by comparing the
requested 'mtu' to 'RTE_ETHER_MTU', as sketched below.

This additional configuration is removed for simplification.
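
A sketch of the deduction on both sides (an illustrative helper, not
the ethdev code itself):

  #include <rte_ethdev.h>
  #include <rte_ether.h>

  static int
  mtu_is_jumbo(uint16_t port_id, uint16_t mtu)
  {
      struct rte_eth_dev_info info;

      if (rte_eth_dev_info_get(port_id, &info) != 0)
          return -1;
      if (mtu > info.max_mtu)
          return -1;              /* not supported by the device */
      return mtu > RTE_ETHER_MTU; /* jumbo handling needed */
  }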

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2021-10-18 19:20:21 +02:00
Ferruh Yigit
1bb4a528c4 ethdev: fix max Rx packet length
There is confusion about setting the max Rx packet length; this
patch aims to clarify it.

The 'rte_eth_dev_configure()' API accepts the max Rx packet size via
the 'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.

Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU,
and the result is stored in '(struct rte_eth_dev)->data->mtu'.

These two APIs are related but work in a disconnected way; they
store the set values in different variables, which makes it hard to
figure out which one to use. Also, having two different methods for
a related functionality is confusing for users.

Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
  frame, while 'max_rx_pkt_len' is the size of the Ethernet frame.
  The difference is the Ethernet frame overhead, and this overhead
  may vary from device to device based on what the device supports,
  like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds additional confusion, and some APIs and PMDs
  already disregard this documented behavior.
* For the jumbo-frame-enabled case, 'max_rx_pkt_len' is a mandatory
  field, which adds configuration complexity for the application.

As a solution, both APIs get the MTU as a parameter, and both save
the result in the same variable, '(struct rte_eth_dev)->data->mtu'.
For this, 'max_rx_pkt_len' is renamed to 'mtu', and it is always
valid, independent of jumbo frames.

For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is
the user request; it should be used only within the configure
function, and the result should be stored in
'(struct rte_eth_dev)->data->mtu'. After that point both the
application and the PMD use the MTU from this variable.

When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.

Additional clarification is done on the scattered Rx configuration,
in relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where received packets are stored, and
many PMDs use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device,
an MTU bigger than the Rx buffer size should fail.
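
A sketch of the resulting application-side usage (queue counts are
arbitrary; error handling omitted):

  #include <rte_ethdev.h>

  static int
  set_mtu_both_ways(uint16_t port_id, uint16_t mtu)
  {
      struct rte_eth_conf conf = { 0 };

      conf.rxmode.mtu = mtu; /* replaces max_rx_pkt_len */
      if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
          return -1;
      /* The runtime path stores into the same data->mtu variable: */
      return rte_eth_dev_set_mtu(port_id, mtu);
  }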

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-10-18 19:20:20 +02:00
Jie Wang
655eae01f9 app/testpmd: fix RSS hash offload display
The driver may change the RSS hash offloads in dev->data->dev_conf
during dev_configure, which may cause port->dev_conf and
port->rx_conf to contain outdated values.
Since testpmd uses its own configuration structures to display the
offload configuration, it doesn't display the RSS hash offload.

This patch updates the testpmd offloads from the device
configuration to fix this issue.

Fixes: ce8d561418 ("app/testpmd: add port configuration settings")

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-15 13:27:05 +02:00
Ivan Ilchenko
63b7265717 app/testpmd: add option to display extended statistics
Add a 'display-xstats' option for use together with Rx/Tx statistics
(i.e. the 'stats-period' option or the 'show port stats' interactive
command) to display a specified list of extended statistics.

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-14 14:40:51 +02:00
Ivan Malov
1179f05cc9 ethdev: query proxy port to manage transfer flows
Not all DPDK ports in a given switching domain may have the
privilege to manage "transfer" flows. Add an API to find a
port with sufficient privileges by any port in the domain.
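
A minimal C sketch of the new call:

  #include <rte_flow.h>

  /* Find the port privileged to manage transfer flows on behalf of
   * port_id within the same switching domain. */
  static int
  find_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id)
  {
      struct rte_flow_error error;

      return rte_flow_pick_transfer_proxy(port_id, proxy_port_id,
                                          &error);
  }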

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
2021-10-14 13:42:59 +02:00
Alvin Zhang
a78040c990 app/testpmd: update forward engine beginning
For each forward engine, there may be some special conditions that
must be met before the forwarding runs.

Adding checks for these conditions at configuration time is not
suitable, because one condition may rely on multiple configurations,
and the conditions required by each forward engine are not general.

The best solution is for each forward engine to have a callback that
checks whether these conditions are met, so that testpmd can call
the callback to determine whether the forwarding can be started.

There was a void callback 'port_fwd_begin' in the forward engine
that did some initialization for forwarding. This patch updates its
return value so that checks can be added to it to confirm whether
the forwarding can be started, as sketched below. In addition, this
patch calls the callback before the forwarding stats are reset and
the forwarding engine is launched.
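
A sketch of the changed callback shape (the typedef and names follow
testpmd but are illustrative here):

  #include <stdint.h>

  typedef uint16_t portid_t;

  /* Was: typedef void (*port_fwd_begin_t)(portid_t pi);
   * Now the engine can refuse to start: */
  typedef int (*port_fwd_begin_t)(portid_t pi);

  static int
  example_fwd_begin(portid_t pi)
  {
      (void)pi;
      /* Check engine-specific preconditions here. */
      return 0; /* non-zero aborts the start of forwarding */
  }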

Bugzilla ID: 797
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-10-08 18:57:48 +02:00
Min Hu (Connor)
a550baf24a app/testpmd: support multi-process
This patch adds multi-process support for testpmd.
For example, the following commands run two testpmd
processes:

 * the primary process:

./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
   --rxq=4 --txq=4 --num-procs=2 --proc-id=0

 * the secondary process:

./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
   --rxq=4 --txq=4 --num-procs=2 --proc-id=1

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
2021-09-07 15:29:03 +02:00
Zhihong Wang
861e768459 app/testpmd: add option for number of flows in flowgen
Make the number of flows in flowgen configurable via the parameter
--flowgen-flows=N.

Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-08-31 17:14:03 +02:00
Andrew Rybchenko
61a3b0e5e7 app/testpmd: send failure logs to stderr
Running with stdout suppressed or redirected for further processing
is very confusing in the case of errors. Fix it by logging errors
and warnings to stderr, as sketched below.

Since the lines with log messages are touched anyway, concatenate
the split format strings to make them easier to search with grep.

Also fix the indentation of format string arguments.
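
A sketch of the pattern applied throughout (the message text is
illustrative):

  #include <stdio.h>
  #include <stdint.h>

  static void
  report_port_failure(uint16_t port_id)
  {
      /* Errors go to stderr so they survive stdout redirection;
       * the format string is kept on one line for grep. */
      fprintf(stderr, "Fail to start port %u\n", port_id);
  }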

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-07-24 15:12:57 +02:00
Jie Zhou
761f7ae130 app/testpmd: replace POSIX-specific code
- Make printf format OS independent
- Replace htons with RTE_BE16
- Replace POSIX specific inet_aton with OS independent inet_pton
- Replace sleep with rte_delay_us_sleep
- Replace random with rte_rand
- #ifndef mman related code for now
- Fix header inclusion
- Include rte_os_shim.h in testpmd.h
- Remove redundant headers
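
A sketch of the flavor of these replacements (see the list above):

  #include <stdint.h>
  #include <rte_byteorder.h>
  #include <rte_cycles.h>
  #include <rte_random.h>

  static void
  portability_examples(void)
  {
      rte_delay_us_sleep(1000000);      /* instead of sleep(1) */
      uint64_t r = rte_rand();          /* instead of random() */
      rte_be16_t t = RTE_BE16(0x0800);  /* instead of htons(0x0800) */
      (void)r; (void)t;
  }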

Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
2021-07-02 19:03:03 +02:00
Jie Zhou
ce0a4a1d5d app/testpmd: fix type of FEC mode parsing output
Passing a uint32_t pointer to an enum pointer parameter causes a
pointer-sign warning on Windows (conversion between pointers to
integer types with different sign), since an enum is implicitly
converted to int on Windows.

The current enum pointer parameter of that function is misleading
anyway and is changed to a uint32_t pointer parameter.

Fixes: b19da32e31 ("app/testpmd: add FEC command")
Cc: stable@dpdk.org

Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-07-02 19:03:03 +02:00
Huisong Li
a690a070a4 app/testpmd: fix DCB forwarding configuration
After DCB mode is configured, the port stop and port start
operations change the value of the global variable "dcb_test". As a
result, the forwarding configuration switches from DCB to RSS mode,
namely from "dcb_fwd_config_setup()" to "rss_fwd_config_setup()".

Currently, the 'dcb_flag' field in struct 'rte_port' indicates
whether the port is configured with DCB, and it is sufficient to
have 'dcb_config' as a global variable to control the DCB test
status. So this patch deletes "dcb_test".

In addition, 'dcb_config' is now set at the end of
init_port_dcb_config(), in case ports fail to enter DCB mode.

Fixes: 900550de04 ("app/testpmd: add dcb support")
Fixes: ce8d561418 ("app/testpmd: add port configuration settings")
Fixes: 7741e4cf16 ("app/testpmd: VMDq and DCB updates")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-04-29 18:10:14 +02:00
Haifei Luo
f29fa2c59b app/testpmd: support policy actions per color
Add the create/del policy CLIs to support actions per color.
The CLIs are:
Create:  add port meter policy (port_id) (policy_id) g_actions (actions)
y_actions (actions) r_actions (actions)
Delete:  del port meter policy (port_id) (policy_id)

Examples:
testpmd> add port meter policy 0 1 g_actions rss / end y_actions end
r_actions drop / end
testpmd> del port meter policy 0 1
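
A minimal C sketch of the underlying rte_mtr call (only the red
action is filled in, mirroring "r_actions drop / end"; a green RSS
action would additionally need an rte_flow_action_rss conf):

  #include <rte_mtr.h>

  static int
  add_policy(uint16_t port_id, uint32_t policy_id)
  {
      static const struct rte_flow_action r_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_DROP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };
      struct rte_mtr_meter_policy_params params = {
          .actions = { [RTE_COLOR_RED] = r_actions },
      };
      struct rte_mtr_error error;

      return rte_mtr_meter_policy_add(port_id, policy_id, &params,
                                      &error);
  }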

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-04-21 12:22:18 +02:00
Bing Zhao
4d07cbefe3 app/testpmd: add commands for conntrack
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed:
  set conntrack com peer ...
  set conntrack orig scale ...
  set conntrack rply scale ...
This creates a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created,
it can be used in flow creation. Before updating, the same structure
is also needed, together with the update command "conntrack_update",
to update the "dir" or "ctx".

After the flow with the conntrack action is created, the packet
should jump to the next flow for the result checking with the
conntrack item. The state is defined with bits, and a valid
combination can be supported.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-04-20 01:24:57 +02:00
Bing Zhao
4b61b8774b ethdev: introduce indirect flow action
Right now, the rte_flow_shared_action_* APIs are used for some
shared actions, like RSS and count. The shared action should be
created before being used inside a flow. These shared actions
sometimes are not really shared but just indirect actions decoupled
from a flow.

The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.

There are two types of flow actions:
1. the direct (normal) actions that could be created and stored
   within a flow rule. Such action is tied to its flow rule and
   cannot be reused.
2. the indirect action, in the past named shared_action. It is
   created from a direct action, like count or rss, and then used
   in the flow rules with an object handle. The PMD takes care
   of resolving the indirect action into the direct action
   when it is referenced.

The indirect action is accessed (update / query) w/o any flow rule,
just via the action object handle. For example, when querying or
resetting a counter, it could be done out of any flow using this
counter, but only the handle of the counter action object is
required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.

The old name "shared" is improper in a sense and should be replaced.

Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to do the adaption.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.

The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
   counter, the only "update" supported should be the reset. So
   passing a rte_flow_action struct pointer is meaningless and
   there is even no such corresponding action struct. What's more,
   if more than one operations should be supported, for some other
   action, such pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
   parameter will not provide the ability to indicate which part(s)
   to update.
   For different types of indirect action objects, the pointer could
   either be the same of rte_flow_action* struct - in order not to
   break the current driver implementation, or some wrapper
   structures with bits as masks to indicate which part to be
   updated, depending on real needs of the corresponding direct
   action. For different direct actions, the structures of indirect
   action objects updating will be different.

All the underlayer PMD callbacks will be moved to these new APIs.

The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.

Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to generic pointer, the mlx5 PMD that uses these
APIs needs to do the adaption to the new APIs as well.
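
A minimal C sketch of the renamed API with an indirect counter
(error handling abbreviated):

  #include <rte_flow.h>

  static int
  use_indirect_counter(uint16_t port_id)
  {
      const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
      const struct rte_flow_action count = {
          .type = RTE_FLOW_ACTION_TYPE_COUNT,
      };
      struct rte_flow_action_handle *handle;
      struct rte_flow_query_count stats;
      struct rte_flow_error error;

      handle = rte_flow_action_handle_create(port_id, &conf, &count,
                                             &error);
      if (handle == NULL)
          return -1;
      /* Query outside of any flow, via the handle alone. */
      if (rte_flow_action_handle_query(port_id, handle, &stats, &error))
          return -1;
      return rte_flow_action_handle_destroy(port_id, handle, &error);
  }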

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 18:25:42 +02:00
Haifei Luo
bf085dcba1 app/testpmd: add command for single flow dump
Add support for single flow dump.
The CLI to dump one rule: flow dump PORT rule ID
The CLI to dump all rules: flow dump PORT all
Examples:
testpmd> flow dump 0 all
testpmd> flow dump 0 rule 0
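
A minimal C sketch of the extended ethdev call behind these CLIs:

  #include <stdio.h>
  #include <rte_flow.h>

  /* flow == NULL dumps all rules; a non-NULL flow dumps that rule. */
  static int
  dump_flows(uint16_t port_id, struct rte_flow *flow)
  {
      struct rte_flow_error error;

      return rte_flow_dev_dump(port_id, flow, stdout, &error);
  }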

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-04-14 13:19:55 +02:00
Natanael Copa
3529e8f3a5 app/testpmd: fix build with musl
1/ Improve portability by avoiding use of non-standard 'uint'.
Use uint8_t for hash_key_len as rss_key_len is a uint8_t type.
This solves following build error when building with musl libc:
    app/test-pmd/testpmd.h:813:29: error: unknown type name 'uint'

2/ In musl libc, stdout is of type (FILE * const).
Because of the const qualifier, a dark magic cast
must be achieved through uintptr_t.
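
A sketch of the shape of that cast (an illustrative helper):

  #include <stdio.h>
  #include <stdint.h>

  /* With musl, stdout may be 'FILE * const'; going through uintptr_t
   * drops the qualifier without a compiler warning. */
  static FILE *
  writable_stdout(void)
  {
      return (FILE *)(uintptr_t)stdout;
  }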

Fixes: 8205e241b2 ("app/testpmd: add missing type to RSS hash commands")
Fixes: e977e4199a ("app/testpmd: add commands to load/unload BPF filters")
Cc: stable@dpdk.org

Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
2021-03-23 08:41:05 +01:00
Ajit Khaparde
b7b78a089c app/testpmd: support forced ethernet speed
Add support for a forced Ethernet speed setting.
Currently testpmd tries to configure the Ethernet port in autoneg
mode; it is not possible to set the Ethernet port to a specific
speed while starting testpmd. In some cases the capability to
configure a forced speed for the Ethernet port during initialization
may be necessary, so this patch adds that support.

The patch assumes a full duplex setting and does not attempt to
change it, so speeds like 10M and 100M are not configurable using
this method.

The command line to configure a forced speed of 10G:
dpdk-testpmd -c 0xff  -- -i  --eth-link-speed  10000

The command line to configure a forced speed of 50G:
dpdk-testpmd -c 0xff  -- -i  --eth-link-speed  50000
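
The ethdev-level equivalent is a fixed link_speeds setting (shown
with today's RTE_ETH-prefixed names):

  #include <rte_ethdev.h>

  /* Force 10G instead of autonegotiation. */
  static void
  force_speed_10g(struct rte_eth_conf *conf)
  {
      conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
                          RTE_ETH_LINK_SPEED_10G;
  }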

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-03-08 12:44:01 +01:00
Ferruh Yigit
ecf86ccb4b app/testpmd: remove duplicated offload display
"show port cap all|<port_id>" was to display offload configuration of
port(s).

But later two other commands added to show same information in more
accurate way:
 show port (port_id) rx_offload configuration
 show port (port_id) tx_offload configuration

These new commands can both show port and queue level configuration,
also with their capabilities counterparts easier to see offload
capability and configuration of the port in similar syntax.

So the functionality is duplicated and removing this version, to favor
the new commands.

Another problem with this command is it requires each new offload to be
added into the function to display them, and there were missing offloads
that are not displayed, this requirement for sure will create gaps by
time as new offloads added.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
2021-02-24 13:28:30 +01:00
Lance Richardson
d139cf231b app/testpmd: count outer IP checksum errors
Count and display outer IP checksum errors in the checksum
forwarder.

Example forwarder stats output:
  RX-packets: 158            RX-dropped: 0             RX-total: 158
  Bad-ipcsum: 48             Bad-l4csum: 48            Bad-outer-l4csum: 6
  Bad-outer-ipcsum: 40
  TX-packets: 0              TX-dropped: 0             TX-total: 0
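
A sketch of the per-packet check feeding the new counter (current
mbuf flag name shown; the counter name is illustrative):

  #include <rte_mbuf.h>

  static void
  count_outer_ipcsum_error(const struct rte_mbuf *m,
                           uint64_t *rx_bad_outer_ip_csum)
  {
      if (m->ol_flags & RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)
          (*rx_bad_outer_ip_csum)++;
  }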

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-02-23 19:22:34 +01:00
Kathleen Capella
28bbeaa23b app/testpmd: remove unused struct member
The tx_queue member of the fwd_lcore struct is unused, as it is
already part of the fwd_stream structure. Deleting it improves code
readability.

Signed-off-by: Kathleen Capella <kathleen.capella@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-02-22 19:17:58 +01:00
Igor Russkikh
6c02043e99 app/testpmd: support sending cloned packets in flowgen
When testing high performance numbers, it is often the case that CPU
performance limits the max values the device can reach (both in pps
and in gbps).

Here, instead of recreating each packet separately, we use a clones
counter to resend the same mbuf to the line multiple times, as
sketched below.

PMDs handle that transparently thanks to reference counting inside
the mbuf.
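
A sketch of the clone transmit loop (illustrative; unsent packets
would need their refcount dropped in real code):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  static uint16_t
  send_cloned(uint16_t port, uint16_t queue, struct rte_mbuf *m,
              uint16_t clones)
  {
      uint16_t i, sent = 0;

      /* Extra references keep the mbuf alive across repeated sends. */
      rte_mbuf_refcnt_update(m, clones);
      for (i = 0; i <= clones; i++)
          sent += rte_eth_tx_burst(port, queue, &m, 1);
      return sent;
  }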

Reaching max PPS on small packet sizes helps here:
Some data from our 2 port x 50G device. Using 2*6 tx queues, 64b packets,
PowerEdge R7525, AMD EPYC 7452:

./build/app/dpdk-testpmd -l 32-63  -- --forward-mode=flowgen \
  --rxq=6 --txq=6  --disable-crc-strip --burst=512 \
  --flowgen-clones=0 --txd=4096 --stats-period=1 --txpkts=64

Gives ~46MPPS TX output:

  Tx-pps:     22926849          Tx-bps:  11738590176
  Tx-pps:     23642629          Tx-bps:  12105024112

Setting flowgen-clones to 512 pushes TX almost to our device
physical limit (68MPPS) using same 2*6 queues(cores):

  Tx-pps:     34357556          Tx-bps:  17591073696
  Tx-pps:     34353211          Tx-bps:  17588802640

Doing similar measurements per core, I see one core can do
6.9MPPS (without clones) vs 11MPPS (with clones)

Verified on Marvell qede and atlantic PMDs.

Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-01-29 18:16:12 +01:00
Steve Yang
0c4abd3688 app/testpmd: fix setting maximum packet length
"port config all max-pkt-len" command fails because it doesn't set the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag properly.

Commit in the fixes line moved the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload
flag update from 'cmd_config_max_pkt_len_parsed()' to 'init_config()'.
'init_config()' function is only called during testpmd startup, but the
flag status needs to be calculated whenever 'max_rx_pkt_len' changes.

The issue can be reproduced as in [1], where 'max-pkt-len' is
reduced and the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag should be
cleared, but it isn't.

Add an 'update_jumbo_frame_offload()' helper function to update the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag and 'max_rx_pkt_len'. This
function is called both by 'init_config()' and
'cmd_config_max_pkt_len_parsed()'.

The default 'max-pkt-len' value is set to zero;
'update_jumbo_frame_offload()' updates it to "RTE_ETHER_MTU + PMD
specific Ethernet overhead" when it is zero.
If the '--max-pkt-len=N' argument is provided, it is used instead.
And with each "port config all max-pkt-len" command, the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag, 'max-pkt-len' and MTU are
updated.

[1]
--------------------------------------------------------------------------
dpdk-testpmd -c 0xf -n 4 -- -i --max-pkt-len=9000 --tx-offloads=0x8000
	--rxq=4 --txq=4 --disable-rss
testpmd>  set verbose 3
testpmd>  port stop all
testpmd>  port config all max-pkt-len 1518
testpmd>  port start all

// Got fail error info without this patch
Configuring Port 0 (socket 1)
Ethdev port_id=0 rx_queue_id=0, new added offloads 0x800 must be
within per-queue offload capabilities 0x0 in rte_eth_rx_queue_setup()
Fail to configure port 0 rx queues //<-- Fail error info;
--------------------------------------------------------------------------

Bugzilla ID: 625
Fixes: 761c4d6690 ("app/testpmd: fix max Rx packet length for VLAN packets")
Cc: stable@dpdk.org

Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
2021-01-29 18:16:12 +01:00
Huisong Li
08dcd18706 app/testpmd: fix queue stats mapping configuration
Currently, the queue stats mapping has the following problems:
1) Many PMDs don't support queue stats mapping, but there is no
   failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once a queue mapping is set, unrelated and unmapped queues are
   also displayed.
3) The configuration result does not take effect or cannot be
   queried in real time.
4) The mapping arrays, "tx_queue_stats_mappings_array" &
   "rx_queue_stats_mappings_array", are global and their sizes are
   based on fixed max port and queue size assumptions.
5) These record structures, 'map_port_queue_stats_mapping_registers()'
   and its sub-functions are redundant for the majority of drivers.
6) The display of the queue stats and the queue stats mapping is
   mixed together.

Since xstats is used to obtain queue statistics, we have made the
following simplifications and adjustments (see the sketch after this
list):
1) If the PMD requires and supports queue stats mapping, configure
   the driver in real time by calling the ethdev API after executing
   the command "set stat_qmap rx/tx ...". If not, the command is
   rejected.
2) Based on the above adjustment, these record structures,
   'map_port_queue_stats_mapping_registers()' and its sub-functions
   can be removed. The "tx-queue-stats-mapping" &
   "rx-queue-stats-mapping" parameters and
   'parse_queue_stats_mapping_config()' can be removed too.
3) Remove the display of the queue stats mapping in
   'fwd_stats_display()' & 'nic_stats_display()', and obtain queue
   stats via xstats. Since the record structures are removed,
   'nic_stats_mapping_display()' can be deleted.
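
A sketch of the direct ethdev calls now behind "set stat_qmap":

  #include <rte_ethdev.h>

  /* Unsupporting drivers return -ENOTSUP, so the CLI command can be
   * rejected with a failure message. */
  static int
  map_queue_stats(uint16_t port_id, uint16_t qid, uint8_t stat_idx,
                  int is_rx)
  {
      if (is_rx)
          return rte_eth_dev_set_rx_queue_stats_mapping(port_id, qid,
                                                        stat_idx);
      return rte_eth_dev_set_tx_queue_stats_mapping(port_id, qid,
                                                    stat_idx);
  }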

Fixes: 4dccdc789b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b6 ("app/testpmd: various updates")
Fixes: ed30d9b691 ("app/testpmd: add stats per queue")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-01-08 16:03:04 +01:00
Matan Azrad
de956d5ecf app/testpmd: support age shared action context
When an age action becomes aged-out, the next call to the
rte_flow_get_aged_flows API should return the action context
supplied by the action configuration structure.

In case the age action is created by the shared action API, the
shared action context of the testpmd application was not set.

In addition, the application handler of the contexts returned by the
rte_flow_get_aged_flows API didn't consider the fact that the action
could be set by the shared action API, and treated it as a regular
flow context.

This caused a crash in testpmd when the context was parsed.

This patch sets a context type in the flow and shared action
contexts and uses it to parse the aged-out contexts correctly.
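
For reference, a minimal sketch of the retrieval call whose contexts
are being parsed:

  #include <rte_flow.h>

  /* A NULL array first returns the number of aged-out contexts. */
  static int
  drain_aged_contexts(uint16_t port_id, void **ctxs, uint32_t cap)
  {
      struct rte_flow_error error;
      int n = rte_flow_get_aged_flows(port_id, NULL, 0, &error);

      if (n <= 0 || (uint32_t)n > cap)
          return n;
      return rte_flow_get_aged_flows(port_id, ctxs, n, &error);
  }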

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-13 19:43:26 +01:00
Andrew Rybchenko
1be514fbce ethdev: remove legacy FDIR filter type support
Instead of FDIR filters, the RTE flow API should be used.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:35:05 +01:00
Bruce Richardson
a8d0d473a0 build: replace use of old build macros
Use the newer macros defined by meson in all DPDK source code, to ensure
there are no errors when the old non-standard macros are removed.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-19 22:15:44 +02:00
Viacheslav Ovsiienko
2befc67ff6 app/testpmd: add extended Rx queue setup
If an Rx queue is configured with the split feature, the extended
setup with the specified segment sizes and memory pool is performed.
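
A hedged sketch of the ethdev-level setup this maps to (the variant
merged in ethdev uses rx_seg/rx_nseg in rte_eth_rxconf; sizes are
arbitrary):

  #include <rte_ethdev.h>

  /* Split: first 64 bytes into pool0, the remainder into pool1. */
  static int
  setup_split_rxq(uint16_t port_id, uint16_t qid, unsigned int socket,
                  struct rte_mempool *pool0, struct rte_mempool *pool1)
  {
      union rte_eth_rxseg segs[2] = {
          { .split = { .mp = pool0, .length = 64 } },
          { .split = { .mp = pool1, .length = 0 /* rest */ } },
      };
      struct rte_eth_rxconf rxconf = {
          .offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
          .rx_seg = segs,
          .rx_nseg = 2,
      };

      return rte_eth_rx_queue_setup(port_id, qid, 512, socket,
                                    &rxconf, NULL /* pools in segs */);
  }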

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:41 +02:00
Viacheslav Ovsiienko
91c78e090e app/testpmd: add rxoffs commands and parameters
Add command line parameter:

--rxoffs=X[,Y]

Sets the offsets of packet segments from the beginning of the
receive buffer if the split feature is engaged. Affects only the
queues configured with split offloads (currently only BUFFER_SPLIT
is supported).

Add interactive mode command, providing the same:

testpmd> set rxoffs (x[,y]*)

Where x[,y]* represents a CSV list of values, without white space.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:40 +02:00
Viacheslav Ovsiienko
0f2096d7ab app/testpmd: add rxpkts commands and parameters
Add command line parameter:

--rxpkts=X[,Y]

Sets the length of the segments to scatter packets on receiving if
the split feature is engaged. Affects only the queues configured
with split offloads (currently only BUFFER_SPLIT is supported).

Add interactive mode command:

testpmd> set rxpkts (x[,y]*)

Where x[,y]* represents a CSV list of values, without white space.

Optionally, multiple memory pools can be specified with the
--mbuf-size command line parameter, and the mbufs to receive are
allocated sequentially from these extra memory pools.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:40 +02:00
Viacheslav Ovsiienko
26cbb4191e app/testpmd: add multiple pools per core creation
The command line parameter --mbuf-size is updated so that it can
handle multiple values like the following:

--mbuf-size=2176,512,768,4096

specifying the creation of extra memory pools with the requested
mbuf data buffer sizes. If a buffer split feature is engaged, the
extra memory pools can be used to configure the Rx queues with
rte_the_dev_rx_queue_setup_ex().

The extra pools are created with the requested sizes, and the pool
names are assigned with an appended index:
mbuf_pool_socket_%socket_%index. Index zero is used to specify the
first mandatory pool, to maintain compatibility with existing code.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:40 +02:00
Gregory Etelson
1b9f274623 app/testpmd: add commands for tunnel offload
The tunnel offload API provides a hardware-independent, unified
model to offload tunneled traffic. Key model elements are:
 - apply matches to both outer and inner packet headers
   during the entire offload procedure;
 - restore the outer header of a partially offloaded packet;
 - the model is implemented as a set of helper functions.

Implementation details:

* Create application tunnel:
flow tunnel create <port> type <tunnel type>
On success, the command creates application tunnel object and returns
the tunnel descriptor. Tunnel descriptor is used in subsequent flow
creation commands to reference the tunnel.

* Create tunnel steering flow rule:
tunnel_set <tunnel descriptor> parameter used with steering rule
template.

* Create tunnel matching flow rule:
tunnel_match <tunnel descriptor> used with matching rule template.

* If tunnel steering rule was offloaded, outer header of a partially
offloaded packet is restored after miss.

Example:
test packet=
<Ether  dst=24:8a:07:8d:ae:d6 src=50:6b:4b:cc:fc:e2 type=IPv4 |
<IP  version=4 ihl=5 proto=udp src=1.1.1.1 dst=1.1.1.10 |
<UDP  sport=4789 dport=4789 len=58 chksum=0x7f7b |
<VXLAN  NextProtocol=Ethernet vni=0x0 |
<Ether  dst=24:aa:aa:aa:aa:d6 src=50:bb:bb:bb:bb:e2 type=IPv4 |
<IP  version=4 ihl=5 proto=icmp src=2.2.2.2 dst=2.2.2.200 |
<ICMP  type=echo-request code=0 chksum=0xf7ff id=0x0 seq=0x0 |>>>>>>>
>>> len(packet)
92

testpmd> flow flush 0
testpmd> port 0/queue 0: received 1 packets
src=50:6B:4B:CC:FC:E2 - dst=24:8A:07:8D:AE:D6 - type=0x0800 -
length=92

testpmd> flow tunnel 0 type vxlan
port 0: flow tunnel #1 type vxlan
testpmd> flow create 0 ingress group 0 tunnel_set 1
         pattern eth / ipv4 / udp dst is 4789 / vxlan / end
         actions  jump group 0 / end
Flow rule #0 created
testpmd> port 0/queue 0: received 1 packets
tunnel restore info: - vxlan tunnel - outer header present # <--
  src=50:6B:4B:CC:FC:E2 - dst=24:8A:07:8D:AE:D6 - type=0x0800 -
length=92

testpmd> flow create 0 ingress group 0 tunnel_match 1
         pattern eth / ipv4 / udp dst is 4789 / vxlan / eth / ipv4 /
         end
         actions set_mac_dst mac_addr 02:CA:FE:CA:FA:80 /
         queue index 0 / end
Flow rule #1 created
testpmd> port 0/queue 0: received 1 packets
  src=50:BB:BB:BB:BB:E2 - dst=02:CA:FE:CA:FA:80 - type=0x0800 -
length=42

* Destroy flow tunnel
flow tunnel destroy <port> id <tunnel id>

* Show existing flow tunnels
flow tunnel list <port>
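
A minimal C sketch of the helper-function model (VXLAN, matching the
example above; error handling omitted):

  #include <rte_flow.h>

  /* Ask the PMD for the tunnel-specific actions to prepend to the
   * steering rule (the CLI's "tunnel_set"). */
  static int
  get_steering_actions(uint16_t port_id,
                       struct rte_flow_action **actions,
                       uint32_t *n_actions)
  {
      struct rte_flow_tunnel tunnel = {
          .type = RTE_FLOW_ITEM_TYPE_VXLAN,
      };
      struct rte_flow_error error;

      return rte_flow_tunnel_decap_set(port_id, &tunnel, actions,
                                       n_actions, &error);
  }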

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
2020-10-16 19:48:19 +02:00