Starting or stopping a bonded port also starts or stops all active slaves
under the bonded port. If the port being started or stopped is a bonded
device, the port status of all its slaves needs to be updated as well.
Fixes: 0e545d3047 ("app/testpmd: check stopping port is not in bonding")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
flow queue 0 indirect_action 0 create action_id 9
ingress postpone yes action rss / end
flow queue 0 indirect_action 0 update action_id 9
action queue index 0 / end
flow queue 0 indirect_action 0 destroy action_id 9
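For reference, a minimal C sketch (not part of this patch) of enqueueing the
equivalent indirect action creation through the API; the op attribute, the
empty RSS configuration and the queue number 0 are illustrative placeholders:
    #include <rte_flow.h>

    /* Sketch: enqueue creation of an indirect RSS action on flow queue 0. */
    static struct rte_flow_action_handle *
    enqueue_indirect_rss(uint16_t port_id, struct rte_flow_error *err)
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 1 };
        const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        const struct rte_flow_action_rss rss = { 0 }; /* placeholder config */
        const struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_RSS,
            .conf = &rss,
        };

        return rte_flow_async_action_handle_create(port_id, 0, &op_attr,
                                                   &conf, &action,
                                                   NULL /* user_data */, err);
    }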
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling the results of enqueued operations.
Usage example: flow pull 0 queue 0
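A minimal C sketch (illustrative, not part of this patch) of draining
completed operations with rte_flow_pull; the batch size is arbitrary:
    #include <rte_common.h>
    #include <rte_flow.h>

    /* Sketch: poll flow queue 0 of a port for completed operations. */
    static int
    drain_flow_queue(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_op_result results[32]; /* arbitrary batch size */
        int n = rte_flow_pull(port_id, 0 /* queue_id */, results,
                              RTE_DIM(results), err);

        for (int i = 0; i < n; i++)
            if (results[i].status != RTE_FLOW_OP_SUCCESS)
                return -1; /* user_data identifies the failed operation */
        return n;
    }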
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow push 0 queue 0
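A minimal sketch (illustrative) of the corresponding API call, flushing all
postponed operations on a flow queue:
    #include <rte_flow.h>

    /* Sketch: push the operations postponed on flow queue 0 to the HW. */
    static int
    flush_flow_queue(uint16_t port_id, struct rte_flow_error *err)
    {
        return rte_flow_push(port_id, 0 /* queue_id */, err);
    }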
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
testpmd> flow queue 0 create 0 postpone no
template_table 6 pattern_template 0 actions_template 0
pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
testpmd> flow queue 0 destroy 0 postpone yes rule 0
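For reference, a minimal C sketch of enqueueing a rule creation (illustrative;
shown with the rte_flow_async_create name used by the final API, and with the
table, pattern and actions arrays assumed to be prepared elsewhere):
    #include <rte_flow.h>

    /* Sketch: enqueue a rule into a pre-created template table on queue 0. */
    static struct rte_flow *
    enqueue_rule(uint16_t port_id, struct rte_flow_template_table *table,
                 const struct rte_flow_item pattern[],
                 const struct rte_flow_action actions[],
                 struct rte_flow_error *err)
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };

        return rte_flow_async_create(port_id, 0 /* queue_id */, &op_attr,
                                     table, pattern, 0 /* pattern template */,
                                     actions, 0 /* actions template */,
                                     NULL /* user_data */, err);
    }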
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
testpmd> flow template_table 0 create table_id 6
group 9 priority 4 ingress mode 1
rules_number 64 pattern_template 2 actions_template 4
testpmd> flow template_table 0 destroy table 6
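A C sketch of the underlying call (illustrative; shown with the
rte_flow_template_table_create name, attribute values mirroring the CLI
example above, and template objects assumed to be created beforehand):
    #include <rte_flow.h>

    /* Sketch: create a template table for 64 ingress rules in group 9. */
    static struct rte_flow_template_table *
    create_table(uint16_t port_id,
                 struct rte_flow_pattern_template *pt[], uint8_t nb_pt,
                 struct rte_flow_actions_template *at[], uint8_t nb_at,
                 struct rte_flow_error *err)
    {
        const struct rte_flow_template_table_attr attr = {
            .flow_attr = { .group = 9, .priority = 4, .ingress = 1 },
            .nb_flows = 64,
        };

        return rte_flow_template_table_create(port_id, &attr, pt, nb_pt,
                                              at, nb_at, err);
    }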
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
testpmd> flow pattern_template 0 create pattern_template_id 2
template eth dst is 00:16:3e:31:15:c3 / end
testpmd> flow actions_template 0 create actions_template_id 4
template drop / end mask drop / end
testpmd> flow actions_template 0 destroy actions_template 4
testpmd> flow pattern_template 0 destroy pattern_template 2
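A C sketch of the two creation calls (illustrative; the item/action arrays
mirror the CLI example and the template attributes are left at defaults for
brevity):
    #include <rte_flow.h>

    /* Sketch: create one pattern template and one actions template. */
    static int
    create_templates(uint16_t port_id, struct rte_flow_error *err)
    {
        const struct rte_flow_pattern_template_attr pt_attr = { 0 };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_actions_template_attr at_attr = { 0 };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_pattern_template *pt;
        struct rte_flow_actions_template *at;

        pt = rte_flow_pattern_template_create(port_id, &pt_attr, pattern, err);
        at = rte_flow_actions_template_create(port_id, &at_attr, actions,
                                              actions /* masks */, err);
        return (pt != NULL && at != NULL) ? 0 : -1;
    }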
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256
Implement the rte_flow_info_get API to get available resources:
Usage example: flow info 0
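A C sketch of the query-then-configure sequence (illustrative; the queue count
and size mirror the example, and the signatures follow the rte_flow API of
recent DPDK releases):
    #include <rte_flow.h>

    /* Sketch: query flow engine limits, then pre-allocate 8 flow queues
     * of 256 operations each. */
    static int
    configure_flow_engine(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_port_info port_info;
        struct rte_flow_queue_info queue_info;
        const struct rte_flow_port_attr port_attr = { 0 };
        const struct rte_flow_queue_attr queue_attr = { .size = 256 };
        const struct rte_flow_queue_attr *attr_list[8];

        if (rte_flow_info_get(port_id, &port_info, &queue_info, err) != 0)
            return -1;
        for (unsigned int i = 0; i < 8; i++)
            attr_list[i] = &queue_attr;
        return rte_flow_configure(port_id, &port_attr, 8, attr_list, err);
    }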
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The first "set txtimes" command parameter specifies the time
interval between scheduled send bursts for single queue. This
interval should be the same for all the forwarding ports.
It requires to maintain the timing related variables on per
queue basis instead of per core, as currently implemented.
This resulted in wrong burst intervals if two or more cores
were generating the scheduled traffic for two or more ports
in txonly mode.
This patch moves the timing variable to the fstream structure.
Only txonly forwarding mode with enabled send scheduling is
affected.
Fixes: 4940344dab ("app/testpmd: add Tx scheduling command")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The current approach detects the proxy port on each port (re-)plug and
may spam the log with error messages if the PMD does not support flows.
As testpmd is a debug tool, it must not do such implicit port handling.
Instead, the new API should be called only when the user requests that.
Remove the existing code. Implement an explicit command-line primitive
to let the user find the proxy port themselves. Provide relevant hints.
Fixes: 1179f05cc9 ("ethdev: query proxy port to manage transfer flows")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GRO and GSO integration in testpmd is relatively self-contained and easy
to extract.
Those libraries can be made optional as they provide standalone
features.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The backward-compatibility macros can be removed in the next LTS.
Some struct names are also updated to have the 'rte_eth' prefix.
All internal components are switched to using the new names.
Syntax is fixed on the lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
The Rx/Tx functions (rte_eth_rx_burst/rte_eth_tx_burst) take an 'nb_pkts'
argument, which specifies the maximum number of packets to receive/transmit.
The actual burst size can be anywhere in 0..nb_pkts, i.e. nb_pkts+1
possible values.
Testpmd can provide statistics of the burst sizes ('set
record-burst-stats on') by incrementing the array cell of index
<burst-size>. This array is mistakenly sized [MAX_PKT_BURST]. Receiving
a maximum-size burst therefore causes an out-of-bounds write.
Enlarge the spread stats array by one cell to fix it.
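An illustrative sketch of the sizing change (the structure and field names
below mirror testpmd but are reproduced here as assumptions, not verbatim
code):
    #define MAX_PKT_BURST 512

    /* Valid burst sizes are 0..MAX_PKT_BURST, so the spread array needs
     * MAX_PKT_BURST + 1 cells; sizing it [MAX_PKT_BURST] made a full burst
     * write one cell out of bounds. */
    struct pkt_burst_stats {
        unsigned int pkt_burst_spread[MAX_PKT_BURST + 1];
    };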
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
To support shared Rx queues, this patch introduces a dedicated forwarding
engine. The engine groups received packets by mbuf->port into sub-groups,
updates the stream statistics and simply frees the packets.
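A simplified sketch (illustrative only, not the actual engine code; the
per-port counter array is assumed to be indexed by port ID) of what the
engine does per burst:
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define PKT_BURST 32 /* illustrative burst size */

    /* Sketch: receive a burst from a shared queue, attribute each packet
     * to its source port via mbuf->port, then free it. */
    static void
    shared_rxq_fwd(uint16_t rx_port, uint16_t rx_queue, uint64_t *rx_per_port)
    {
        struct rte_mbuf *pkts[PKT_BURST];
        uint16_t nb_rx = rte_eth_rx_burst(rx_port, rx_queue, pkts, PKT_BURST);

        for (uint16_t i = 0; i < nb_rx; i++) {
            rx_per_port[pkts[i]->port]++; /* per-source-port statistics */
            rte_pktmbuf_free(pkts[i]);
        }
    }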
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
A shared Rx queue must be polled from the same core. This patch checks for
this and stops forwarding if a shared RxQ is scheduled on multiple
cores.
It is suggested to use the same number of Rx queues and polling cores.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Adds "--rxq-share=X" parameter to enable shared RxQ.
Rx queue is shared if device supports, otherwise fallback to standard
RxQ.
Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
implies all ports join share group 1. Queue ID is mapped equally with
shared Rx queue ID.
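Usage example (illustrative invocation; device options omitted), grouping
shared Rx queues per two ports:
    dpdk-testpmd ... -- -i --rxq-share=2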
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Network port hardware is shipped with a fixed number of
supported network protocols. If an application must work with a
protocol that is not included in the port hardware by default, it
can try to add the new protocol to the port hardware.
A flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and to
offload flows that match the header elements.
An application must complete the following tasks to create a flow
rule that matches a custom header:
1. Create a flex item object in port hardware.
The application must provide the custom header configuration to the
PMD. The PMD will use that configuration to create the flex item
object in port hardware.
2. Create flex patterns to match. A flex pattern has spec and mask
components, like a regular flow item. Combined together, spec and mask
can target a unique data sequence or a number of data sequences in the
custom header.
Flex patterns of the same flex item can have different lengths.
A flex pattern is identified by a unique handle value.
3. Create a flow rule with a flex flow item that references
a flex pattern.
Testpmd flex CLI commands are:
testpmd> flow flex_item create <port> <flex_id> <filename>
testpmd> set flex_pattern <pattern_id> \
spec <spec data> mask <mask data>
testpmd> set flex_pattern <pattern_id> is <spec_data>
testpmd> flow create <port> ... \
/ flex item is <flex_id> pattern is <pattern_id> / ...
The patch works with the jansson library API.
A new optional dependency on the jansson library is added for
testpmd. If jansson is not detected, the flex item functionality
is disabled.
The jansson development files must be present:
jansson.pc, jansson.h, libjansson.[a,so]
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Testpmd flow creation is constructed from these procedures:
1. receive a string with the flow rule description;
2. parse the input string and build the flow parameters: port_id value,
flow attributes, items array, actions array;
3. create a flow rule from the flow rule parameters.
The flow rule creation procedures are built as a pipeline: a new
procedure starts immediately after its predecessor completes successfully.
Because of this, there are no dedicated routines providing the
intermediate results of steps 1-3 above.
The patch adds the `flow_parse()` function. It parses the input string
and provides the caller with the parsed data. This is a preparation step
for introducing flex item command processing.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, an application can deduce
it by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable
jumbo frames, the driver can deduce it by comparing the requested 'mtu'
to 'RTE_ETHER_MTU'.
This additional configuration is removed for simplification.
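A minimal sketch of the driver-side deduction described above (illustrative;
actual PMD code differs):
    #include <stdbool.h>
    #include <rte_ether.h>

    /* Sketch: a PMD can infer jumbo-frame handling from the requested MTU
     * instead of a dedicated DEV_RX_OFFLOAD_JUMBO_FRAME flag. */
    static bool
    wants_jumbo(uint16_t mtu)
    {
        return mtu > RTE_ETHER_MTU;
    }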
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way: they store
the configured values in different variables, which makes it hard to
figure out which one to use, and having two different methods for
related functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
frame, while 'max_rx_pkt_len' is the size of the whole Ethernet frame.
The difference is the Ethernet frame overhead, and this overhead may
differ from device to device based on what the device supports, like
VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
frames, which adds additional confusion, and some APIs and PMDs
already disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is renamed to 'mtu', and it is always valid
independently of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function, and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point both the application and the PMD use the MTU from this
variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
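A minimal sketch of the resulting application-side usage (illustrative; queue
setup and error handling trimmed):
    #include <rte_ethdev.h>

    /* Sketch: both configuration paths now end up in
     * (struct rte_eth_dev)->data->mtu. */
    static int
    configure_port_mtu(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mtu = mtu }, /* replaces max_rx_pkt_len */
        };

        if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
            return -1;
        /* The MTU can also be changed later at run time. */
        return rte_eth_dev_set_mtu(port_id, mtu);
    }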
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The driver may change the RSS hash offloads in dev->data->dev_conf
during dev_configure, which may cause port->dev_conf and port->rx_conf
to contain outdated values.
Since testpmd uses its own configuration structures to display the
offload configuration, it doesn't display the RSS hash offload.
This patch updates the testpmd offloads from the device configuration
to fix this issue.
Fixes: ce8d561418 ("app/testpmd: add port configuration settings")
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a 'display-xstats' option to be used together with the Rx/Tx statistics
(i.e. the 'stats-period' option or the 'show port stats' interactive command)
to display a specified list of extended statistics.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Not all DPDK ports in a given switching domain may have the
privilege to manage "transfer" flows. Add an API to find a
port with sufficient privileges starting from any port in the domain.
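A minimal C sketch of using the new API (illustrative):
    #include <rte_flow.h>

    /* Sketch: find the port privileged to manage "transfer" flows in the
     * switching domain that the given port belongs to. */
    static int
    get_transfer_proxy(uint16_t port_id, uint16_t *proxy_id)
    {
        struct rte_flow_error error;

        return rte_flow_pick_transfer_proxy(port_id, proxy_id, &error);
    }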
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
For each forwarding engine, there may be special conditions that
must be met before forwarding can run.
Adding checks for these conditions at configuration time is not suitable,
because one condition may rely on multiple configuration items, and the
conditions required by each forwarding engine are not general.
The best solution is for each forwarding engine to have a callback that
checks whether these conditions are met; testpmd can then call the
callback to determine whether forwarding can be started.
There was a void callback 'port_fwd_begin' in the forwarding engine
that did some initialization for forwarding. This patch updates its
return value so that checks can be added to it to confirm whether
forwarding can be started. In addition, this patch calls the
callback before the forwarding stats are reset and the forwarding
engine is launched.
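An illustrative sketch of the callback change (type names mirror testpmd but
are reproduced here as assumptions):
    #include <stdint.h>

    typedef uint16_t portid_t; /* as defined in testpmd.h */

    /* Sketch: the begin callback now returns a status instead of void; a
     * non-zero return aborts forwarding before the stats are reset and the
     * engine is launched. */
    typedef int (*port_fwd_begin_t)(portid_t pi);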
Bugzilla ID: 797
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
This patch adds multi-process support for testpmd.
For example, the following commands run two testpmd
processes:
* the primary process:
./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=0
* the secondary process:
./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=1
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Make the number of flows in flowgen configurable by setting the
parameter --flowgen-flows=N.
Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Running with stdout suppressed or redirected for further processing
is very confusing in the case of errors. Fix it by logging errors and
warnings to stderr.
Since lines with log messages are touched anyway, concatenate split
format strings to make them easier to search for using grep.
Fix the indentation of format string arguments.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
- Make printf format OS independent
- Replace htons with RTE_BE16
- Replace POSIX specific inet_aton with OS independent inet_pton
- Replace sleep with rte_delay_us_sleep
- Replace random with rte_rand
- #ifndef mman related code for now
- Fix header inclusion
- Include rte_os_shim.h in testpmd.h
- Remove redundant headers
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Passing a uint32_t pointer to an enum pointer parameter causes a
pointer-sign warning on Windows (conversion between pointers to
integer types with different signedness), since an enum is implicitly
converted to int on Windows.
Moreover, the current enum pointer parameter of that function is
misleading and should be changed to a uint32_t pointer parameter.
Fixes: b19da32e31 ("app/testpmd: add FEC command")
Cc: stable@dpdk.org
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
After DCB mode is configured, the port stop and port start operations
change the value of the global variable "dcb_test". As a result, the
forwarding configuration switches from DCB to RSS mode, namely from
"dcb_fwd_config_setup()" to "rss_fwd_config_setup()".
Currently, the 'dcb_flag' field in struct 'rte_port' indicates whether
the port is configured with DCB, and it is sufficient to have
'dcb_config' as a global variable to control the DCB test status. So
this patch deletes "dcb_test".
In addition, 'dcb_config' is now set at the end of init_port_dcb_config()
in case ports fail to enter DCB mode.
Fixes: 900550de04 ("app/testpmd: add dcb support")
Fixes: ce8d561418 ("app/testpmd: add port configuration settings")
Fixes: 7741e4cf16 ("app/testpmd: VMDq and DCB updates")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Add the create/del policy CLIs to support actions per color.
The CLIs are:
Create: add port meter policy (port_id) (policy_id) g_actions (actions)
y_actions (actions) r_actions (actions)
Delete: del port meter policy (port_id) (policy_id)
Examples:
testpmd> add port meter policy 0 1 g_actions rss / end y_actions end
r_actions drop / end
testpmd> del port meter policy 0 1
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed.
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
These create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created, it
can be used in flow creation. Before updating, the same
structure is also needed, together with the update command
"conntrack_update", to update the "dir" or "ctx".
After the flow with the conntrack action is created, the packet should
jump to the next flow for result checking with the conntrack item. The
state is defined with bits, and a valid combination can be
supported.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Right now, the rte_flow_shared_action_* APIs are used for some shared
actions, like RSS or count. The shared action should be created before
being used inside a flow. These shared actions are sometimes not
really shared but just indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that can be created and stored
within a flow rule. Such an action is tied to its flow rule and
cannot be reused.
2. the indirect action, named shared_action in the past. It is
created from a direct action, like count or rss, and then used
in the flow rules with an object handle. The PMD will take care
of retrieving the direct action from the indirect action
when it is referenced.
The indirect action is accessed (updated / queried) without any flow
rule, just via the action object handle. For example, a counter can be
queried or reset outside of any flow using it; only the handle of the
counter action object is required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated accordingly.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to give a
correct explanation.
The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
counter, the only "update" supported should be the reset. So
passing a rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operations should be supported, for some other
action, such pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
parameter will not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same rte_flow_action* struct - in order not to
break the current driver implementation - or some wrapper
structure with bits as masks to indicate which parts are to be
updated, depending on the real needs of the corresponding direct
action. For different direct actions, the structures for updating
the indirect action objects will be different.
All the underlying PMD callbacks will be moved to these new APIs.
RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed to use
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to a generic pointer, the mlx5 PMD that uses these
APIs needs to adapt to the new APIs as well.
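For illustration, a minimal sketch of the renamed API from an application's
point of view (the counter action and the NULL update pointer are
placeholders only):
    #include <rte_flow.h>

    /* Sketch: create, update and destroy an indirect (formerly "shared")
     * action via its opaque handle, without referencing any flow rule. */
    static int
    indirect_action_lifecycle(uint16_t port_id, struct rte_flow_error *err)
    {
        const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        const struct rte_flow_action_count count = { 0 };
        const struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_COUNT,
            .conf = &count,
        };
        struct rte_flow_action_handle *handle;

        handle = rte_flow_action_handle_create(port_id, &conf, &action, err);
        if (handle == NULL)
            return -1;
        /* The "update" argument is a generic pointer whose layout is defined
         * per direct action type; NULL here is only a placeholder. */
        (void)rte_flow_action_handle_update(port_id, handle, NULL, err);
        return rte_flow_action_handle_destroy(port_id, handle, err);
    }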
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add support for single flow dump.
The CLI to dump one rule: flow dump PORT rule ID
and to dump all rules: flow dump PORT all
Examples:
testpmd> flow dump 0 all
testpmd> flow dump 0 rule 0
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
1/ Improve portability by avoiding use of the non-standard 'uint' type.
Use uint8_t for hash_key_len, as rss_key_len is of type uint8_t.
This solves the following build error when building with musl libc:
app/test-pmd/testpmd.h:813:29: error: unknown type name 'uint'
2/ In musl libc, stdout is of type (FILE * const).
Because of the const qualifier, a dark magic cast
through uintptr_t is needed.
Fixes: 8205e241b2 ("app/testpmd: add missing type to RSS hash commands")
Fixes: e977e4199a ("app/testpmd: add commands to load/unload BPF filters")
Cc: stable@dpdk.org
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Add support for forced Ethernet speed setting.
Currently testpmd tries to configure the Ethernet port in autoneg mode.
It is not possible to set the Ethernet port to a specific speed when
starting testpmd. In some cases the capability to configure a forced
speed for the Ethernet port during initialization may be necessary.
This patch adds this support.
The patch assumes a full-duplex setting and does not attempt to change
that, so speeds like 10M and 100M are not configurable using this method.
The command line to configure a forced speed of 10G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 10000
The command line to configure a forced speed of 50G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 50000
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
"show port cap all|<port_id>" was to display offload configuration of
port(s).
But later two other commands added to show same information in more
accurate way:
show port (port_id) rx_offload configuration
show port (port_id) tx_offload configuration
These new commands can both show port and queue level configuration,
also with their capabilities counterparts easier to see offload
capability and configuration of the port in similar syntax.
So the functionality is duplicated and removing this version, to favor
the new commands.
Another problem with this command is it requires each new offload to be
added into the function to display them, and there were missing offloads
that are not displayed, this requirement for sure will create gaps by
time as new offloads added.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
The tx_queue member of the fwd_lcore struct is unused, as it is already
part of the fwd_stream structure. Deleting it helps improve code
readability.
Signed-off-by: Kathleen Capella <kathleen.capella@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
When testing high performance numbers, it is often the CPU performance
that limits the max values a device can reach (both in pps and in Gbps).
Here, instead of recreating each packet separately, we use a clones counter
to resend the same mbuf to the line multiple times.
PMDs handle that transparently due to reference counting inside the mbuf.
This helps reach max PPS on small packet sizes.
Some data from our 2 port x 50G device, using 2*6 Tx queues, 64B packets,
PowerEdge R7525, AMD EPYC 7452:
./build/app/dpdk-testpmd -l 32-63 -- --forward-mode=flowgen \
--rxq=6 --txq=6 --disable-crc-strip --burst=512 \
--flowgen-clones=0 --txd=4096 --stats-period=1 --txpkts=64
Gives ~46MPPS TX output:
Tx-pps: 22926849 Tx-bps: 11738590176
Tx-pps: 23642629 Tx-bps: 12105024112
Setting flowgen-clones to 512 pushes TX almost to our device's
physical limit (68MPPS) using the same 2*6 queues (cores):
Tx-pps: 34357556 Tx-bps: 17591073696
Tx-pps: 34353211 Tx-bps: 17588802640
Doing similar measurements per core, I see one core can do
6.9MPPS (without clones) vs 11MPPS (with clones).
Verified on Marvell qede and atlantic PMDs.
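A simplified sketch (illustrative only, not the flowgen code itself) of the
cloning technique: bump the mbuf reference count and queue the same mbuf
several times, letting the PMD free it after the last transmission:
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Sketch: transmit one prepared packet 'clones' extra times without
     * rebuilding it; unsent packets would still need freeing (omitted). */
    static uint16_t
    tx_with_clones(uint16_t port, uint16_t queue, struct rte_mbuf *pkt,
                   uint16_t clones)
    {
        unsigned int total = (unsigned int)clones + 1;
        struct rte_mbuf *burst[total]; /* VLA for brevity */

        rte_mbuf_refcnt_update(pkt, clones);
        for (unsigned int i = 0; i < total; i++)
            burst[i] = pkt;
        return rte_eth_tx_burst(port, queue, burst, total);
    }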
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
"port config all max-pkt-len" command fails because it doesn't set the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag properly.
Commit in the fixes line moved the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload
flag update from 'cmd_config_max_pkt_len_parsed()' to 'init_config()'.
'init_config()' function is only called during testpmd startup, but the
flag status needs to be calculated whenever 'max_rx_pkt_len' changes.
The issue can be reproduced as [1], where the 'max-pkt-len' reduced and
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag should be cleared but it
didn't.
Adding the 'update_jumbo_frame_offload()' helper function to update
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag and 'max_rx_pkt_len'. This
function is called both by 'init_config()' and
'cmd_config_max_pkt_len_parsed()'.
Default 'max-pkt-len' value set to zero, 'update_jumbo_frame_offload()'
updates it to "RTE_ETHER_MTU + PMD specific Ethernet overhead" when it
is zero.
If '--max-pkt-len=N' argument provided, it will be used instead.
And with each "port config all max-pkt-len" command, the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag, 'max-pkt-len' and MTU is
updated.
[1]
--------------------------------------------------------------------------
dpdk-testpmd -c 0xf -n 4 -- -i --max-pkt-len=9000 --tx-offloads=0x8000
--rxq=4 --txq=4 --disable-rss
testpmd> set verbose 3
testpmd> port stop all
testpmd> port config all max-pkt-len 1518
testpmd> port start all
// Got fail error info without this patch
Configuring Port 0 (socket 1)
Ethdev port_id=0 rx_queue_id=0, new added offloads 0x800 must be
within per-queue offload capabilities 0x0 in rte_eth_rx_queue_setup()
Fail to configure port 0 rx queues //<-- Fail error info;
--------------------------------------------------------------------------
Bugzilla ID: 625
Fixes: 761c4d6690 ("app/testpmd: fix max Rx packet length for VLAN packets")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
Currently, the queue stats mapping has the following problems:
1) Many PMD drivers don't support queue stats mapping. But there is no
failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once queue mapping is set, unrelated and unmapped queues are also
displayed.
3) The configuration result does not take effect or cannot be queried
in real time.
4) The mapping arrays, "tx_queue_stats_mappings_array" &
"rx_queue_stats_mappings_array" are global and their sizes are based
on fixed max port and queue size assumptions.
5) These record structures, 'map_port_queue_stats_mapping_registers()'
and its sub-functions are redundant for the majority of drivers.
6) The display of the queue stats and queue stats mapping is mixed
together.
Since xstats is used to obtain queue statistics, we have made the
following simplifications and adjustments:
1) If the PMD requires and supports queue stats mapping, configure the
driver in real time by calling the ethdev API after executing the
command "set stat_qmap rx/tx ...". If not, the command is not accepted.
2) Based on the above adjustments, these record structures,
'map_port_queue_stats_mapping_registers()' and its sub-functions can
be removed. The "tx-queue-stats-mapping" & "rx-queue-stats-mapping"
parameters and 'parse_queue_stats_mapping_config()' can be removed
too.
3) Remove the display of queue stats mapping in 'fwd_stats_display()' &
'nic_stats_display()', and obtain queue stats via xstats. Since the
record structures are removed, 'nic_stats_mapping_display()' can be
deleted.
Fixes: 4dccdc789b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b6 ("app/testpmd: various updates")
Fixes: ed30d9b691 ("app/testpmd: add stats per queue")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When an age action becomes aged-out, the next call to the
rte_flow_get_aged_flows API should return the action context supplied
by the action configuration structure.
In case the age action is created by the shared action API, the shared
action context of the Testpmd application was not set.
In addition, the application handler of the contexts returned by the
rte_flow_get_aged_flows API didn't consider the fact that the action
could have been set by the shared action API, and treated it as a regular
flow context.
This caused a crash in Testpmd when the context was parsed.
This patch sets the context type in the flow and shared action contexts
and uses it to parse the aged-out contexts correctly.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of FDIR filters, the RTE flow API should be used.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Use the newer macros defined by meson in all DPDK source code, to ensure
there are no errors when the old non-standard macros are removed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
If an Rx queue is configured with the split feature, the extended
setup with the specified segment sizes and pool will be performed.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add command line parameter:
--rxoffs=X[,Y]
Sets the offsets of packet segments from the beginning of the
receiving buffer if the split feature is engaged. Affects only the
queues configured with split offloads (currently only BUFFER_SPLIT
is supported).
Add interactive mode command, providing the same:
testpmd> set rxoffs (x[,y]*)
Where x[,y]* represents a CSV list of values, without white space.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add command line parameter:
--rxpkts=X[,Y]
Sets the length of segments to scatter packets on receiving if the
split feature is engaged. Affects only the queues configured with split
offloads (currently only BUFFER_SPLIT is supported).
Add interactive mode command providing the same:
testpmd> set rxpkts (x[,y]*)
Where x[,y]* represents a CSV list of values, without white space.
Optionally, multiple memory pools can be specified with the --mbuf-size
command line parameter, and the mbufs to receive will be allocated
sequentially from these extra memory pools.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The command line parameter --mbuf-size is updated: it can handle
multiple values like the following:
--mbuf-size=2176,512,768,4096
specifying the creation of extra memory pools with the requested
mbuf data buffer sizes. If the buffer split feature is engaged,
the extra memory pools can be used to configure the Rx queues
with rte_eth_dev_rx_queue_setup_ex().
The extra pools are created with the requested sizes, and the pool names
are assigned with an appended index: mbuf_pool_socket_%socket_%index.
Index zero is used to specify the first, mandatory pool to maintain
compatibility with the existing code.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>