Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
flow queue 0 indirect_action 0 create action_id 9
ingress postpone yes action rss / end
flow queue 0 indirect_action 0 update action_id 9
action queue index 0 / end
flow queue 0 indirect_action 0 destroy action_id 9
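For reference, a minimal C sketch of the call behind the create example above
(assuming the rte_flow_async_action_handle_create() signature; port 0, queue 0,
and an RSS action whose parameters are left unset for brevity):

#include <rte_flow.h>

static struct rte_flow_action_handle *
enqueue_indirect_rss(uint16_t port, uint32_t queue)
{
    struct rte_flow_error error;
    const struct rte_flow_op_attr op_attr = { .postpone = 1 };      /* postpone yes */
    const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    struct rte_flow_action_rss rss = { 0 };   /* fill RSS parameters as needed */
    const struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_RSS,
        .conf = &rss,
    };

    /* Enqueue creation of the indirect action on the given flow queue. */
    return rte_flow_async_action_handle_create(port, queue, &op_attr,
                                               &conf, &action,
                                               NULL /* user_data */, &error);
}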
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0
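For reference, a minimal C sketch of the underlying call (assuming the
rte_flow_pull() signature; 32 is an arbitrary burst size):

#include <stdio.h>
#include <rte_common.h>
#include <rte_flow.h>

static void
drain_queue(uint16_t port, uint32_t queue)
{
    struct rte_flow_op_result results[32];
    struct rte_flow_error error = { 0 };
    int i, n;

    /* Retrieve the results of operations enqueued on this flow queue. */
    n = rte_flow_pull(port, queue, results, RTE_DIM(results), &error);
    for (i = 0; i < n; i++)
        if (results[i].status != RTE_FLOW_OP_SUCCESS)
            printf("operation %d failed\n", i);
}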
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow push 0 queue 0
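For reference, a minimal C sketch of the underlying call (assuming the
rte_flow_push() signature):

#include <stdio.h>
#include <rte_flow.h>

static void
flush_queue(uint16_t port, uint32_t queue)
{
    struct rte_flow_error error = { 0 };

    /* Push all postponed operations on this flow queue to the hardware. */
    if (rte_flow_push(port, queue, &error) != 0)
        printf("push failed: %s\n",
               error.message ? error.message : "(no message)");
}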
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
testpmd> flow queue 0 create 0 postpone no
template_table 6 pattern_template 0 actions_template 0
pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
testpmd> flow queue 0 destroy 0 postpone yes rule 0
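For reference, a hedged C sketch of the enqueue call behind the create example
(assuming the rte_flow_async_create() name and signature for the queue-based
create; the template table from the earlier steps is taken as a parameter):

#include <rte_flow.h>

static struct rte_flow *
enqueue_rule(uint16_t port, uint32_t queue,
             struct rte_flow_template_table *table)   /* e.g. table 6 above */
{
    struct rte_flow_error error;
    const struct rte_flow_op_attr op_attr = { .postpone = 0 };
    const struct rte_flow_item_eth eth_spec = {
        .dst.addr_bytes = { 0x00, 0x16, 0x3e, 0x31, 0x15, 0xc3 } };
    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Enqueue rule creation using pattern/actions templates 0 of the table. */
    return rte_flow_async_create(port, queue, &op_attr, table,
                                 pattern, 0 /* pattern_template */,
                                 actions, 0 /* actions_template */,
                                 NULL /* user_data */, &error);
}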
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
testpmd> flow template_table 0 create table_id 6
group 9 priority 4 ingress mode 1
rules_number 64 pattern_template 2 actions_template 4
testpmd> flow template_table 0 destroy table 6
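For reference, a minimal C sketch of the call behind the create example
(assuming a rte_flow_template_table_attr layout with an embedded rte_flow_attr
and an nb_flows field; the pattern/actions templates are assumed to exist):

#include <rte_flow.h>

static struct rte_flow_template_table *
create_table(uint16_t port, struct rte_flow_pattern_template *pt,
             struct rte_flow_actions_template *at)
{
    struct rte_flow_error error;
    const struct rte_flow_template_table_attr attr = {
        .flow_attr = { .group = 9, .priority = 4, .ingress = 1 },
        .nb_flows = 64,                     /* rules_number 64 */
    };

    /* One pattern template and one actions template, as in the example. */
    return rte_flow_template_table_create(port, &attr, &pt, 1, &at, 1, &error);
}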
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
testpmd> flow pattern_template 0 create pattern_template_id 2
template eth dst is 00:16:3e:31:15:c3 / end
testpmd> flow actions_template 0 create actions_template_id 4
template drop / end mask drop / end
testpmd> flow actions_template 0 destroy actions_template 4
testpmd> flow pattern_template 0 destroy pattern_template 2
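For reference, a minimal C sketch of the two calls behind the create examples
(assuming the rte_flow_pattern_template_create() and
rte_flow_actions_template_create() signatures):

#include <rte_flow.h>

static void
create_templates(uint16_t port)
{
    struct rte_flow_error error;
    /* Pattern template: match on the Ethernet destination MAC. */
    const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
    const struct rte_flow_item_eth eth_mask = {
        .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff } };
    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_pattern_template *pt =
        rte_flow_pattern_template_create(port, &pt_attr, pattern, &error);

    /* Actions template: drop, with the same action list used as the mask. */
    const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_actions_template *at =
        rte_flow_actions_template_create(port, &at_attr, actions,
                                         actions /* masks */, &error);
    (void)pt;
    (void)at;   /* later passed to rte_flow_template_table_create() */
}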
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management configuration.
Usage example: flow configure 0 queues_number 8 queues_size 256
Implement the rte_flow_info_get API to get the available resources.
Usage example: flow info 0
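For reference, a minimal C sketch of the two calls behind these commands
(assuming the rte_flow_info_get()/rte_flow_configure() signatures; port 0 and
8 queues of size 256 as in the examples):

#include <rte_flow.h>

static int
configure_flow_queues(uint16_t port)
{
    struct rte_flow_error error;
    struct rte_flow_port_info port_info;
    struct rte_flow_queue_info queue_info;
    const struct rte_flow_port_attr port_attr = { 0 };
    const struct rte_flow_queue_attr queue_attr = { .size = 256 };
    const struct rte_flow_queue_attr *queue_attrs[8];
    int i, ret;

    /* "flow info 0": report the resources the port can offer. */
    ret = rte_flow_info_get(port, &port_info, &queue_info, &error);
    if (ret != 0)
        return ret;

    /* "flow configure 0 queues_number 8 queues_size 256". */
    for (i = 0; i < 8; i++)
        queue_attrs[i] = &queue_attr;
    return rte_flow_configure(port, &port_attr, 8, queue_attrs, &error);
}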
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add gre_option command for matching optional fields
(checksum/key/sequence) in the GRE header. The item must follow the gre
item and does not change the flags in the gre item; the application
should set the flags in the gre item accordingly.
The application can still use the gre_key item 'gre_key value is xx' for
key matching; the effect is the same as using 'gre_option key is xx'.
The examples for gre_option are as follows:
To match on checksum field with value 0x11:
testpmd> ... pattern / eth / gre c_bit is 1 / gre_option checksum is
0x11 / end ..
To match on checksum field with value 0x11 and any value of key:
testpmd> ... pattern / eth / gre c_bit is 1 k_bit is 1 / gre_option
checksum is 0x11 / end ..
To match on checksum field with value 0x11 and no key field in packet:
testpmd> ... pattern / eth / gre c_bit is 1 k_bit is 0 / gre_option
checksum is 0x11 / end ..
The invalid patterns for gre_option are as follows:
testpmd> ... pattern / eth / gre / gre_option checksum is 0x11 / end ..
(c_bit in gre item not present)
testpmd> ... pattern / eth / gre c_bit is 0 / gre_option checksum is 0x11 /
end .. (c_bit is unset for gre item, but checksum is
specified by gre_option item)
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
This patch adds L2TPv2 control message and 5 types of data message
support for testpmd.
The added L2TPv2 message types are listed below:
1. L2TPv2 control
2. L2TPv2
3. L2TPv2 + length option
4. L2TPv2 + sequence option
5. L2TPv2 + offset option
6. L2TPv2 + length option + sequence option
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
This patch defines new RSS offload type for L2TPv2, which
is required when users want to distribute packets based on
the L2TPv2 session ID field.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds command line options to configure queue based
priority flow control.
- The command syntax is as below:
set pfc_queue_ctrl <port_id> rx <on|off> <tx_qid> <tx_tc> \
tx <on|off> <rx_qid> <rx_tc> <pause_time>
- Example command to configure queue based priority flow control
on the Rx and Tx side for port 0, Rx queue 0, Tx queue 0 with pause
time 2047:
testpmd> set pfc_queue_ctrl 0 rx on 0 0 tx on 0 0 2047
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch enables providing the key and mask for raw rules
as hexadecimal values. A new parameter, pattern_mask, is added
to support this.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The current approach detects the proxy port on each port (re-)plug and
may spam the log with error messages if the PMD does not support flows.
As testpmd is a debug tool, it must not do such implicit port handling.
Instead, the new API should be called only when the user requests that.
Revoke the existing code. Implement an explicit command-line primitive
to let the user find the proxy port themselves. Provide relevant hints.
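For reference, a minimal C sketch of how the proxy port can be found,
presumably via rte_flow_pick_transfer_proxy():

#include <stdio.h>
#include <rte_flow.h>

static void
show_transfer_proxy(uint16_t port)
{
    struct rte_flow_error error;
    uint16_t proxy_port;

    /* Ask the PMD which port must be used to manage transfer flows. */
    if (rte_flow_pick_transfer_proxy(port, &proxy_port, &error) == 0)
        printf("transfer flows for port %u are managed via port %u\n",
               port, proxy_port);
}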
Fixes: 1179f05cc9a0 ("ethdev: query proxy port to manage transfer flows")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
To support shared Rx queues, this patch introduces a dedicated forwarding
engine. The engine groups received packets by mbuf->port into sub-groups,
updates stream statistics and simply frees the packets.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Adds the "--rxq-share=X" parameter to enable shared Rx queues.
An Rx queue is shared if the device supports it; otherwise it falls back
to a standard Rx queue.
Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
which implies all ports join share group 1. The queue ID is mapped equally
to the shared Rx queue ID.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add support to testpmd for parsing the L2TPv2 and PPP protocol patterns.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Network port hardware is shipped with a fixed number of
supported network protocols. If an application must work with a
protocol that is not included in the port hardware by default, it
can try to add the new protocol to the port hardware.
A flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and to
offload flows that match the header elements.
An application must complete the following tasks to create a flow
rule that matches a custom header:
1. Create a flex item object in port hardware.
The application must provide the custom header configuration to the PMD.
The PMD will use that configuration to create the flex item object in
port hardware.
2. Create flex patterns to match. A flex pattern has spec and mask
components, like a regular flow item. Combined together, spec and mask
can target a unique data sequence or a number of data sequences in the
custom header.
Flex patterns of the same flex item can have different lengths.
A flex pattern is identified by a unique handle value.
3. Create a flow rule with a flex flow item that references the
flex pattern.
Testpmd flex CLI commands are:
testpmd> flow flex_item create <port> <flex_id> <filename>
testpmd> set flex_pattern <pattern_id> \
spec <spec data> mask <mask data>
testpmd> set flex_pattern <pattern_id> is <spec_data>
testpmd> flow create <port> ... \
/ flex item is <flex_id> pattern is <pattern_id> / ...
The patch works with the jansson library API.
A new optional dependency on the jansson library is added for
testpmd. If jansson is not detected, the flex item functionality
is disabled.
Jansson development files must be present:
jansson.pc, jansson.h, libjansson.[a,so]
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a 'display-xstats' option to be used along with Rx/Tx statistics
(i.e. the 'stats-period' option or the 'show port stats' interactive command)
to display a specified list of extended statistics.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
For use in "transfer" flows. Supposed to send matching traffic to the
entity represented by the given ethdev, at embedded switch level.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
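This presumably corresponds to RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT; a hedged
C sketch, with ethdev port 1 as an arbitrary example:

#include <rte_flow.h>

/* The action configuration only carries the ethdev port ID. */
static const struct rte_flow_action_ethdev to_represented_entity = { .port_id = 1 };
static const struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
      .conf = &to_represented_entity },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* "transfer" flows only; created via the transfer proxy port. */
static const struct rte_flow_attr attr = { .transfer = 1 };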
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to send matching traffic to
the given ethdev (to the application), at embedded switch level.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic entering the
embedded switch from the entity represented by the given ethdev.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
Must not be combined with direction attributes.
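This presumably corresponds to RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT; a hedged
C sketch, with ethdev port 1 as an arbitrary example:

#include <rte_flow.h>

static const struct rte_flow_item_ethdev from_represented_entity = { .port_id = 1 };
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
      .spec = &from_represented_entity },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};
/* Only "transfer" is set; ingress/egress direction attributes are not combined. */
static const struct rte_flow_attr attr = { .transfer = 1 };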
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic
entering the embedded switch from the given ethdev.
Must not be combined with direction attributes.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add a new cmdline to help diagnose bonding mode 4 in testpmd.
Show the LACP information about the bonded device and its slaves:
show bonding lacp info <bonded device port_id>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
A more fine-grained flow API action, RTE_FLOW_ACTION_TYPE_SAMPLE, should
be used instead of it.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This updates the gtp_psc flow item to use the net header
definition of gtp_psc, based on the 3GPP TS 38.415 specification (38415-g30).
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds multi-process support for testpmd.
For example the following commands run two testpmd
processes:
* the primary process:
./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=0
* the secondary process:
./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=1
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Make the number of flows in flowgen configurable by setting the
parameter --flowgen-flows=N.
Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Spell checked and corrected the documentation.
If there are any errors, or I have changed something that wasn't an error,
please reach out to me so I can update the dictionary.
Cc: stable@dpdk.org
Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
Add a new testpmd pattern field 'last_rsvd' that supports matching the
last 8 bits of the VXLAN header.
The examples for the "last_rsvd" pattern field are as below:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow will match exactly the last 8 bits with the value 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow will only match the MSB of the last 8 bits being 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The new flow item allows the PMD to offload matching on the IPv4 IHL
field, if the hardware supports that operation.
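For reference, a hedged C sketch of matching on the IHL bits via the existing
rte_flow_item_ipv4 header (IHL is the low nibble of version_ihl; the value 5
below is an arbitrary example meaning a 20-byte header without options):

#include <rte_flow.h>

static const struct rte_flow_item_ipv4 ihl_spec = { .hdr.version_ihl = 0x45 };
static const struct rte_flow_item_ipv4 ihl_mask = { .hdr.version_ihl = 0x0f };
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_IPV4,
      .spec = &ihl_spec, .mask = &ihl_mask },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};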
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Currently, the action RTE_FLOW_ACTION_TYPE_METER_COLOR is defined.
Add the CLI for this action: color type (types)
There are three types: green, yellow and red.
Example for the new policy meter CLIs:
add port meter policy 0 1 g_actions color type green / end y_actions
color type yellow / end r_actions color type red / end
In the above command, the action type is
RTE_FLOW_ACTION_TYPE_METER_COLOR and the meter policy action list is:
green -> green, yellow -> yellow, red -> red.
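For reference, the g_actions entry "color type green" corresponds roughly to
the following C sketch (assuming struct rte_flow_action_meter_color and the
rte_color enum):

#include <rte_flow.h>
#include <rte_meter.h>

static const struct rte_flow_action_meter_color green_conf = {
    .color = RTE_COLOR_GREEN,
};
static const struct rte_flow_action g_actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_METER_COLOR, .conf = &green_conf },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};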
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch supports the query of the link flow control parameter
on a port.
The command format is as follows:
show port <port_id> flow_ctrl
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Add the create/del policy CLIs to support actions per color.
The CLIs are:
Create: add port meter policy (port_id) (policy_id) g_actions (actions)
y_actions (actions) r_actions (actions)
Delete: del port meter policy (port_id) (policy_id)
Examples:
testpmd> add port meter policy 0 1 g_actions rss / end y_actions end
r_actions drop / end
testpmd> del port meter policy 0 1
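For reference, a minimal C sketch of the rte_mtr calls behind the examples
above (assuming the rte_mtr_meter_policy_add()/delete() signatures; port 0,
policy 1, RSS parameters left unset for brevity):

#include <rte_flow.h>
#include <rte_mtr.h>

static void
policy_example(uint16_t port)
{
    struct rte_mtr_error error;
    struct rte_flow_action_rss rss = { 0 };   /* fill RSS parameters as needed */
    const struct rte_flow_action g_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    const struct rte_flow_action y_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    const struct rte_flow_action r_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_mtr_meter_policy_params params = {
        .actions = { g_actions, y_actions, r_actions },
    };

    rte_mtr_meter_policy_add(port, 1 /* policy_id */, &params, &error);
    rte_mtr_meter_policy_delete(port, 1 /* policy_id */, &error);
}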
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, the flow meter policy does not support multiple actions
per color; also, the allowed action types per color are very limited.
In addition, the policy cannot be pre-defined.
Due to the growth in flow action offload abilities, there is potential
for the user to use a variety of actions per color differently.
This new meter policy API allows this potential in the most common
ethdev way, using the rte_flow action definition.
A list of rte_flow actions will be provided by the user per color
in order to create a meter policy.
In addition, the API requires the policy to be pre-defined before
meter creation in order to allow efficient sharing of a single policy
by multiple meters.
meter_policy_id is added into struct rte_mtr_params
so that the policy can be retrieved during meter creation.
Allow coloring the packet using a new rte_flow_action_color,
as could be done by the old policy API.
Add two common policy templates as macros in the header file.
The following API functions were added:
- rte_mtr_meter_policy_add
- rte_mtr_meter_policy_delete
- rte_mtr_meter_policy_update
- rte_mtr_meter_policy_validate
The following structs were changed:
- rte_mtr_params
- rte_mtr_capabilities
The following API was deleted:
- rte_mtr_policer_actions_update
To support this API the following apps were changed:
app/test-flow-perf: clean meter policer
app/testpmd: clean meter policer
To support this API the following drivers were changed:
net/softnic: support meter policy API
1. Cleans meter rte_mtr_policer_action.
2. Supports policy API to get color action as policer action did.
The color action will be mapped into rte_table_action_policer.
net/mlx5: clean meter creation management
Cleans and breaks part of the current meter management
in order to allow a better design with the policy API.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The command lines for testing connection tracking are added. To create
a conntrack object, 3 parts are needed:
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created, it
can be used in flow creation. Before updating, the same
structure is also needed together with the update command
"conntrack_update" to update the "dir" or "ctx".
After the flow with the conntrack action is created, the packet should
jump to the next flow for result checking with the conntrack item. The
state is defined with bits, and a valid combination can be
supported.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
Usage example:
Match that packet integrity checks are OK. The checks depend on the
packet layers; for example, an ICMP packet will not have its L4 level
checked.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
Match that an L4 packet is OK - check the L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
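For reference, a hedged C sketch of the first example above (mask 0x01 /
spec 0x01), assuming the rte_flow_item_integrity layout where the overall
packet-OK check is the least significant bit of 'value':

#include <rte_flow.h>

/* Require the overall packet integrity bit to be set. */
static const struct rte_flow_item_integrity ok_spec = { .value = 0x01 };
static const struct rte_flow_item_integrity ok_mask = { .value = 0x01 };
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
      .spec = &ok_spec, .mask = &ok_mask },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};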
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Right now, rte_flow_shared_action_* APIs are used for some shared
actions, like RSS, count. The shared action should be created before
using it inside a flow. These shared actions sometimes are not
really shared but just some indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that could be created and stored
within a flow rule. Such action is tied to its flow rule and
cannot be reused.
2. the indirect action, in the past named shared_action. It is
created from a direct action, like count or rss, and then used
in the flow rules with an object handle. The PMD will take care
of resolving the indirect action to the direct action
when it is referenced.
The indirect action is accessed (update / query) without any flow rule,
just via the action object handle. For example, when querying or
resetting a counter, it could be done out of any flow using this
counter, but only the handle of the counter action object is
required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
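For reference, a minimal C sketch of the renamed calls (create / query /
destroy via the handle, assuming the rte_flow_action_handle_* signatures and
an indirect counter action):

#include <rte_flow.h>

static void
indirect_count_example(uint16_t port)
{
    struct rte_flow_error error;
    const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    const struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_action_handle *handle;
    struct rte_flow_query_count counters = { .reset = 0 };

    handle = rte_flow_action_handle_create(port, &conf, &action, &error);
    if (handle == NULL)
        return;
    /* Query the counter via its handle, independent of any flow rule. */
    rte_flow_action_handle_query(port, handle, &counters, &error);
    rte_flow_action_handle_destroy(port, handle, &error);
}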
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated accordingly.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.
The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
counter, the only "update" supported should be the reset. So
passing a rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operation should be supported for some other
action, such a pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
parameter will not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same rte_flow_action struct - in order not to
break the current driver implementation - or some wrapper
structures with bits as masks to indicate which part to be
updated, depending on real needs of the corresponding direct
action. For different direct actions, the structures of indirect
action objects updating will be different.
All the underlying PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to generic pointer, the mlx5 PMD that uses these
APIs needs to adapt to the new APIs as well.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The "--tx-queue-stats-mapping" and "--rx-queue-stats-mapping" options,
and the display and clear of "stats_map", have been removed from
testpmd.
This patch deletes some descriptions about queue stats mapping
in the testpmd doc.
Fixes: 08dcd1870686 ("app/testpmd: fix queue stats mapping configuration")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The 'qfi' field is 8 bits, representing a single bit for
PPP (Paging Policy Presence), a single bit for RQI
(Reflective QoS Indicator) and 6 bits for QFI
(QoS Flow Identifier).
This is based on 3GPP TS 38.415 (38415-g30):
https://www.3gpp.org/ftp/Specs/archive/38_series/38.415/38415-g30.zip
Updated the doxygen comment and the mask for 'qfi'
to properly identify the full 8 bits of the field.
Note: changing the default mask would cause different
patterns to be generated by testpmd.
Fixes: 346553db5bd1 ("ethdev: add GTP extension header to flow API")
Cc: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for single flow dump.
The CLI to dump one rule: flow dump PORT rule ID
The CLI to dump all rules: flow dump PORT all
Examples:
testpmd> flow dump 0 all
testpmd> flow dump 0 rule 0
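For reference, a minimal C sketch of the underlying call (assuming the
rte_flow_dev_dump() signature with the per-rule flow pointer):

#include <stdio.h>
#include <rte_flow.h>

static void
dump_flows(uint16_t port, struct rte_flow *single_rule)
{
    struct rte_flow_error error;

    /* NULL dumps every rule on the port; a non-NULL pointer dumps one rule. */
    rte_flow_dev_dump(port, single_rule, stdout, &error);
}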
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add meter profile packet_mode to the Ethernet device.
One example:
add port meter profile rfc2697 (port_id) (profile_id)
(cir) (cbs) (ebs) (packet_mode)
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add support for rte_flow_action_nvgre_encap as a sample action.
An example of the test-pmd commands:
1. set nvgre ip-version ... tni ... ip-src ... ip-dst ...
set raw_encap 1 eth src... / ipv4... /...
set sample_actions 2 nvgre / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow will result in all matched egress packets being
encapsulated and sent to the wire, and also being mirrored using the
NVGRE encapsulation data and sent to the wire.
Signed-off-by: Salem Sol <salems@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for rte_flow_action_vxlan_encap as a sample action.
An example of the test-pmd commands:
1. set vxlan ip-version ... vni ... udp-src ...
set raw_encap 1 eth src.../ ipv4.../...
set sample_actions 2 vxlan_encap / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow will result in all matched egress packets being
encapsulated and sent to the wire, and also being mirrored using the
VXLAN encapsulation data and sent to the wire.
Signed-off-by: Salem Sol <salems@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update documentation for sample action usage in testpmd and
show the command line example.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for forced Ethernet speed setting.
Currently testpmd tries to configure the Ethernet port in autoneg mode.
It is not possible to set the Ethernet port to a specific speed while
starting testpmd. In some cases, the capability to configure a forced speed
for the Ethernet port during initialization may be necessary. This patch
adds this support.
The patch assumes full duplex setting and does not attempt to change that.
So speeds like 10M, 100M are not configurable using this method.
The command line to configure a forced speed of 10G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 10000
The command line to configure a forced speed of 50G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 50000
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
"show port cap all|<port_id>" was to display offload configuration of
port(s).
But later two other commands added to show same information in more
accurate way:
show port (port_id) rx_offload configuration
show port (port_id) tx_offload configuration
These new commands can both show port and queue level configuration,
also with their capabilities counterparts easier to see offload
capability and configuration of the port in similar syntax.
So the functionality is duplicated and removing this version, to favor
the new commands.
Another problem with this command is it requires each new offload to be
added into the function to display them, and there were missing offloads
that are not displayed, this requirement for sure will create gaps by
time as new offloads added.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>