Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The backward-compatibility macros can be removed in the next LTS release.
Also update some struct names to have the 'rte_eth' prefix.
All internal components are switched to the new names.
Syntax fixed on lines that this patch touches.
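As an illustrative sketch of the compatibility pattern (two examples;
the full list lives in the ethdev headers):
    /* New RTE_ETH-prefixed names, with the old names kept as aliases. */
    #define ETH_LINK_UP    RTE_ETH_LINK_UP    /* backward-compat macro */
    #define ETH_RSS_IP     RTE_ETH_RSS_IP     /* backward-compat macro */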
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
To support shared Rx queues, this patch introduces a dedicated forwarding
engine. The engine groups received packets into sub-groups by mbuf->port,
updates stream statistics and simply frees the packets.
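A minimal sketch of the engine's per-burst logic (update_stream_stats()
is a hypothetical helper standing in for the real statistics update):
    struct rte_mbuf *pkts[MAX_PKT_BURST];
    uint16_t i, nb_rx;

    nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts, MAX_PKT_BURST);
    for (i = 0; i < nb_rx; i++) {
            /* A shared-RxQ burst may mix packets from several member
             * ports; mbuf->port identifies the actual receiving port. */
            update_stream_stats(pkts[i]->port, pkts[i]); /* hypothetical */
            rte_pktmbuf_free(pkts[i]);
    }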
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Adds "--rxq-share=X" parameter to enable shared RxQ.
Rx queue is shared if device supports, otherwise fallback to standard
RxQ.
Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
implies all ports join share group 1. Queue ID is mapped equally with
shared Rx queue ID.
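For example, a usage sketch grouping every two ports into one share
group (device arguments are placeholders):
    dpdk-testpmd -a <dev1> -a <dev2> -- -i --rxq-share=2 --rxq=4 --txq=4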
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add support to testpmd for parsing the L2TPv2 and PPP protocol patterns.
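A hypothetical matching rule (assuming the items are exposed as l2tpv2
and ppp):
    flow create 0 ingress pattern eth / ipv4 / udp / l2tpv2 / ppp / end
        actions queue index 2 / end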
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Network port hardware is shipped with a fixed number of
supported network protocols. If an application must work with a
protocol that is not included in the port hardware by default, it
can try to add the new protocol to the port hardware.
The flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and
to offload flows that match the header elements.
An application must complete the following tasks to create a flow
rule that matches a custom header:
1. Create a flex item object in port hardware.
The application must provide the custom header configuration to the PMD.
The PMD will use that configuration to create the flex item object in
port hardware.
2. Create flex patterns to match. A flex pattern has spec and mask
components, like a regular flow item. Combined together, the spec and
mask can target a unique data sequence or a number of data sequences in
the custom header.
Flex patterns of the same flex item can have different lengths.
A flex pattern is identified by a unique handler value.
3. Create a flow rule with a flex flow item that references
a flex pattern.
Testpmd flex CLI commands are:
testpmd> flow flex_item create <port> <flex_id> <filename>
testpmd> set flex_pattern <pattern_id> \
spec <spec data> mask <mask data>
testpmd> set flex_pattern <pattern_id> is <spec_data>
testpmd> flow create <port> ... \
/ flex item is <flex_id> pattern is <pattern_id> / ...
The patch works with the jansson library API.
A new optional dependency on the jansson library is added for
testpmd. If jansson is not detected, the flex item functionality
is disabled.
The jansson development files must be present:
jansson.pc, jansson.h, libjansson.[a,so]
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a 'display-xstats' option for use together with Rx/Tx statistics
(i.e. the 'stats-period' option or the 'show port stats' interactive
command) to display a specified list of extended statistics.
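A usage sketch (the xstat names are illustrative; any names reported by
'show port xstats' should work):
    dpdk-testpmd -- -i --display-xstats=rx_good_packets,rx_missed_errors \
        --stats-period=2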
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
For use in "transfer" flows. Supposed to send matching traffic to the
entity represented by the given ethdev, at the embedded switch level.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
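A hypothetical testpmd usage sketch (assuming the action is exposed as
represented_port with an ethdev_port_id argument):
    flow create 0 transfer pattern eth / end
        actions represented_port ethdev_port_id 1 / end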
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to send matching traffic to
the given ethdev (to the application), at the embedded switch level.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic entering the
embedded switch from the entity represented by the given ethdev.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
Must not be combined with direction attributes.
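A hypothetical testpmd usage sketch (assuming the item is exposed as
represented_port with an ethdev_port_id field):
    flow create 0 transfer pattern represented_port ethdev_port_id is 1 / end
        actions count / drop / end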
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic
entering the embedded switch from the given ethdev.
Must not be combined with direction attributes.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add a new command line to help diagnose bonding mode 4 in testpmd.
It shows the LACP information about the bonded device and its slaves:
show bonding lacp info <bonded device port_id>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
The more fine-grained flow API action RTE_FLOW_ACTION_TYPE_SAMPLE should
be used instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This updates the gtp_psc flow item to use the network header
definition of the GTP PSC, based on 3GPP TS 38.415 (38415-g30).
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds multi-process support for testpmd.
For example, the following commands run two testpmd
processes:
* the primary process:
./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=0
* the secondary process:
./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=1
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Make the number of flows in flowgen configurable by setting the
parameter --flowgen-flows=N.
Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Spell-checked and corrected the documentation.
If there are any errors, or I have changed something that wasn't an
error, please reach out to me so I can update the dictionary.
Cc: stable@dpdk.org
Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
Add a new testpmd pattern field 'last_rsvd' that supports matching
the last 8 bits of the VXLAN header.
The examples for the "last_rsvd" pattern field are as below:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow matches exactly when the last 8 bits equal 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow matches only when the MSB of the last 8 bits is 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The new flow item allows the PMD to offload matching on the IPv4 IHL
field, if the hardware supports that operation.
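A hypothetical testpmd rule (assuming the field is exposed as
version_ihl on the ipv4 item; the exact token syntax may differ; the
mask keeps only the low nibble so the version bits are ignored):
    flow create 0 ingress pattern eth / ipv4 version_ihl spec 0x45
        version_ihl mask 0x0f / end actions drop / end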
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The action RTE_FLOW_ACTION_TYPE_METER_COLOR is already defined.
Add the CLI for this action: color type (types).
There are three types: green, yellow and red.
Example for the new policy meter CLIs:
add port meter policy 0 1 g_actions color type green / end y_actions
color type yellow / end r_actions color type red / end
In the above command, the action type is
RTE_FLOW_ACTION_TYPE_METER_COLOR and the meter policy action list is:
green -> green, yellow -> yellow, red -> red.
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch supports querying the link flow control parameters
on a port.
The command format is as follows:
show port <port_id> flow_ctrl
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Add the create/del policy CLIs to support actions per color.
The CLIs are:
Create: add port meter policy (port_id) (policy_id) g_actions (actions)
y_actions (actions) r_actions (actions)
Delete: del port meter policy (port_id) (policy_id)
Examples:
testpmd> add port meter policy 0 1 g_actions rss / end y_actions end
r_actions drop / end
testpmd> del port meter policy 0 1
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, the flow meter policy does not support multiple actions
per color; also, the allowed action types per color are very limited.
In addition, the policy cannot be pre-defined.
Due to the growth in flow action offload capabilities, there is
potential for the user to use a variety of actions per color.
This new meter policy API enables this potential in the most common
ethdev way, using the rte_flow action definition.
A list of rte_flow actions is provided by the user per color
in order to create a meter policy.
In addition, the API requires the policy to be pre-defined before
meter creation, in order to allow a single policy to be shared
efficiently by multiple meters.
meter_policy_id is added into struct rte_mtr_params so that the
policy can be retrieved during meter creation.
Allow coloring the packet using a new rte_flow_action_color,
as could be done with the old policy API.
Add two common policy templates as macros in the header file.
The following API functions were added:
- rte_mtr_meter_policy_add
- rte_mtr_meter_policy_delete
- rte_mtr_meter_policy_update
- rte_mtr_meter_policy_validate
The following structs were changed:
- rte_mtr_params
- rte_mtr_capabilities
The following API was deleted:
- rte_mtr_policer_actions_update
To support this API, the following apps were changed:
app/test-flow-perf: clean meter policer
app/testpmd: clean meter policer
To support this API, the following drivers were changed:
net/softnic: support meter policy API
1. Cleans meter rte_mtr_policer_action.
2. Supports the policy API to get the color action as the policer
action did. The color action will be mapped into
rte_table_action_policer.
net/mlx5: clean meter creation management
Cleans and breaks part of the current meter management
in order to allow a better design with the policy API.
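A usage sketch of the added API (assuming the policy parameters are
carried in struct rte_mtr_meter_policy_params with one rte_flow action
list per color; error handling trimmed):
    struct rte_flow_action g_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_PASSTHRU }, /* example action */
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_action r_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_mtr_meter_policy_params policy = {
            .actions = {
                    [RTE_COLOR_GREEN]  = g_actions,
                    [RTE_COLOR_YELLOW] = NULL, /* no yellow actions */
                    [RTE_COLOR_RED]    = r_actions,
            },
    };
    struct rte_mtr_error error;
    rte_mtr_meter_policy_add(port_id, policy_id, &policy, &error);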
Signed-off-by: Li Zhang <lizh@nvidia.com>
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed.
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This creates a full conntrack action structure for the indirect
action. Once the indirect action handle for "conntrack" is created, it
can be used in flow creation. Before updating, the same
structure is also needed, together with the update command
"conntrack_update", to update the "dir" or "ctx".
After the flow with the conntrack action is created, the packet should
jump to the next flow for result checking with the conntrack item. The
state is defined with bits, and a valid combination can be
supported.
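A hypothetical end-to-end sketch (token syntax abridged; the conntrack
object parameters come from the three "set conntrack" commands above):
    testpmd> flow indirect_action 0 create ingress action conntrack / end
    testpmd> flow create 0 group 0 ingress pattern eth / ipv4 / tcp / end
        actions indirect 0 / jump group 1 / end
    testpmd> flow create 0 group 1 ingress pattern eth / ipv4 / tcp /
        conntrack is 1 / end actions queue index 2 / end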
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
Usage example:
Match that packet integrity checks are OK. The checks depend on the
packet layers; for example, an ICMP packet will not check the L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
Match that L4 packet is OK - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Right now, the rte_flow_shared_action_* APIs are used for some shared
actions, like RSS or count. The shared action must be created before
it is used inside a flow. These shared actions are sometimes not
really shared, but are just indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that can be created and stored
within a flow rule. Such an action is tied to its flow rule and
cannot be reused.
2. the indirect action, in the past named shared_action. It is
created from a direct action, like count or RSS, and then used
in flow rules with an object handle. The PMD will take care
of resolving the indirect action to the direct action
when it is referenced.
The indirect action is accessed (update / query) without any flow
rule, just via the action object handle. For example, a counter can be
queried or reset independently of any flow using it; only the handle
of the counter action object is required.
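A usage sketch (assuming the creation conf struct is named
rte_flow_indir_action_conf; error handling trimmed):
    struct rte_flow_error err;
    struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    struct rte_flow_action count = { .type = RTE_FLOW_ACTION_TYPE_COUNT };
    struct rte_flow_action_handle *handle =
            rte_flow_action_handle_create(port_id, &conf, &count, &err);
    /* Reference 'handle' from flow rules via
     * RTE_FLOW_ACTION_TYPE_INDIRECT, then query the counter directly
     * through the handle: */
    struct rte_flow_query_count stats = { .reset = 1 };
    rte_flow_action_handle_query(port_id, handle, &stats, &err);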
The indirect action object can be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque, defined by
each driver, and possibly different per direct action type.
The old name "shared" is somewhat improper and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to adapt.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to give a
correct explanation.
The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
counter, the only "update" supported should be the reset. So
passing a rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operations should be supported, for some other
action, such pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
parameter will not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same of rte_flow_action* struct - in order not to
break the current driver implementation, or some wrapper
structures with bits as masks to indicate which part to be
updated, depending on real needs of the corresponding direct
action. For different direct actions, the structures of indirect
action objects updating will be different.
All the underlying PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed to use
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to a generic pointer, the mlx5 PMD that uses
these APIs needs to adapt to the new APIs as well.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The "--tx-queue-stats-mapping" and "--rx-queue-stats-mapping"
and display and clear of "stats_map" have been removed from
testpmd.
This patch deletes some descriptions about queue stats mapping
in testpmd doc.
Fixes: 08dcd18706 ("app/testpmd: fix queue stats mapping configuration")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The 'qfi' field is 8 bits, which represent a single bit for
PPP (Paging Policy Presence), a single bit for RQI
(Reflective QoS Indicator) and 6 bits for QFI
(QoS Flow Identifier).
This is based on 3GPP TS 38.415 (38415-g30):
https://www.3gpp.org/ftp/Specs/archive/38_series/38.415/38415-g30.zip
Updated the doxygen comment and the mask for 'qfi'
to properly identify the full 8 bits of the field.
Note: changing the default mask would cause testpmd to generate
different patterns.
Fixes: 346553db5b ("ethdev: add GTP extension header to flow API")
Cc: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for single flow dump.
The CLI to dump one rule: flow dump PORT rule ID
The CLI to dump all rules: flow dump PORT all
Examples:
testpmd> flow dump 0 all
testpmd> flow dump 0 rule 0
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add meter profile packet_mode to the Ethernet device.
One example:
add port meter profile rfc2697 (port_id) (profile_id)
(cir) (cbs) (ebs) (packet_mode)
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add support for rte_flow_action_nvgre_encap as a sample action.
An example of the test-pmd command:
1. set nvgre ip-version ... tni ... ip-src ... ip-dst ...
set raw_encap 1 eth src... / ipv4... /...
set sample_actions 2 nvgre / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow results in all matched egress packets being encapsulated
and sent to the wire, while mirrored copies of the packets are
encapsulated with the NVGRE data and also sent to the wire.
Signed-off-by: Salem Sol <salems@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for rte_flow_action_vxlan_encap as a sample action.
An example of the test-pmd command:
1. set vxlan ip-version ... vni ... udp-src ...
set raw_encap 1 eth src.../ ipv4.../...
set sample_actions 2 vxlan_encap / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow results in all matched egress packets being encapsulated
and sent to the wire, while mirrored copies of the packets are
encapsulated with the VXLAN data and also sent to the wire.
Signed-off-by: Salem Sol <salems@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update documentation for sample action usage in testpmd and
show the command line example.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for forced Ethernet speed setting.
Currently, testpmd tries to configure the Ethernet port in autoneg mode.
It is not possible to set the Ethernet port to a specific speed when
starting testpmd. In some cases, the capability to configure a forced
speed for the Ethernet port during initialization may be necessary.
This patch adds that support.
The patch assumes a full-duplex setting and does not attempt to change
it, so speeds like 10M and 100M are not configurable using this method.
The command line to configure a forced speed of 10G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 10000
The command line to configure a forced speed of 50G:
dpdk-testpmd -c 0xff -- -i --eth-link-speed 50000
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
"show port cap all|<port_id>" was to display offload configuration of
port(s).
But later two other commands added to show same information in more
accurate way:
show port (port_id) rx_offload configuration
show port (port_id) tx_offload configuration
These new commands can both show port and queue level configuration,
also with their capabilities counterparts easier to see offload
capability and configuration of the port in similar syntax.
So the functionality is duplicated and removing this version, to favor
the new commands.
Another problem with this command is it requires each new offload to be
added into the function to display them, and there were missing offloads
that are not displayed, this requirement for sure will create gaps by
time as new offloads added.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Add support for displaying the count of used descriptors (filled by
hardware but not yet processed by the driver) on a receive queue, in
order to allow the rte_eth_rx_queue_count() API to be exercised and
tested.
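A minimal sketch of the API being exercised:
    int used = rte_eth_rx_queue_count(port_id, queue_id);
    /* 'used' is the number of descriptors filled by hardware but not
     * yet processed by the driver, or a negative errno on failure. */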
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
In the Testpmd Flow rules management section, correct
the TPID values in the Sample QinQ flow rules subsection.
Also, replace the keyword qinq_strip with extend in the
vlan set command.
Fixes: bef3bfe7d5 ("doc: revise sample testpmd flow commands")
Cc: stable@dpdk.org
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
When testing high performance numbers, it is often the case that CPU
performance limits the maximum values the device can reach (both in pps
and in gbps).
Here, instead of recreating each packet separately, we use a clones
counter to resend the same mbuf to the line multiple times.
PMDs handle that transparently due to the reference counting inside
the mbuf.
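A simplified sketch of the cloning idea (variable names illustrative):
    /* Transmit the same mbuf (1 + clones) times: bump the reference
     * count by 'clones' so the PMD frees the mbuf only after the last
     * copy has been sent. */
    rte_mbuf_refcnt_update(pkt, clones);
    for (i = 0; i <= clones && nb < MAX_PKT_BURST; i++)
            pkts_burst[nb++] = pkt;
    nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts_burst, nb);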
This helps reach max PPS on small packet sizes.
Some data from our 2 port x 50G device, using 2*6 Tx queues, 64B
packets, on a PowerEdge R7525 with an AMD EPYC 7452:
./build/app/dpdk-testpmd -l 32-63 -- --forward-mode=flowgen \
--rxq=6 --txq=6 --disable-crc-strip --burst=512 \
--flowgen-clones=0 --txd=4096 --stats-period=1 --txpkts=64
Gives ~46MPPS TX output:
Tx-pps: 22926849 Tx-bps: 11738590176
Tx-pps: 23642629 Tx-bps: 12105024112
Setting flowgen-clones to 512 pushes TX almost to our device's
physical limit (68MPPS) using the same 2*6 queues (cores):
Tx-pps: 34357556 Tx-bps: 17591073696
Tx-pps: 34353211 Tx-bps: 17588802640
Doing similar measurements per core, I see one core can do
6.9MPPS (without clones) vs 11MPPS (with clones).
Verified on Marvell qede and atlantic PMDs.
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch defines new RSS offload types for MPLS. The distribution
will be based on the MPLS tag.
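A hedged configuration sketch (shown with the RTE_ETH-prefixed name
per the namespace change above):
    struct rte_eth_conf conf = {
            .rxmode.mq_mode = RTE_ETH_MQ_RX_RSS,
            /* Distribute packets by hashing on the MPLS tag. */
            .rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_MPLS,
    };
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);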
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The patch adds GENEVE option rte_flow item support to the
command line interpreter. A flow command with GENEVE
option items looks like:
flow create 0 ingress pattern eth / ipv4 / udp / geneve vni is 100 /
geneve-opt class is 99 length is 1 type is 0 data is 0x669988 /
end actions drop / end
The option length should be specified in 32-bit words; this value
gives the length of the data pattern/mask arrays (multiply it by
sizeof(uint32_t) to express it in bytes). If a match on the length
itself is not needed, the mask should be set to zero; in this case, the
length is used to specify the pattern/mask array lengths only.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add new UDP tunnel port parameters for eCPRI configuration; the
commands are as below:
testpmd> port config 0 udp_tunnel_port add ecpri 6789
testpmd> port config 0 udp_tunnel_port rm ecpri 6789
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch defines new RSS offload types for eCPRI. For eCPRI with
Message Type 0, the hash field is the physical channel ID.
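Analogous to the MPLS sketch above, enabling it would look like:
    rss_conf.rss_hf |= RTE_ETH_RSS_ECPRI; /* hash on physical channel ID */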
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This attribute helps PMDs distinguish actions meant to work
at the so-called hardware e-switch level from regular ones.
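A hypothetical testpmd rule using the attribute:
    flow create 0 transfer pattern eth / end actions port_id id 1 / end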
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
The command uses the FDIR filter information get API, which
is no longer supported.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of FDIR filters, the RTE flow API should be used.
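For example, a hypothetical rte_flow rule replacing a typical FDIR
match-and-queue filter:
    flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.1 /
        udp dst is 4789 / end actions queue index 3 / end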
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The global filter configuration request was supported only by the
net/i40e driver, to configure the GRE key length.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of the L2 tunnel filter, the RTE flow API should be used.
Preserve RTE_ETH_FILTER_L2_TUNNEL since it is used internally in
drivers for RTE flow API support.
The rte_eth_l2_tunnel_conf structure is kept since it is used in other
ethdev API functions.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>