Commit Graph

4631 Commits

Min Hu (Connor)
cdf6a5fbc5 doc: add runtime option examples to hns3 guide
This patch adds examples for the runtime config options, to help users
understand how to use them.
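
For reference, a possible invocation is sketched below; the option keys
rx_func_hint and dev_caps_mask come from the fixed commits, while the
PCI address and values are illustrative assumptions:

Example:
	-a 0000:7d:00.0,rx_func_hint=vec,dev_caps_mask=0xF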

Fixes: a124f9e959 ("net/hns3: add runtime config to select IO burst function")
Fixes: 7079121324 ("net/hns3: support masking device capability")

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-04-26 18:04:08 +02:00
Chaoyong He
8202309f9f doc: fix multiport syntax in nfp guide
Fix up the suffix of the PCI ID to be consistent with the code.

Fixes: 979f2bae07 ("doc: improve multiport PF in nfp guide")
Cc: stable@dpdk.org

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
2021-04-26 11:39:08 +02:00
Shijith Thotton
8a3d58c189 event/cnxk: add option to control timer adapters
Add devargs to uniquely control the internal parameters of each event
timer adapter, i.e. TIM ring. The expected format is
[ring-chnk_slots-disable_npa-stats_ena], where 0 represents the default
value.

Example:
	--dev "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
2021-05-04 08:45:52 +02:00
Shijith Thotton
853623b9e5 event/cnxk: add timer stats
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.

Example:
	--dev "0002:1e:00.0,tim_stats_ena=1"

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
2021-05-04 08:31:48 +02:00
Shijith Thotton
a66fa85668 event/cnxk: add options for timer chunk size and rings
Add devargs to control the default chunk size and the maximum number of
timer rings to attach to a given RVU PF.

Example:
	--dev "0002:1e:00.0,tim_chnk_slots=1024"
	--dev "0002:1e:00.0,tim_rings_lmt=4"

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
2021-05-04 07:56:05 +02:00
Pavan Nikhilesh
1b06a817b8 event/cnxk: add option to disable NPA
If the chunks are allocated from NPA then TIM can automatically free
them when traversing the list of chunks.
Add devargs to disable NPA and use software mempool to manage chunks.

Example:
	--dev "0002:0e:00.0,tim_disable_npa=1"

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-05-04 07:35:00 +02:00
Shijith Thotton
3d9a7181e4 event/cnxk: support timer
Add event timer adapter (a.k.a. TIM) initialization on SSO probe.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
2021-05-04 07:13:54 +02:00
Pavan Nikhilesh
7ffa737996 event/cnxk: add option to configure getwork mode
Add devargs to configure the platform specific getwork mode.

CN9K getwork mode is set to dual workslot mode by default.
Add an option to force single workslot mode.
Example:
	--dev "0002:0e:00.0,single_ws=1"

CN10K supports multiple getwork prefetch modes; by default the
prefetch mode is set to none.
Add an option to select the getwork prefetch mode.
Example:
	--dev "0002:1e:00.0,gw_mode=1"

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
2021-05-04 06:17:19 +02:00
Shijith Thotton
38c2e3240b event/cnxk: add option to control SSO HWGRP QoS
SSO HWGRPs, i.e. queues, use DRAM & SRAM buffers to hold in-flight
events. By default, the buffers are assigned to the SSO HWGRPs to
satisfy the minimum HW requirements. SSO is free to assign the remaining
buffers to HWGRPs based on a preconfigured threshold.
The QoS of an SSO HWGRP can be controlled by modifying the above
thresholds. HWGRPs of higher importance can be assigned higher
thresholds than the rest.

Example:
        --dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]

Qx  -> Event queue, a.k.a. SSO GGRP.
XAQ -> DRAM in-flights.
TAQ & IAQ -> SRAM in-flights.

The values are expressed as percentages; 0 represents the default.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
2021-05-04 05:56:16 +02:00
Shijith Thotton
e656d40fd1 event/cnxk: add option for in-flight buffer count
The number of events for an *open system* event device is specified
as -1, as per the eventdev specification.
Since SSO in-flight events are limited only by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.

Example:
        --dev "0002:0e:00.0,xae_cnt=8192"

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2021-05-04 05:49:19 +02:00
Pavan Nikhilesh
8558dcaa05 event/cnxk: add build infra and device setup
Add meson build infrastructure along with the event device
SSO initialization and teardown functions.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
2021-05-04 05:00:18 +02:00
Timothy McDaniel
7c6cc633fc doc: update guide for DLB v2.5
Update the DLB documentation for v2.5. Notable differences include
the new combined credit scheme. Also clean up a couple of sections
and remove a duplicate section.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
2021-05-03 11:46:31 +02:00
Bruce Richardson
245efe544d raw/ioat: report status of completed jobs
Add improved error handling to rte_ioat_completed_ops(). This patch adds
new parameters to the function to enable the user to track the completion
status of each individual operation in a batch. With this addition, the
function can help the user determine, first, how many operations may
have failed or been skipped and, second, which specific operations
did not complete successfully.
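
A minimal usage sketch in C; the parameter order shown is an assumption
for illustration, so the exact prototype in rte_ioat_rawdev.h should be
checked:

	#include <stdio.h>
	#include <rte_ioat_rawdev.h>

	/* Sketch only: assumes per-op status and an unsuccessful-op count
	 * are returned as shown below. */
	static void
	drain_completions(int dev_id)
	{
		uint32_t status[32];
		uint8_t num_unsuccessful = 0;
		uintptr_t src_hdls[32], dst_hdls[32];
		int i, done;

		done = rte_ioat_completed_ops(dev_id, 32, status,
				&num_unsuccessful, src_hdls, dst_hdls);
		if (done < 0)
			return; /* fetching completions itself failed */
		for (i = 0; num_unsuccessful > 0 && i < done; i++)
			if (status[i] != 0) /* non-zero: op failed or skipped */
				printf("op %d (src handle %#lx) incomplete\n",
						i, (unsigned long)src_hdls[i]);
	}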

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2021-05-04 17:43:50 +02:00
Bruce Richardson
b7aaf417f9 raw/ioat: add bus driver for automatic device scanning
Rather than using a vdev with args, DPDK can scan and initialize the
devices automatically using a bus-type driver. This bus does not need to
worry about registering device drivers; rather, it can initialize the
devices directly on probe.

The device instances (queues) to use are detected from /dev, with
additional information about them obtained from /sys.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2021-05-04 17:29:06 +02:00
Bruce Richardson
2ef79bea8f raw/ioat: support limiting queues for idxd PCI device
When using a full device instance via vfio, allow the user to specify a
maximum number of queues to configure rather than always using the max
number of supported queues.
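
A possible devargs form is sketched below; the key name max_queues is an
assumption (check the ioat rawdev guide), and the PCI address is
illustrative:

Example:
	-a 0000:b1:00.0,max_queues=4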

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2021-05-04 17:04:46 +02:00
Stanislaw Kardach
1abb185d6c stack: allow lock-free only on relevant architectures
Since commit 7911ba0473 ("stack: enable lock-free implementation for
aarch64"), lock-free stack is supported on arm64 but this description was
missing from the doxygen for the flag.

Currently it is impossible to detect programmatically whether the
lock-free implementation of rte_stack is supported. One could check
whether the header guard for the lock-free stubs is defined
(_RTE_STACK_LF_STUBS_H_), but that is an unstable implementation detail.
Because of that, all lock-free stack creations currently succeed
silently (as long as the stack header is 16B long), which later leads to
push and pop operations being NOPs.
The observable effect is that stack_lf_autotest fails on platforms not
supporting the lock-free implementation. Instead it should simply skip
the lock-free tests altogether.

This commit adds a new errno value (ENOTSUP) that may be returned by
rte_stack_create() to indicate that a given combination of flags is not
supported on the current platform.
This is detected by checking a compile-time flag in the include logic in
rte_stack_lf.h, which may also be used by applications to check
lock-free support at compile time.

Use the added RTE_STACK_LF_SUPPORTED flag to disable the lock-free stack
tests at compile time.
The perf test does not fail because rte_stack_create() succeeds; however,
marking this test as skipped gives a better indication of what was
actually tested.
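
A minimal sketch of how an application might combine the compile-time
flag and the new errno value (the stack name and size are illustrative):

	#include <errno.h>
	#include <rte_errno.h>
	#include <rte_lcore.h>
	#include <rte_stack.h>

	/* Sketch: create a lock-free stack only where the platform supports
	 * it, falling back to the standard implementation otherwise. */
	static struct rte_stack *
	create_stack(void)
	{
		struct rte_stack *s;

	#ifdef RTE_STACK_LF_SUPPORTED
		s = rte_stack_create("pkt_stack", 1024, (int)rte_socket_id(),
				RTE_STACK_F_LF);
		if (s != NULL || rte_errno != ENOTSUP)
			return s;
	#endif
		/* Lock-free not supported (or not compiled in): use default. */
		return rte_stack_create("pkt_stack", 1024, (int)rte_socket_id(), 0);
	}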

Fixes: 7911ba0473 ("stack: enable lock-free implementation for aarch64")

Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2021-05-03 18:46:15 +02:00
Thomas Monjalon
c6bd9f4b28 doc: announce support of Alpine Linux
After many patches in several releases to make DPDK buildable with musl,
and a few adjustments for busybox, it is time to announce support for
building DPDK on Alpine Linux.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-22 00:14:11 +02:00
Thomas Monjalon
b7fe612ac1 doc: fix names of UIO drivers
Fix typos in the names of kernel drivers based on UIO,
and make sure the generic term for the interface is UIO in capitals.

Fixes: 3a78b2f732 ("doc: add virtio crypto PMD guide")
Fixes: 3cc4d996fa ("doc: update VFIO usage in qat crypto guide")
Fixes: 39922c470e ("doc: add known uio_pci_generic issue for i40e")
Fixes: 86fa6c57a1 ("doc: add known igb_uio issue for i40e")
Fixes: beff6d8e8e ("net/netvsc: add documentation")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-22 00:06:47 +02:00
Dmitry Kozlyuk
b5674be414 net/pcap: build on Windows
Implement OS-dependent functions and enable build for Windows.
Account for different library name in Windows libpcap distributions.

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
2021-04-21 23:48:38 +02:00
Huisong Li
f4367c0b97 app/testpmd: show link flow control info
This patch adds support for querying the link flow control parameters
of a port.

The command format is as follows:
show port <port_id> flow_ctrl

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
2021-04-21 13:43:41 +02:00
Chengwen Feng
bafe8a68f0 app/testpmd: support cleanup Tx queue mbufs
This patch adds a command to clean up Tx queue mbufs:
port cleanup (port_id) txq (queue_id) (free_cnt)

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-04-21 13:25:19 +02:00
Haifei Luo
f29fa2c59b app/testpmd: support policy actions per color
Add the create/del policy CLIs to support actions per color.
The CLIs are:
Create:  add port meter policy (port_id) (policy_id) g_actions (actions)
y_actions (actions) r_actions (actions)
Delete:  del port meter policy (port_id) (policy_id)

Examples:
testpmd> add port meter policy 0 1 g_actions rss / end y_actions end
r_actions drop / end
testpmd> del port meter policy 0 1

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-04-21 12:22:18 +02:00
Li Zhang
5f0d54f372 ethdev: add pre-defined meter policy API
Currently, the flow meter policy does not support multiple actions
per color; also the allowed action types per color are very limited.
In addition, the policy cannot be pre-defined.

Due to the growth in flow action offload abilities, there is potential
for the user to use a variety of actions per color differently.
This new meter policy API enables this potential in the most common
ethdev way, using the rte_flow action definition.
A list of rte_flow actions will be provided by the user per color
in order to create a meter policy.
In addition, the API requires the policy to be pre-defined before
meter creation, in order to allow efficient sharing of a single policy
among multiple meters.

meter_policy_id is added into struct rte_mtr_params so that the policy
can be retrieved during meter creation.

Allow coloring the packet using a new rte_flow_action_color
as could be done by the old policy API.

Add two common policy templates as macros in the header file.

The following API functions were added:
- rte_mtr_meter_policy_add
- rte_mtr_meter_policy_delete
- rte_mtr_meter_policy_update
- rte_mtr_meter_policy_validate
The following structs were changed:
- rte_mtr_params
- rte_mtr_capabilities
The following API was deleted:
- rte_mtr_policer_actions_update

To support this API, the following apps were changed:
app/test-flow-perf: clean meter policer
app/testpmd: clean meter policer

To support this API the following drivers were changed:
net/softnic: support meter policy API
1. Cleans meter rte_mtr_policer_action.
2. Supports policy API to get color action as policer action did.
   The color action will be mapped into rte_table_action_policer.

net/mlx5: clean meter creation management
Cleans and breaks part of the current meter management
in order to allow a better design with the policy API.
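
A rough C sketch of registering a "pass on green, drop on red" policy
with the new API; the exact struct layout should be checked in rte_mtr.h
and the IDs are illustrative:

	#include <rte_flow.h>
	#include <rte_mtr.h>

	/* Sketch: one rte_flow action list per color, mirroring the
	 * testpmd example in the previous commit. */
	static int
	add_simple_policy(uint16_t port_id, uint32_t policy_id)
	{
		struct rte_mtr_error error;
		static const struct rte_flow_action g_actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_VOID },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		static const struct rte_flow_action y_actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		static const struct rte_flow_action r_actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_DROP },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		struct rte_mtr_meter_policy_params policy = {
			.actions = {
				[RTE_COLOR_GREEN] = g_actions,
				[RTE_COLOR_YELLOW] = y_actions,
				[RTE_COLOR_RED] = r_actions,
			},
		};

		return rte_mtr_meter_policy_add(port_id, policy_id,
				&policy, &error);
	}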

Signed-off-by: Li Zhang <lizh@nvidia.com>
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-04-21 12:22:17 +02:00
Li Zhang
c99b4f8bc2 net/mlx5: support ASO meter action
When ASO action is available, use it as the meter action

Signed-off-by: Shun Hao <shunh@nvidia.com>
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:08 +02:00
Li Zhang
5df35867d9 net/mlx5: optimize meter statistics
Meter statistics currently use one counter per policer action,
i.e. four counters per meter in total.
This causes cache misses
and lowers data forwarding performance.

To optimize this, support a pass counter for green
and a drop counter for red,
i.e. two counters per meter in total.
Also use the global drop statistics for
all meter drop actions.

Limitations:
1. The yellow counter is not supported and returns 0.
2. All meter colors with a drop action are counted only by
   the global drop statistics.
3. The red color must use a drop action.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:27:49 +02:00
Min Hu (Connor)
8e321f7df1 doc: fix Rx burst function in hns3 guide
The patch 'net/hns3: rename Rx burst function' changed the 'simple'
Rx function name from 'scalar' to 'scalar simple', but the doc
missed that change.

This patch fixes it.

Fixes: aa5baf47e1 ("net/hns3: rename Rx burst function")
Cc: stable@dpdk.org

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-04-20 12:55:28 +02:00
Bing Zhao
4d07cbefe3 app/testpmd: add commands for conntrack
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed.
  set conntrack com peer ...
  set conntrack orig scale ...
  set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created, it
can be used in flow creation. For updating, the same
structure is needed, together with the update command
"conntrack_update", to update the "dir" or "ctx".

After the flow with the conntrack action is created, the packet should
jump to the next flow for result checking with the conntrack item. The
state is defined with bits, and valid combinations can be
supported.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-04-20 01:24:57 +02:00
Bing Zhao
9847fd125d ethdev: introduce conntrack flow action and item
This commit introduces the conntrack action and item.

Usually, HW offloading is stateless. For some stateful offloading,
like a TCP connection, the HW module helps provide the ability of
full offloading without SW participation after the connection is
established.

The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.

A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets in both
directions is required.

The conntrack action should be created on one ethdev port, supplying
the peer ethdev port as a parameter to the action. After the context is
created, it can only be used between these two ethdev ports
(dual-port mode) or on a single port. The application should modify the
action via the API "rte_flow_action_handle_update" only before using
it to create a flow rule with conntrack for the opposite direction.
This helps the driver recognize the direction of the flow to
be created, especially in single-port mode, where the
traffic from both directions goes through the same ethdev port
if the application works as a "forwarding engine" rather than an end
point. There is no need to call the update interface if the
subsequent flow rules have nothing to change.

Query is supported via the "rte_flow_action_handle_query" interface,
returning the current packet information and connection status. The
queryable fields depend on the HW.

For packets received during the conntrack setup, it is suggested
to re-inject them in order to make sure the conntrack module
works correctly without missing any packet. Only valid packets
should pass the conntrack; packets with invalid TCP information,
like out of window, or with an invalid header, like malformed, should
not pass.
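
A heavily abridged C sketch of the expected call flow; the conntrack
context fields (peer port, TCP state from both directions) are omitted
here and should be filled per rte_flow.h:

	#include <rte_flow.h>

	/* Sketch only: ct must be populated with the peer port and both
	 * directions' TCP state before use. */
	static struct rte_flow_action_handle *
	create_ct_handle(uint16_t port_id,
			struct rte_flow_action_conntrack *ct,
			struct rte_flow_error *err)
	{
		const struct rte_flow_action action = {
			.type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
			.conf = ct,
		};

		/* Created once; updated via rte_flow_action_handle_update()
		 * before building the rule for the opposite direction. */
		return rte_flow_action_handle_create(port_id, NULL,
				&action, err);
	}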

Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/
        netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/
        nf_conntrack_proto_tcp.c

Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-20 01:24:57 +02:00
Robin Zhang
e9c5672ac1 net/iavf: deprecate i40evf PMD
The i40evf PMD will be deprecated; iavf will be the only VF driver for
the Intel 700 series (i40e) NIC family.

To reach this, there will be 2 steps:

Step 1: iavf will be the default VF driver, while i40evf can still be
selected by the devarg "driver=i40evf".
This is covered by this patch, which includes:
1) add all 700 series NIC VF device IDs into the iavf PMD
2) skip probe if devargs contain "driver=i40evf" in iavf
3) continue probe if devargs contain "driver=i40evf" in i40evf

Step 2: i40evf and the related devarg are removed; this will happen in
DPDK 21.11.

Between step 1 and step 2, no new feature will be added into i40evf
except bug fixes.
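
For example, a VF could be kept on the legacy driver with a devarg like
the following (the key comes from this patch; the PCI address is
illustrative):

Example:
	-a 0000:18:02.0,driver=i40evf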

Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2021-04-19 10:36:17 +02:00
Chengwen Feng
7079121324 net/hns3: support masking device capability
This patch adds a runtime config to mask device capabilities; it is
used to mask capabilities queried from the firmware.

The device argument key is "dev_caps_mask", which takes a hexadecimal
bitmask where each bit indicates whether the corresponding capability
is masked.

Its main purpose is debugging and problem avoidance.
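
Example (the key comes from this patch; the mask value and PCI address
are illustrative):
	-a 0000:7d:00.0,dev_caps_mask=0xF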

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-04-19 19:15:45 +02:00
Ori Kam
0797fa6ccf app/testpmd: support integrity flow item
The integrity item allows the application to match
on the integrity of a packet.

Usage example:
match that packet integrity checks are OK. The checks depend on the
packet layers; for example, an ICMP packet will not check the L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01

Match that L4 packet is OK - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe

Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-04-19 19:05:17 +02:00
Ori Kam
b10a421a1f ethdev: add packet integrity check flow rules
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.

However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.

The application logic can be positive, meaning if the packet is
valid then jump / do actions, or negative, meaning if the packet is not
valid then jump to SW / do actions (like drop), adding a default flow
(match all at low priority) that directs the missed packets
to the miss path.

Since rte_flow currently works in a positive way, the assumption is
that the positive way will be the common way in this case as well.

When considering the best API to implement such a feature,
we need to weigh the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.

First option: Add integrity flags to each of the items.
For example add checksum_ok to IPv4 item.

Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.

Cons:
1. API breakage.
2. Increase number of flows, since app can't add global rule and must
   have dedicated flow for each of the flow combinations, for example
   matching on ICMP traffic or UDP/TCP  traffic with IPv4 / IPv6 will
   result in 5 flows.

Second option: dedicated item

Pros:
1. No API breakage, and none expected for some time thanks to the
   extra space (by using bits).
2. Just one flow to support the ICMP or UDP/TCP traffic with IPv4 /
   IPv6.
3. Simplicity application can just look at one place to see all possible
   checks.
4. Allow future support for more tests.

Cons:
1. New item, that holds number of fields from different items.

For starters, the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
   passed. This may mean that in some HW such flow should be split to
   number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doesn't have
   L3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
   have L4 layer this check should fail.
5. l2_crc_ok - the layer 2 CRC is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. It is possible that the
   IPv4 checksum will be O.K. but the l3_ok will be 0. It is not
   possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_ok - check that the reported layer 3 length is smaller than the
   frame length.

Example of usage:
1. Check packets from all possible layers for integrity.
   flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....

2. Check only packet with layer 4 (UDP / TCP)
   flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
   l4_ok = 1
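
A brief C sketch of building the equivalent of the second example
programmatically; the bit names follow the list above, while the exact
struct layout should be checked in rte_flow.h:

	#include <rte_flow.h>

	/* Sketch: match packets whose L3 and L4 checks passed. */
	static const struct rte_flow_item_integrity integ_spec = {
		.l3_ok = 1,
		.l4_ok = 1,
	};
	static const struct rte_flow_item_integrity integ_mask = {
		.l3_ok = 1,
		.l4_ok = 1,
	};
	static const struct rte_flow_item integrity_item = {
		.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
		.spec = &integ_spec,
		.mask = &integ_mask,
	};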

Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 19:05:17 +02:00
Qi Zhang
8bb87d65ba doc: update matching versions in ice guide
Updated ice recommended matching list for DPDK 21.02.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2021-04-19 18:37:00 +02:00
Qi Zhang
d63dab349a doc: fix matching versions in ice guide
Fixed matching kernel driver version for DPDK 20.11.

Fixes: e89aebf3b5 ("doc: update ice user guide")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2021-04-19 18:37:00 +02:00
Bing Zhao
4b61b8774b ethdev: introduce indirect flow action
Right now, the rte_flow_shared_action_* APIs are used for some shared
actions, like RSS or count. The shared action should be created before
using it inside a flow. These shared actions are sometimes not
really shared but just indirect actions decoupled from a flow.

The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.

There are two types of flow actions:
1. the direct (normal) actions that could be created and stored
   within a flow rule. Such action is tied to its flow rule and
   cannot be reused.
2. the indirect action, in the past named shared_action. It is
   created from a direct action, like count or RSS, and then used
   in the flow rules with an object handle. The PMD will take care
   of resolving the indirect action to the direct action
   when it is referenced.

The indirect action is accessed (update / query) w/o any flow rule,
just via the action object handle. For example, when querying or
resetting a counter, it could be done out of any flow using this
counter, but only the handle of the counter action object is
required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.

The old name "shared" is improper in a sense and should be replaced.

Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated accordingly.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.

The parameter of the "update" interface is also changed. A general
pointer replaces the rte_flow_action struct pointer due to the
following facts:
1. Some actions may not support field updates. For a counter, for
   example, the only "update" supported should be the reset, so
   passing an rte_flow_action struct pointer is meaningless and
   there is not even a corresponding action struct. Moreover,
   if more than one operation should be supported for some other
   action, such a pointer parameter may not meet the need.
2. Some actions may need a conditional or partial update; the current
   parameter does not provide the ability to indicate which part(s)
   to update.
   For different types of indirect action objects, the pointer could
   either be the same rte_flow_action* struct - in order not to
   break the current driver implementation - or some wrapper
   structures with bits as masks to indicate which part to be
   updated, depending on real needs of the corresponding direct
   action. For different direct actions, the structures of indirect
   action objects updating will be different.

All the underlayer PMD callbacks will be moved to these new APIs.

The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.

Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to a generic pointer, the mlx5 PMD that uses these
APIs needs to adapt to the new APIs as well.
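
A brief C sketch of the intended usage pattern, shown with a count
action; error handling is trimmed and the indirect-action configuration
argument is left NULL for brevity:

	#include <rte_flow.h>

	/* Sketch: create an indirect (previously "shared") count action
	 * and query it without referencing any particular flow rule. */
	static void
	count_via_handle(uint16_t port_id)
	{
		struct rte_flow_error err;
		struct rte_flow_query_count counters = { .reset = 0 };
		const struct rte_flow_action action = {
			.type = RTE_FLOW_ACTION_TYPE_COUNT,
		};
		struct rte_flow_action_handle *handle;

		handle = rte_flow_action_handle_create(port_id, NULL,
				&action, &err);
		if (handle == NULL)
			return;
		/* The handle is then referenced from one or more flow rules;
		 * later it can be queried and destroyed independently. */
		rte_flow_action_handle_query(port_id, handle, &counters, &err);
		rte_flow_action_handle_destroy(port_id, handle, &err);
	}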

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 18:25:42 +02:00
Min Hu (Connor)
e2cd696ba4 doc: add Kunpeng 930 support in hns3 guide
The hns3 PMD already supports the Kunpeng 930 SoC.

This patch adds a description for it.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-04-19 18:25:42 +02:00
Lijun Ou
9ad9ff476c ethdev: add queue state in queried queue information
Currently, an upper-layer application can get the queue state only
through pointers such as dev->data->tx_queue_state[queue_id], which
is not the recommended way to access it. So this patch adds the
queue state to the information returned by the rte_eth_rx_queue_info_get
and rte_eth_tx_queue_info_get APIs.

Note: after adding the queue_state field, the 'struct rte_eth_rxq_info'
size remains 128B and the 'struct rte_eth_txq_info' size remains 64B,
so the change is ABI compatible.
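
A short C sketch of reading the new field; the queue-state constant
names are assumed from the ethdev headers:

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Sketch: query a Rx queue and report its state through the
	 * extended rte_eth_rx_queue_info_get(). */
	static void
	print_rxq_state(uint16_t port_id, uint16_t queue_id)
	{
		struct rte_eth_rxq_info qinfo;

		if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
			return;
		printf("port %u rxq %u state: %s\n", port_id, queue_id,
				qinfo.queue_state == RTE_ETH_QUEUE_STATE_STARTED ?
				"started" : "stopped");
	}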

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-04-19 18:25:35 +02:00
Min Hu (Connor)
be1650f734 doc: fix HiSilicon copyright syntax
This patch fixes HiSilicon copyright syntax.

Following the suggestion of our legal department,
to standardize the copyright license of our code and
avoid potential copyright risks, we make a unified
modification to the nonstandard spelling "Hisilicon"
in the main modules we maintain.

We change it to "HiSilicon", which is consistent with
the terms used on the following official website:
https://www.hisilicon.com/en/terms-of-use.

Fixes: 565829db8b ("net/hns3: add build and doc infrastructure")
Cc: stable@dpdk.org

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-04-16 17:42:15 +02:00
Huisong Li
a2258ea1be doc: remove queue stats mapping from testpmd guide
The "--tx-queue-stats-mapping" and "--rx-queue-stats-mapping"
and display and clear of "stats_map" have been removed from
testpmd.

This patch deletes some descriptions about queue stats mapping
in testpmd doc.

Fixes: 08dcd18706 ("app/testpmd: fix queue stats mapping configuration")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-04-16 17:42:15 +02:00
Bruce Richardson
99a2dd955f lib: remove librte_ prefix from directory names
There is no reason for the DPDK libraries to all have 'librte_' prefix on
the directory names. This prefix makes the directory names longer and also
makes it awkward to add features referring to individual libraries in the
build (should the lib names be specified with or without the prefix?).
Therefore, we can just remove the library prefix and use the library's
unique name as the directory name, i.e. 'eal' rather than 'librte_eal'.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2021-04-21 14:04:09 +02:00
Bruce Richardson
f2cdd95f2d doc: add Meson coding style to contributors guide
To help with consistency across all files, add a section to the
contributors guide on meson coding style. Although short, this covers
the basics for now, and can be extended in future as we see the need.

The Meson style guide recommends four-space indents, as for Python, so
add that to the editorconfig file.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2021-04-21 14:04:09 +02:00
Bruce Richardson
8dcb898c65 build: change indentation in infrastructure files
Switch from using tabs to 4 spaces for meson.build indentation, for the
basic infrastructure and tooling files, as well as doc and kernel
directories.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2021-04-21 14:04:09 +02:00
Vladimir Medvedkin
28ebff11c2 hash: add predictable RSS
This patch adds a predictable RSS API.
It is based on the idea of searching for partial Toeplitz hash collisions.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
2021-04-20 23:13:23 +02:00
Vladimir Medvedkin
534fe5f339 doc: add Toeplitz hash guide
Add documentation for the Toeplitz hash library.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Reviewed-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
2021-04-20 23:12:47 +02:00
Conor Walsh
6a094e3285 examples/l3fwd: implement FIB lookup method
This patch implements the Forwarding Information Base (FIB) library
in l3fwd using the function calls and infrastructure introduced in
the previous patch.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
2021-04-20 20:18:29 +02:00
Conor Walsh
9510dd1feb examples/l3fwd: add FIB infrastructure
The purpose of this commit is to add the necessary function calls
and supporting infrastructure to allow the Forwarding Information Base
(FIB) library to be integrated into the l3fwd sample app.
Instead of adding an individual flag for FIB, a new flag '--lookup' has
been added that allows the user to select their desired lookup method.
The flags '-E' and '-L' have been retained for backwards compatibility.
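
For instance (the lookup value names are assumed from the l3fwd guide;
the binary name, core list, port mask and config are illustrative):

Example:
	./dpdk-l3fwd -l 1,2 -n 4 -- -p 0x3 --lookup=fib --config="(0,0,1),(1,0,2)"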

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2021-04-20 20:13:34 +02:00
Akhil Goyal
f96a8ebb27 eventdev: introduce crypto adapter enqueue API
In case an event from a previous stage needs to be forwarded
to a crypto adapter and the PMD supports an internal event port in the
crypto adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, there is no
way to check in the rte_event_enqueue_burst() API whether it is meant
for the crypto adapter or for the eth Tx adapter.

Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(),
which can send to a crypto adapter.

Note that RTE_EVENT_TYPE_* cannot be used to make that decision,
as it is meant for the event source and not the event destination,
and the event port designated for the crypto adapter is designed to be
used in OP_NEW mode.

Hence, in order to support an event PMD which has an internal event port
in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed
via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
the application should use the rte_event_crypto_adapter_enqueue() API to enqueue
events.

When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW
mode), the application can use rte_event_enqueue_burst() as before,
i.e. retrieve the event port used by the crypto adapter, bind its event
queues to that port, and enqueue events using rte_event_enqueue_burst().
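
A short C sketch of the forwarding path described above; device and
port ids are illustrative:

	#include <rte_event_crypto_adapter.h>

	/* Sketch: forward events carrying crypto ops through the adapter's
	 * internal event port instead of rte_event_enqueue_burst(). */
	static uint16_t
	stage_to_crypto(uint8_t evdev_id, uint8_t ev_port_id,
			struct rte_event events[], uint16_t nb_events)
	{
		return rte_event_crypto_adapter_enqueue(evdev_id, ev_port_id,
				events, nb_events);
	}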

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-04-17 18:49:52 +02:00
Yuying Zhang
2321e34c23 net/ice: support flow priority for DCF switch filter
Support the rte_flow priority attribute for the DCF switch filter.
When a packet is matched by two rules, its behavior
is not defined. This patch supports flow priority to create
different recipes for this situation. Only priorities 0 and 1
are supported, and a higher value denotes a higher priority.

for example:
1. flow create 0 priority 0 ingress pattern eth / vlan tci is 2 / vlan
   tci is 2 / end actions vf id 2 / end
2. flow create 0 priority 1 ingress pattern eth / vlan / vlan / ipv4 dst
   is 192.168.0.1 / end actions vf id 1 / end

These two rules can be created at the same time in the DCF switch
filter, and the priority of rule 2 is higher. A packet hits rule 2
when the conditions of both rules are satisfied.

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-04-16 12:22:00 +02:00
Yuying Zhang
a65126d1ad net/ice: support GTPU TEID pattern for switch filter
Enable the GTPU pattern for the CVL switch filter. Support the teid and
qfi fields of the GTPU pattern. Patterns without inner l3/l4 fields
support outer dst/src IP. Patterns with inner l3/l4 fields support only
inner dst/src IP and inner dst/src port.

+----------------------------------+------------------------------------+
| Pattern                          | Input Set                          |
+----------------------------------+------------------------------------+
| pattern_eth_ipv4_gtpu            | teid, dst/src ip                   |
| pattern_eth_ipv6_gtpu            | teid, dst/src ip                   |
| pattern_eth_ipv4_gtpu_ipv4       | teid, dst/src ip                   |
| pattern_eth_ipv4_gtpu_ipv4_tcp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv4_gtpu_ipv4_udp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv4_gtpu_ipv6       | teid, dst/src ip                   |
| pattern_eth_ipv4_gtpu_ipv6_tcp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv4_gtpu_ipv6_udp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv6_gtpu_ipv4       | teid, dst/src ip                   |
| pattern_eth_ipv6_gtpu_ipv4_tcp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv6_gtpu_ipv4_udp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv6_gtpu_ipv6       | teid, dst/src ip                   |
| pattern_eth_ipv6_gtpu_ipv6_tcp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv6_gtpu_ipv6_udp   | teid, dst/src ip, dst/src port     |
| pattern_eth_ipv4_gtpu_eh_ipv4    | teid, qfi, dst/src ip              |
| pattern_eth_ipv4_gtpu_eh_ipv4_tcp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv4_gtpu_eh_ipv4_udp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv4_gtpu_eh_ipv6    | teid, qfi, dst/src ip              |
| pattern_eth_ipv4_gtpu_eh_ipv6_tcp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv4_gtpu_eh_ipv6_udp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv6_gtpu_eh_ipv4    | teid, qfi, dst/src ip              |
| pattern_eth_ipv6_gtpu_eh_ipv4_tcp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv6_gtpu_eh_ipv4_udp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv6_gtpu_eh_ipv6    | teid, qfi, dst/src ip              |
| pattern_eth_ipv6_gtpu_eh_ipv6_tcp| teid, qfi, dst/src ip, dst/src port|
| pattern_eth_ipv6_gtpu_eh_ipv6_udp| teid, qfi, dst/src ip, dst/src port|
+----------------------------------+------------------------------------+

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-04-15 14:22:13 +02:00
Huisong Li
09e0de1f41 net/hns3: report speed capability for PF
The speed capability of the device can be reported to the upper-layer app
via the rte_eth_dev_info_get API. In this API, the speed capability is
derived from 'supported_speed', which is the speed capability actually
supported by the NIC. The value of 'supported_speed' is obtained
once at the probe stage and may be updated in a scheduled task to handle
changes of the transmission interface.
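
A short C sketch of how an application might read the reported
capability bitmap (the 25G flag is just an illustrative check):

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Sketch: print the speed capability advertised via dev_info. */
	static void
	print_speed_capa(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
			return;
		printf("port %u speed capa: 0x%x (25G %s)\n", port_id,
				dev_info.speed_capa,
				(dev_info.speed_capa & ETH_LINK_SPEED_25G) ?
				"supported" : "not supported");
	}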

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-04-15 02:55:04 +02:00