Add missing statements to invalidate MAE resource IDs.
Fixes: dadff137931c ("net/sfc: support encap flow items in transfer rules")
Fixes: 1bbd1ec2348a ("net/sfc: support action VXLAN encap in MAE backend")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Add the create/del policy CLIs to support actions per color.
The CLIs are:
Create: add port meter policy (port_id) (policy_id) g_actions (actions)
y_actions (actions) r_actions (actions)
Delete: del port meter policy (port_id) (policy_id)
Examples:
testpmd> add port meter policy 0 1 g_actions rss / end y_actions end
r_actions drop / end
testpmd> del port meter policy 0 1
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, the flow meter policy does not support multiple actions
per color; also the allowed action types per color are very limited.
In addition, the policy cannot be pre-defined.
Due to the growing number of flow actions that can be offloaded, there
is potential for users to apply a different set of actions per color.
This new meter policy API enables that in the most common ethdev way,
using the rte_flow action definitions.
A list of rte_flow actions will be provided by the user per color
in order to create a meter policy.
In addition, the API requires the policy to be pre-defined before
meter creation, in order to allow efficient sharing of a single policy
among multiple meters.
meter_policy_id is added into struct rte_mtr_params so that the policy
can be referenced during meter creation.
Allow coloring the packet using a new rte_flow_action_color,
as could be done with the old policy API.
Add two common policy templates as macros in the header file.
The following API functions were added:
- rte_mtr_meter_policy_add
- rte_mtr_meter_policy_delete
- rte_mtr_meter_policy_update
- rte_mtr_meter_policy_validate
The following structs were changed:
- rte_mtr_params
- rte_mtr_capabilities
The following API function was deleted:
- rte_mtr_policer_actions_update
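As an illustration, a minimal sketch of creating a policy with the new
API (green passes through caller-provided actions, red drops; error
handling and the surrounding setup are omitted, and the exact action
lists are up to the application):

#include <rte_flow.h>
#include <rte_mtr.h>

static int
create_green_pass_red_drop_policy(uint16_t port_id, uint32_t policy_id,
                                  const struct rte_flow_action *g_actions)
{
        struct rte_flow_action r_actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_mtr_meter_policy_params policy = {
                .actions[RTE_COLOR_GREEN] = g_actions,
                .actions[RTE_COLOR_YELLOW] = NULL,
                .actions[RTE_COLOR_RED] = r_actions,
        };
        struct rte_mtr_error error;

        /* The policy must exist before any meter referencing it. */
        return rte_mtr_meter_policy_add(port_id, policy_id, &policy, &error);
}

The policy_id is then set in rte_mtr_params.meter_policy_id when
creating the meters that share this policy.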
To support this API the following apps were changed:
app/test-flow-perf: clean meter policer
app/testpmd: clean meter policer
To support this API the following drivers were changed:
net/softnic: support meter policy API
1. Cleans up the meter rte_mtr_policer_action support.
2. Supports the policy API to get the color action, as the policer action did.
The color action will be mapped into rte_table_action_policer.
net/mlx5: clean meter creation management
Cleans up and breaks part of the current meter management
in order to allow a better design with the policy API.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
When promiscuous mode is disabled, allmulticast is
also disabled, even if it was previously enabled.
Add a test in ice_promisc_disable()
to check if allmulticast should be kept enabled.
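A hedged sketch of the intended check (the helper name and flag set
here are illustrative, loosely following the ice PMD's promiscuous
helpers, not the exact patch):

/* Illustrative: when leaving promiscuous mode, keep multicast
 * promiscuity if allmulticast is still enabled by the user. */
static int
promisc_disable_keep_allmulti(struct rte_eth_dev *dev)
{
        uint8_t pmask = ICE_PROMISC_UCAST_RX | ICE_PROMISC_UCAST_TX;

        /* Clear multicast promiscuity only if allmulticast is off. */
        if (dev->data->all_multicast == 0)
                pmask |= ICE_PROMISC_MCAST_RX | ICE_PROMISC_MCAST_TX;

        return ice_clear_promisc_helper(dev, pmask); /* hypothetical */
}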
Fixes: c945e4bf9063 ("net/ice: support promiscuous mode")
Cc: stable@dpdk.org
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Signed-off-by: Siwar Zitouni <siwar.zitouni@6wind.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Introduce i40e_simple_prep_pkts() as the preparation function for the
simple Tx data path, since it performs the sanity checks for simple Tx.
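For reference, a prepare function plugged in this way is reached through
the generic Tx prepare API; a minimal sketch of the application-side
call sequence:

#include <rte_ethdev.h>

/* Run the Tx sanity checks, then send only the packets that passed. */
static uint16_t
send_burst(uint16_t port_id, uint16_t queue_id,
           struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id,
                                              pkts, nb_pkts);

        return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
}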
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The driver's devices support creation of multiple flow tables.
A jump action can be used to move the packet steering
to a different flow table.
Table 0 is always the root table for packet steering.
Jumping between tables may cause endless loops in the steering
mechanism; that is why each table has a level attribute, and the
driver sub-system may not allow jumping to a table with a level
equal to or lower than that of the current table.
Currently, in the driver, the table ID and level are always identical.
Allow multiple flow table creation with the same level attribute.
This patch adds the table ID to the flow table data entry. When a
flow table is allocated with the same level but a different table ID,
a new table object is created. This supports 4M flow tables on the
same level.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Synchronize ASO meter queue accesses from
different threads using a spinlock.
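A minimal sketch of the pattern (names illustrative):

#include <rte_spinlock.h>

/* Illustrative: serialize access to the shared ASO meter send queue. */
static rte_spinlock_t aso_sq_lock = RTE_SPINLOCK_INITIALIZER;

static void
aso_meter_enqueue(void)
{
        rte_spinlock_lock(&aso_sq_lock);
        /* ... post the WQE to the shared send queue ... */
        rte_spinlock_unlock(&aso_sq_lock);
}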
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When the ASO action is available, use it as the meter action.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds the ASO queue management for flow meter,
including the send WQE and CQE handling functions.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Initialize the flow meter ASO SQ WQEs with
all the constant data that should not be updated
per enqueue operation.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Read and store the device capability of FLOW_METER_ASO general object,
using the DevX API.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds different PRM definitions, related to
ASO (Advanced Steering Operation) flow meter feature,
in MLX5 PMD code.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The ASO (Advanced Steering Operation) meter feature may require
locating the flow context tag action after the ASO action.
When the color register is shared by meter_id/flow_id, the layout is:
Bits[0-7]: meter color value, set by HW.
Bits[8-31]: flow ID and meter ID, set by SW.
Currently the tag action for meter writes all the bits
of the meter register, so it can overwrite the
meter color when the ASO meter action precedes the tag action.
Set only the 24 MSBs of the meter register in the meter tag action.
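Conceptually, with the color in bits 0-7 and the flow/meter ID in
bits 8-31, the tag write must be built as follows (an illustrative
computation, not the exact driver code):

#include <stdint.h>

#define MTR_COLOR_BITS 8

/* Leave the 8 LSBs (color, set by HW) untouched; write only
 * bits 8-31 (flow/meter id, set by SW). */
static inline uint32_t
mtr_tag_mask(void)
{
        return UINT32_MAX << MTR_COLOR_BITS; /* 0xFFFFFF00 */
}

static inline uint32_t
mtr_tag_data(uint32_t flow_id)
{
        return flow_id << MTR_COLOR_BITS;
}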
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Meter statistics used one counter per policer action,
four counters per meter in total.
This causes cache misses
and degrades data forwarding performance.
To optimize this, support a pass counter for green
and a drop counter for red,
two counters per meter in total.
Also use the global drop statistics for
all meter drop actions.
Limitations:
1. The yellow counter is not supported and returns 0.
2. All the meter colors with a drop action are
counted only by the global drop statistics.
3. Red color must be paired with a drop action.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, packets after the meter are steered to a global policer
table,
which includes green/red color rules for every meter, so as to have
counter statistics of each color in every meter.
There is a bug: all the rules in the global policer table match
only the color criteria, so all packets are counted to one meter only,
and the other meter statistics are always zero.
This patch does the following:
1. The rules in the policer table match both meter index and color, so
packets after the meter are counted by the correct meter counter.
2. The meter index and flow index now share the available
register bits dynamically. The meter index starts from the LSB, and
the flow index starts from the MSB.
Fixes: 46a5e6bc6a85 ("net/mlx5: prepare meter flow tables")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
8 bits are used for the meter color in the meter register. When the
meter register can be shared, the remaining 24 bits can be used by
others. This adds the definition for the 24 bits that can be shared.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit adds a table entry walk for the three-level table.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The patch 'net/hns3: rename Rx burst function' changed the 'simple'
Rx function name from 'scalar' to 'scalar simple', but the doc was
not updated accordingly.
This patch fixes it.
Fixes: aa5baf47e1a3 ("net/hns3: rename Rx burst function")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
In the hns3 PMD, as the handler always returns 0, the return value
of 'rte_kvargs_process' does not need to be checked. But since
the API definition has a return value, '(void)' can be used
to explicitly ignore it.
Fixes: a124f9e9591b ("net/hns3: add runtime config to select IO burst function")
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
params->leaf.cman has an enum type, which is not isomorphic with the
boolean type, yet it was used as a boolean expression.
This patch fixes it.
Fixes: c09c7847d892 ("net/hns3: support traffic management")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
'HNS3_RXD_LKBK_B' was defined in previous versions but never used.
This patch deletes it.
Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The recvfrom() syscall is only supported by AF_XDP sockets since
kernel 5.11. Only use it if busy polling is configured. We can
assume a kernel >= 5.11 is in use if busy polling is configured
so we can safely call recvfrom() in that case.
Fixes: 63e8989fe5a4 ("net/af_xdp: use recvfrom instead of poll syscall")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
The send() syscall on the Tx path is not concerned with busy polling
and as such its invocation should not depend on whether or not it is
configured. Fix this by distinguishing the conditions necessary for
syscalls on the Rx and Tx paths individually.
Fixes: 055a393626ed ("net/af_xdp: prefer busy polling")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
When DEV_RX_OFFLOAD_KEEP_CRC is enabled, the PMD subtracts the 4 CRC
bytes from the packet size, but the NIC still strips the CRC
because the CRC strip bit in the DVMOLR register is not cleared.
As a result, the reported packet size is 4 bytes too small.
This patch updates the CRC strip bit according to whether
DEV_RX_OFFLOAD_KEEP_CRC is enabled.
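A hedged sketch of the fix (register and bit names follow the igc base
code; the structs and exact placement are illustrative):

/* Illustrative: clear the CRC-strip bit only when KEEP_CRC is set. */
static void
rxq_set_crc_strip(struct igc_hw *hw, struct igc_rx_queue *rxq,
                  uint64_t offloads)
{
        uint32_t dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));

        if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
                dvmolr &= ~IGC_DVMOLR_STRCRC; /* keep CRC in the packet */
        else
                dvmolr |= IGC_DVMOLR_STRCRC;  /* let the NIC strip CRC */

        IGC_WRITE_REG(hw, IGC_DVMOLR(rxq->reg_idx), dvmolr);
}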
Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Driver cancels the health check alarm only if error recovery is enabled
in the FW. This can cause an issue. There is a small window where the
driver receives the async event from FW and port close is invoked
immediately. The driver clears the BNXT_FLAG_RECOVERY_ENABLED flag when
it gets the async event from FW. As a result, the health check alarm
will not get canceled during port close, causing a segfault when the
alarm tries to read the Heartbeat register.
Fix this by canceling the health check alarm unconditionally during
port stop.
Fixes: 9d0cbaecc91a ("net/bnxt: support periodic FW health monitoring")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
For Thor devices, the RSS table can only accommodate 512 Rx queues.
When RSS is enabled, cap the max Rx rings at 512.
For the non-RSS case, the number is limited by the number of VNICs.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Fix resource cleanup in port close.
Once the pointers are freed, set them to NULL.
Make sure access to the pointers is validated before use.
Fixes: bb81e07323bb ("net/bnxt: support LED on/off")
Fixes: 804e746c7b73 ("net/bnxt: add hardware resource manager init code")
Fixes: 1d0704f4d793 ("net/bnxt: add device configure operation")
Fixes: 698aa7e95325 ("net/bnxt: add code to determine the Tx COS queue")
Fixes: 322bd6e70272 ("net/bnxt: add port representor infrastructure")
Fixes: 0bf5a0b5ebb8 ("net/bnxt: add a failure log")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Adding the bond device as its own slave should be forbidden. It
causes a recursive endless loop in many subsequent operations
and eventually leads to a coredump.
This problem was found in testpmd; the related logs are as follows:
testpmd> create bonded device 1 0
Created new bonded device net_bonding_testpmd_0 on (port 4).
testpmd> add bonding slave 4 4
Segmentation fault (core dumped)
The call stack is as follows:
0x000000000064eb90 in rte_eth_dev_info_get ()
0x00000000006df4b4 in bond_ethdev_info ()
0x000000000064eb90 in rte_eth_dev_info_get ()
0x00000000006df4b4 in bond_ethdev_info ()
0x000000000064eb90 in rte_eth_dev_info_get ()
0x0000000000564e58 in eth_dev_info_get_print_err ()
0x000000000055e8a4 in init_port_config ()
0x000000000052730c in cmd_add_bonding_slave_parsed ()
0x0000000000646f60 in cmdline_parse ()
0x0000000000645e08 in cmdline_valid_buffer ()
0x000000000064956c in rdline_char_in ()
0x0000000000645ee0 in cmdline_in ()
0x00000000006460a4 in cmdline_interact ()
0x0000000000531904 in prompt ()
0x000000000051cca8 in main ()
Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
In max10_staging_area_init(), the variable "size" returned by
fdt_get_reg() may be invalid; it should be checked before being
assigned to the "staging_area_size" member of structure
"intel_max10_device".
Coverity issue: 367480, 367482
Fixes: 96ebfcf8125c ("raw/ifpga/base: add SPI and MAX10 device driver")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Tianfei Zhang <tianfei.zhang@intel.com>
This patch moves the check for "link_speeds" in dev_conf to
dev_configure, so that users know whether "link_speeds" is valid in
advance.
Fixes: bdaf190f8235 ("net/hns3: support link speed autoneg for PF")
Fixes: 400d307e1a60 ("net/hns3: support fixed link speed")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the fdir lock is used to protect concurrent access from
multiple processes; it has the following problems:
1) Lack of protection for fdir reset recovery.
2) Only part of the data is protected, e.g. the filter list is not
protected.
We use the following scheme:
1) Delete the fdir lock.
2) Add a flow lock and provide protection at the rte flow driver ops
API level.
3) Declare support for RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE.
Fixes: fcba820d9b9e ("net/hns3: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The HNS3 PF driver only supports the RSS, DCB or NONE multiple-queue
modes. Currently, the driver does not completely verify the VMDq
multi-queue mode. This patch fixes the verification for VMDq mode.
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, "ONLY DCB" and "DCB+RSS" mode are both supported by HNS3
PF driver. But the driver verifies only the "DCB+RSS" multiple queues
mode.
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Some mbx messages do not need a data reply. In this case,
there is no need to set the response data address and the response
length.
This patch removes this redundant code from mbx messages that do
not need a reply.
Fixes: a5475d61fa34 ("net/hns3: support VF")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The Rx vector implementation depends on the layout of the mbuf fields
(such as rearm_data/rx_descriptor_fields1); this patch adds
compile-time verification for this.
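For illustration, such checks are typically expressed with
RTE_BUILD_BUG_ON over the mbuf layout, along these lines (a sketch,
not the exact assertions added by the patch):

#include <stddef.h>
#include <rte_common.h>
#include <rte_mbuf.h>

/* Fail the build if the fields the vector path accesses as one block
 * ever move relative to the rearm_data/rx_descriptor_fields1 markers. */
static inline void
rxq_vec_layout_check(void)
{
        RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
                         offsetof(struct rte_mbuf, rearm_data));
        RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
                         offsetof(struct rte_mbuf,
                                  rx_descriptor_fields1) + 4);
}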
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed.
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created, it
can be used in flow creation. Before updating, the same
structure is also needed, together with the update command
"conntrack_update", to update the "dir" or "ctx".
After the flow with the conntrack action is created, packets should
jump to the next flow for the result checking with the conntrack item.
The state is defined with bits, and any valid combination can be
supported.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
This commit introduces the conntrack action and item.
Usually, HW offloading is stateless. For some stateful offloading,
like a TCP connection, the HW module can provide
full offloading without SW participation once the connection has been
established.
The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.
A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets of both
directions is required.
The conntrack action should be created on one ethdev port, supplying
the peer ethdev port as a parameter to the action. After the context
is created, it can only be used between these two ethdev ports
(dual-port mode) or on a single port. The application should modify
the action via the "rte_flow_action_handle_update" API only before
using it to create a flow rule with conntrack for the opposite
direction. This helps the driver recognize the direction of the flow
to be created, especially in single-port mode, in which case the
traffic from both directions goes through the same ethdev port
if the application works as a "forwarding engine" rather than an end
point. There is no need to call the update interface if the
subsequent flow rules have nothing to be changed.
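A minimal sketch of the expected usage from the application side
(field values are illustrative; a real context must carry the full
TCP state learned from both directions):

#include <rte_flow.h>

/* Illustrative: create a shared conntrack context on port 0 whose
 * peer for the opposite direction is port 1. */
static struct rte_flow_action_handle *
ct_handle_create(struct rte_flow_error *err)
{
        struct rte_flow_action_conntrack ct = {
                .peer_port = 1,
                .is_original_dir = 1,
                .enable = 1,
                .state = RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
                /* ... remaining TCP state from the observed setup ... */
        };
        struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        struct rte_flow_action action = {
                .type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
                .conf = &ct,
        };

        return rte_flow_action_handle_create(0, &conf, &action, err);
}

The returned handle is then referenced by flow rules on both ports,
with rte_flow_action_handle_update called before creating the rule
for the opposite direction.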
Query will be supported via the "rte_flow_action_handle_query"
interface, reporting the current packet information and connection
status. The query capabilities of the fields depend on the HW.
For the packets received during the conntrack setup, it is suggested
to re-inject them in order to make sure the conntrack module
works correctly without missing any packet. Only valid packets
should pass the conntrack; packets with invalid TCP information,
like out of window, or with an invalid header, like malformed, should
not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add the new items to support the flow configuration for IP fragment
packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
The i40evf PMD will be deprecated; iavf will be the only VF driver
for the Intel 700 series (i40e) NIC family.
To reach this, there will be 2 steps:
Step 1: iavf becomes the default VF driver, while i40evf can still be
selected via the devarg "driver=i40evf".
This is covered by this patch, which includes:
1) add all 700 series NIC VF device IDs into the iavf PMD
2) skip probe if devargs contain "driver=i40evf" in iavf
3) continue probe if devargs contain "driver=i40evf" in i40evf
Step 2: i40evf and the related devarg are removed; this will happen in
DPDK 21.11.
Between step 1 and step 2, no new feature will be added to i40evf
except bug fixes.
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Due to hardware limitations, the VLAN push/pop and decap actions
following the sample action are supported in the FDB Tx steering
domain only; flows with an incorrect action order in other domains
are rejected by rdma-core.
To provide the action order requested in the flow API, this patch
checks whether VLAN or decap actions precede the sample action, moves
the VLAN or decap actions into the next flow in a new table, and adds
a jump action to the prefix sample flow.
This patch also adds validation for these action combinations.
Fixes: 255b8f86eb6e ("net/mlx5: fix E-Switch egress mirror flow validation")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Modify the mlx5_flow_dev_dump API to support dumping a single flow.
Modify mlx5_socket since one extra argument, flow_ptr, is added.
The data structure sent to DPDK application from the utility triggering
the flow dumps should be packed and endianness must be specified.
The native host endianness can be used, all exchange happens within
the same host (we use sendmsg aux data and share the file handle,
remote approach is not applicable, no inter-host communication happens).
The message structure to dump one/all flow(s):
struct mlx5_flow_dump_req {
uint32_t port_id;
uint64_t flow_ptr;
} __rte_packed;
If flow_ptr is 0, all flows for the specified port will be dumped.
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds a check of the max SIMD bitwidth when choosing the
NEON and SVE vector paths.
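The selection presumably keys off the generic EAL limit; a sketch
(the SVE threshold in particular is illustrative):

#include <stdbool.h>
#include <rte_vect.h>

/* Illustrative: only take a vector path when the EAL max SIMD
 * bitwidth allows it. */
static bool
neon_path_allowed(void)
{
        return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128;
}

static bool
sve_path_allowed(void)
{
        return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256;
}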
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the L3L4P/L3E/L4E/OL3E/OL4E fields in the Rx descriptor
indicate the hardware checksum result:
1. L3L4P: indicates hardware has processed the L3/L4 checksums for
this packet; if this bit is 1 then L3E/L4E/OL3E/OL4E are trustable.
2. L3E: L3 checksum error indication, 1 means with error.
3. L4E: L4 checksum error indication, 1 means with error.
4. OL3E: outer L3 checksum error indication, 1 means with error.
5. OL4E: outer L4 checksum error indication, 1 means with error.
The driver sets the good checksum flags based on the packet type and
L3E/L4E/OL3E/OL4E when L3L4P is 1, as follows:
1. If packet type indicates it's tunnel packet:
1.1. If packet type indicates it has inner L3 and L3E is zero, then
mark the IP checksum good.
1.2. If packet type indicates it has inner L4 and L4E is zero, then
mark the L4 checksum good.
1.3. If packet type indicates it has outer L4 and OL4E is zero, then
mark the outer L4 checksum good.
2. If packet type indicates it's not tunnel packet:
2.1. If packet type indicates it has L3 and L3E is zero, then mark the
IP checksum good.
2.2. If packet type indicates it has L4 and L4E is zero, then mark the
L4 checksum good.
As described above, the good checksum calculation is time-consuming
and impacts Rx performance.
Balancing performance and functionality, the driver uses the following
scheme to set the good checksum flags when L3L4P is 1:
1. If L3E is zero, then mark the IP checksum good.
2. If L4E is zero, then mark the L4 checksum good.
The performance gain is 3% in small-packet iofwd scenarios.
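A sketch of the simplified scheme (the bit macros are hns3
driver-internal names; the helper itself is illustrative):

#include <stdint.h>
#include <rte_mbuf.h>

/* Illustrative: with L3L4P set, derive the good-checksum flags from
 * L3E/L4E alone, skipping the packet-type classification. */
static inline uint64_t
rxd_csum_flags(uint32_t bd_base_info)
{
        uint64_t flags = 0;

        if (!(bd_base_info & (1u << HNS3_RXD_L3L4P_B)))
                return 0; /* HW did not process the checksums */
        if (!(bd_base_info & (1u << HNS3_RXD_L3E_B)))
                flags |= PKT_RX_IP_CKSUM_GOOD;
        if (!(bd_base_info & (1u << HNS3_RXD_L4E_B)))
                flags |= PKT_RX_L4_CKSUM_GOOD;
        return flags;
}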
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch supports runtime configuration of a device capability mask,
which is used to mask the capabilities queried from firmware.
The device argument key is "dev_caps_mask", which takes a hexadecimal
bitmask where each bit indicates whether to mask the corresponding
capability.
Its main purpose is debugging and problem avoidance.
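For example (device address and mask value illustrative), masking the
low four capability bits from testpmd:

dpdk-testpmd -a 0000:7d:00.0,dev_caps_mask=0xF -- -i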
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The integrity item allows the application to match
on the integrity of a packet.
Usage example:
Match that the packet integrity checks are OK. The checks depend on
the packet layers; for example, an ICMP packet will not check the L4
level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
Match that L4 packet is OK - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, a DPDK application can offload the checksum check
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive, meaning if the packet is
valid, jump / do actions; or negative, meaning if the packet is not
valid, jump to SW / do actions (like drop), and add a default flow
(matching all in low priority) that directs the missed packets
to the miss path.
Since rte_flow currently works in the positive way, the assumption is
that the positive way will be the common way in this case also.
When thinking about the best API to implement such a feature,
we need to consider the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: add integrity flags to each of the items.
For example, add checksum_ok to the IPv4 item.
Pros:
1. No new rte_flow item.
2. Simple, in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. Increases the number of flows, since the app can't add a global
rule and must have a dedicated flow for each of the flow combinations;
for example, matching on ICMP traffic or UDP/TCP traffic with
IPv4 / IPv6 will result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and none is expected for some time due to having
extra space (by using bits).
2. Just one flow to support the ICMP or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity: the application can just look at one place to see all
possible checks.
4. Allows future support for more tests.
Cons:
1. A new item that holds a number of fields from different items.
For a start, the following bits are suggested:
1. packet_ok - means that all HW checks depending on the packet layers
have passed. This may mean that in some HW such a flow should be split
into a number of flows, or fail.
2. l2_ok - all checks for layer 2 have passed.
3. l3_ok - all checks for layer 3 have passed. If the packet doesn't
have an L3 layer, this check should fail.
4. l4_ok - all checks for layer 4 have passed. If the packet doesn't
have an L4 layer, this check should fail.
5. l2_crc_ok - the layer 2 CRC is O.K.
6. ipv4_csum_ok - the IPv4 checksum is O.K. It is possible that the
IPv4 checksum will be O.K. but l3_ok will be 0. It is not possible
that ipv4_csum_ok will be 0 and l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_ok - check that the reported layer 3 length is smaller than
the frame length.
Example of usage:
1. Check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packets with layer 4 (UDP / TCP):
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
l4_ok = 1
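In rte_flow C terms, example 2 looks roughly like this (a sketch; only
the integrity item of the pattern is shown):

#include <rte_flow.h>

/* Illustrative: match packets whose L3 and L4 checks all passed. */
static const struct rte_flow_item_integrity integrity_spec = {
        .l3_ok = 1,
        .l4_ok = 1,
};
static const struct rte_flow_item_integrity integrity_mask = {
        .l3_ok = 1,
        .l4_ok = 1,
};
static const struct rte_flow_item pattern[] = {
        {
                .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
                .spec = &integrity_spec,
                .mask = &integrity_mask,
        },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};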
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>