/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright 2016 6WIND S.A.
 * Copyright 2016 Mellanox Technologies, Ltd
 */

#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#include <rte_common.h>
#include <rte_errno.h>
#include <rte_branch_prediction.h>
#include <rte_string_fns.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include "rte_ethdev.h"
#include "rte_flow_driver.h"
#include "rte_flow.h"

/* Mbuf dynamic field name for metadata. */
int32_t rte_flow_dynf_metadata_offs = -1;

/* Mbuf dynamic field flag bit number for metadata. */
uint64_t rte_flow_dynf_metadata_mask;
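
/*
 * Usage sketch (illustrative only, not part of this file's logic): an
 * application registers the metadata dynamic field before creating flows
 * that use the META item or the SET_META action, then reads the value on
 * the Rx path. Error handling is left to the application.
 *
 *	int ret = rte_flow_dynf_metadata_register();
 *
 *	if (ret < 0)
 *		return ret;
 *	...
 *	if (mbuf->ol_flags & PKT_RX_DYNF_METADATA)
 *		metadata = rte_flow_dynf_metadata_get(mbuf);
 */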

/**
 * Flow elements description tables.
 */
struct rte_flow_desc_data {
	const char *name;
	size_t size;
};

/** Generate flow_item[] entry. */
#define MK_FLOW_ITEM(t, s) \
	[RTE_FLOW_ITEM_TYPE_ ## t] = { \
		.name = # t, \
		.size = s, \
	}
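
/*
 * For reference, MK_FLOW_ITEM(ETH, sizeof(struct rte_flow_item_eth))
 * expands to a designated initializer roughly equivalent to this sketch:
 *
 *	[RTE_FLOW_ITEM_TYPE_ETH] = {
 *		.name = "ETH",
 *		.size = sizeof(struct rte_flow_item_eth),
 *	}
 */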

/** Information about known flow pattern items. */
static const struct rte_flow_desc_data rte_flow_desc_item[] = {
	MK_FLOW_ITEM(END, 0),
	MK_FLOW_ITEM(VOID, 0),
	MK_FLOW_ITEM(INVERT, 0),
	MK_FLOW_ITEM(ANY, sizeof(struct rte_flow_item_any)),
	MK_FLOW_ITEM(PF, 0),
	MK_FLOW_ITEM(VF, sizeof(struct rte_flow_item_vf)),
	MK_FLOW_ITEM(PHY_PORT, sizeof(struct rte_flow_item_phy_port)),
	MK_FLOW_ITEM(PORT_ID, sizeof(struct rte_flow_item_port_id)),
	MK_FLOW_ITEM(RAW, sizeof(struct rte_flow_item_raw)),
	MK_FLOW_ITEM(ETH, sizeof(struct rte_flow_item_eth)),
	MK_FLOW_ITEM(VLAN, sizeof(struct rte_flow_item_vlan)),
	MK_FLOW_ITEM(IPV4, sizeof(struct rte_flow_item_ipv4)),
	MK_FLOW_ITEM(IPV6, sizeof(struct rte_flow_item_ipv6)),
	MK_FLOW_ITEM(ICMP, sizeof(struct rte_flow_item_icmp)),
	MK_FLOW_ITEM(UDP, sizeof(struct rte_flow_item_udp)),
	MK_FLOW_ITEM(TCP, sizeof(struct rte_flow_item_tcp)),
	MK_FLOW_ITEM(SCTP, sizeof(struct rte_flow_item_sctp)),
	MK_FLOW_ITEM(VXLAN, sizeof(struct rte_flow_item_vxlan)),
	MK_FLOW_ITEM(E_TAG, sizeof(struct rte_flow_item_e_tag)),
	MK_FLOW_ITEM(NVGRE, sizeof(struct rte_flow_item_nvgre)),
	MK_FLOW_ITEM(MPLS, sizeof(struct rte_flow_item_mpls)),
	MK_FLOW_ITEM(GRE, sizeof(struct rte_flow_item_gre)),
	MK_FLOW_ITEM(FUZZY, sizeof(struct rte_flow_item_fuzzy)),
	MK_FLOW_ITEM(GTP, sizeof(struct rte_flow_item_gtp)),
	MK_FLOW_ITEM(GTPC, sizeof(struct rte_flow_item_gtp)),
	MK_FLOW_ITEM(GTPU, sizeof(struct rte_flow_item_gtp)),
	MK_FLOW_ITEM(ESP, sizeof(struct rte_flow_item_esp)),
	MK_FLOW_ITEM(GENEVE, sizeof(struct rte_flow_item_geneve)),
	MK_FLOW_ITEM(VXLAN_GPE, sizeof(struct rte_flow_item_vxlan_gpe)),
	MK_FLOW_ITEM(ARP_ETH_IPV4, sizeof(struct rte_flow_item_arp_eth_ipv4)),
	MK_FLOW_ITEM(IPV6_EXT, sizeof(struct rte_flow_item_ipv6_ext)),
	MK_FLOW_ITEM(IPV6_FRAG_EXT, sizeof(struct rte_flow_item_ipv6_frag_ext)),
	MK_FLOW_ITEM(ICMP6, sizeof(struct rte_flow_item_icmp6)),
	MK_FLOW_ITEM(ICMP6_ND_NS, sizeof(struct rte_flow_item_icmp6_nd_ns)),
	MK_FLOW_ITEM(ICMP6_ND_NA, sizeof(struct rte_flow_item_icmp6_nd_na)),
	MK_FLOW_ITEM(ICMP6_ND_OPT, sizeof(struct rte_flow_item_icmp6_nd_opt)),
	MK_FLOW_ITEM(ICMP6_ND_OPT_SLA_ETH,
		     sizeof(struct rte_flow_item_icmp6_nd_opt_sla_eth)),
	MK_FLOW_ITEM(ICMP6_ND_OPT_TLA_ETH,
		     sizeof(struct rte_flow_item_icmp6_nd_opt_tla_eth)),
	MK_FLOW_ITEM(MARK, sizeof(struct rte_flow_item_mark)),
	MK_FLOW_ITEM(META, sizeof(struct rte_flow_item_meta)),
	MK_FLOW_ITEM(TAG, sizeof(struct rte_flow_item_tag)),
	MK_FLOW_ITEM(GRE_KEY, sizeof(rte_be32_t)),
	MK_FLOW_ITEM(GTP_PSC, sizeof(struct rte_flow_item_gtp_psc)),
	MK_FLOW_ITEM(PPPOES, sizeof(struct rte_flow_item_pppoe)),
	MK_FLOW_ITEM(PPPOED, sizeof(struct rte_flow_item_pppoe)),
	MK_FLOW_ITEM(PPPOE_PROTO_ID,
		     sizeof(struct rte_flow_item_pppoe_proto_id)),
	MK_FLOW_ITEM(NSH, sizeof(struct rte_flow_item_nsh)),
	MK_FLOW_ITEM(IGMP, sizeof(struct rte_flow_item_igmp)),
	MK_FLOW_ITEM(AH, sizeof(struct rte_flow_item_ah)),
	MK_FLOW_ITEM(HIGIG2, sizeof(struct rte_flow_item_higig2_hdr)),
	MK_FLOW_ITEM(L2TPV3OIP, sizeof(struct rte_flow_item_l2tpv3oip)),
	MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
	MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
};
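
/*
 * Lookup sketch (assumed usage, no such helper is defined at this point in
 * the file): the table is indexed by item type to recover the name and the
 * size of the item-specific spec/last/mask structure, e.g.:
 *
 *	const struct rte_flow_desc_data *desc = &rte_flow_desc_item[type];
 *	size_t spec_size = desc->size;
 */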

/** Generate flow_action[] entry. */
#define MK_FLOW_ACTION(t, s) \
	[RTE_FLOW_ACTION_TYPE_ ## t] = { \
		.name = # t, \
		.size = s, \
	}

/** Information about known flow actions. */
static const struct rte_flow_desc_data rte_flow_desc_action[] = {
	MK_FLOW_ACTION(END, 0),
	MK_FLOW_ACTION(VOID, 0),
	MK_FLOW_ACTION(PASSTHRU, 0),
	MK_FLOW_ACTION(JUMP, sizeof(struct rte_flow_action_jump)),
	MK_FLOW_ACTION(MARK, sizeof(struct rte_flow_action_mark)),
	MK_FLOW_ACTION(FLAG, 0),
	MK_FLOW_ACTION(QUEUE, sizeof(struct rte_flow_action_queue)),
	MK_FLOW_ACTION(DROP, 0),
	MK_FLOW_ACTION(COUNT, sizeof(struct rte_flow_action_count)),
	MK_FLOW_ACTION(RSS, sizeof(struct rte_flow_action_rss)),
	MK_FLOW_ACTION(PF, 0),
	MK_FLOW_ACTION(VF, sizeof(struct rte_flow_action_vf)),
	MK_FLOW_ACTION(PHY_PORT, sizeof(struct rte_flow_action_phy_port)),
	MK_FLOW_ACTION(PORT_ID, sizeof(struct rte_flow_action_port_id)),
	MK_FLOW_ACTION(METER, sizeof(struct rte_flow_action_meter)),
	MK_FLOW_ACTION(SECURITY, sizeof(struct rte_flow_action_security)),
	MK_FLOW_ACTION(OF_SET_MPLS_TTL,
		       sizeof(struct rte_flow_action_of_set_mpls_ttl)),
	MK_FLOW_ACTION(OF_DEC_MPLS_TTL, 0),
	MK_FLOW_ACTION(OF_SET_NW_TTL,
		       sizeof(struct rte_flow_action_of_set_nw_ttl)),
	MK_FLOW_ACTION(OF_DEC_NW_TTL, 0),
	MK_FLOW_ACTION(OF_COPY_TTL_OUT, 0),
	MK_FLOW_ACTION(OF_COPY_TTL_IN, 0),
	MK_FLOW_ACTION(OF_POP_VLAN, 0),
	MK_FLOW_ACTION(OF_PUSH_VLAN,
		       sizeof(struct rte_flow_action_of_push_vlan)),
	MK_FLOW_ACTION(OF_SET_VLAN_VID,
		       sizeof(struct rte_flow_action_of_set_vlan_vid)),
	MK_FLOW_ACTION(OF_SET_VLAN_PCP,
		       sizeof(struct rte_flow_action_of_set_vlan_pcp)),
	MK_FLOW_ACTION(OF_POP_MPLS,
		       sizeof(struct rte_flow_action_of_pop_mpls)),
	MK_FLOW_ACTION(OF_PUSH_MPLS,
		       sizeof(struct rte_flow_action_of_push_mpls)),
	MK_FLOW_ACTION(VXLAN_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
	MK_FLOW_ACTION(VXLAN_DECAP, 0),
	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
	MK_FLOW_ACTION(NVGRE_DECAP, 0),
	MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct rte_flow_action_raw_encap)),
	MK_FLOW_ACTION(RAW_DECAP, sizeof(struct rte_flow_action_raw_decap)),
	MK_FLOW_ACTION(SET_IPV4_SRC,
		       sizeof(struct rte_flow_action_set_ipv4)),
	MK_FLOW_ACTION(SET_IPV4_DST,
		       sizeof(struct rte_flow_action_set_ipv4)),
	MK_FLOW_ACTION(SET_IPV6_SRC,
		       sizeof(struct rte_flow_action_set_ipv6)),
	MK_FLOW_ACTION(SET_IPV6_DST,
		       sizeof(struct rte_flow_action_set_ipv6)),
	MK_FLOW_ACTION(SET_TP_SRC,
		       sizeof(struct rte_flow_action_set_tp)),
	MK_FLOW_ACTION(SET_TP_DST,
		       sizeof(struct rte_flow_action_set_tp)),
	MK_FLOW_ACTION(MAC_SWAP, 0),
	MK_FLOW_ACTION(DEC_TTL, 0),
	MK_FLOW_ACTION(SET_TTL, sizeof(struct rte_flow_action_set_ttl)),
	MK_FLOW_ACTION(SET_MAC_SRC, sizeof(struct rte_flow_action_set_mac)),
	MK_FLOW_ACTION(SET_MAC_DST, sizeof(struct rte_flow_action_set_mac)),
	MK_FLOW_ACTION(INC_TCP_SEQ, sizeof(rte_be32_t)),
	MK_FLOW_ACTION(DEC_TCP_SEQ, sizeof(rte_be32_t)),
	MK_FLOW_ACTION(INC_TCP_ACK, sizeof(rte_be32_t)),
	MK_FLOW_ACTION(DEC_TCP_ACK, sizeof(rte_be32_t)),
	MK_FLOW_ACTION(SET_TAG, sizeof(struct rte_flow_action_set_tag)),
	MK_FLOW_ACTION(SET_META, sizeof(struct rte_flow_action_set_meta)),
	MK_FLOW_ACTION(SET_IPV4_DSCP, sizeof(struct rte_flow_action_set_dscp)),
	MK_FLOW_ACTION(SET_IPV6_DSCP, sizeof(struct rte_flow_action_set_dscp)),
	MK_FLOW_ACTION(AGE, sizeof(struct rte_flow_action_age)),
	MK_FLOW_ACTION(SAMPLE, sizeof(struct rte_flow_action_sample)),
	MK_FLOW_ACTION(MODIFY_FIELD,
		       sizeof(struct rte_flow_action_modify_field)),
|
ethdev: add shared actions to flow API
Introduce extension of flow action API enabling sharing of single
rte_flow_action in multiple flows. The API intended for PMDs, where
multiple HW offloaded flows can reuse the same HW essence/object
representing flow action and modification of such an essence/object
affects all the rules using it.
Motivation and example
===
Adding or removing one or more queues to RSS used by multiple flow rules
imposes per rule toll for current DPDK flow API; the scenario requires
for each flow sharing cloned RSS action:
- call `rte_flow_destroy()`
- call `rte_flow_create()` with modified RSS action
API for sharing action and its in-place update benefits:
- reduce the overhead of multiple RSS flow rules reconfiguration
- optimize resource utilization by sharing action across multiple
flows
Change description
===
Shared action
===
In order to represent flow action shared by multiple flows new action
type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
rte_flow_action_type`).
Actually the introduced API decouples action from any specific flow and
enables sharing of single action by its handle across multiple flows.
Shared action create/use/destroy
===
Shared action may be reused by some or none flow rules at any given
moment, i.e. shared action resides outside of the context of any flow.
Shared action represent HW resources/objects used for action offloading
implementation.
API for shared action create (see `rte_flow_shared_action_create()`):
- should allocate HW resources and make related initializations required
for shared action implementation.
- make necessary preparations to maintain shared access to
the action resources, configuration and state.
API for shared action destroy (see `rte_flow_shared_action_destroy()`)
should release HW resources and make related cleanups required for shared
action implementation.
In order to share some flow action reuse the handle of type
`struct rte_flow_shared_action` returned by
rte_flow_shared_action_create() as a `conf` field of
`struct rte_flow_action` (see "example" section).
If some shared action not used by any flow rule all resources allocated
by the shared action can be released by rte_flow_shared_action_destroy()
(see "example" section). The shared action handle passed as argument to
destroy API should not be used any further i.e. result of the usage is
undefined.
Shared action re-configuration
===
Shared action behavior defined by its configuration can be updated via
rte_flow_shared_action_update() (see "example" section). The shared
action update operation modifies HW related resources/objects allocated
on the action creation. The number of operations performed by the update
operation should not depend on the number of flows sharing the related
action. On return of shared action update API action behavior should be
according to updated configuration for all flows sharing the action.
Shared action query
===
Provide separate API to query shared action state (see
rte_flow_shared_action_update()). Taking a counter as an example: query
returns value aggregating all counter increments across all flow rules
sharing the counter. This API doesn't query shared action configuration
since it is controlled by rte_flow_shared_action_create() and
rte_flow_shared_action_update() APIs and no supposed to change by other
means.
example
===
struct rte_flow_action actions[2];
struct rte_flow_shared_action_conf conf;
struct rte_flow_action action;
/* skipped: initialize conf and action */
struct rte_flow_shared_action *handle =
rte_flow_shared_action_create(port_id, &conf, &action, &error);
actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
actions[0].conf = handle;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
/* skipped: init attr0 & pattern0 args */
struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
actions, error);
/* create more rules reusing shared action */
struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
actions, error);
/* skipped: for flows 2 till N */
struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
actions, error);
/* update shared action */
struct rte_flow_action updated_action;
/*
* skipped: initialize updated_action according to desired action
* configuration change
*/
rte_flow_shared_action_update(port_id, handle, &updated_action, error);
/*
* from now on all flows 1 till N will act according to configuration of
* updated_action
*/
/* skipped: destroy all flows 1 till N */
rte_flow_shared_action_destroy(port_id, handle, error);
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-14 11:40:14 +00:00
|
|
|
/**
|
ethdev: introduce indirect flow action
Right now, rte_flow_shared_action_* APIs are used for some shared
actions, like RSS, count. The shared action should be created before
using it inside a flow. These shared actions sometimes are not
really shared but just some indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that could be created and stored
within a flow rule. Such action is tied to its flow rule and
cannot be reused.
2. the indirect action, in the past, named shared_action. It is
created from a direct actioni, like count or rss, and then used
in the flow rules with an object handle. The PMD will take care
of the retrieve from indirect action to the direct action
when it is referenced.
The indirect action is accessed (update / query) w/o any flow rule,
just via the action object handle. For example, when querying or
resetting a counter, it could be done out of any flow using this
counter, but only the handle of the counter action object is
required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to do the adaption.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.
The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
counter, the only "update" supported should be the reset. So
passing a rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operations should be supported, for some other
action, such pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
parameter will not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same of rte_flow_action* struct - in order not to
break the current driver implementation, or some wrapper
structures with bits as masks to indicate which part to be
updated, depending on real needs of the corresponding direct
action. For different direct actions, the structures of indirect
action objects updating will be different.
All the underlayer PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to generic pointer, the mlx5 PMD that uses these
APIs needs to do the adaption to the new APIs as well.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 14:38:29 +00:00
|
|
|
* Indirect action represented as handle of type
|
|
|
|
* (struct rte_flow_action_handle *) stored in conf field (see
|
ethdev: add shared actions to flow API
Introduce extension of flow action API enabling sharing of single
rte_flow_action in multiple flows. The API intended for PMDs, where
multiple HW offloaded flows can reuse the same HW essence/object
representing flow action and modification of such an essence/object
affects all the rules using it.
Motivation and example
===
Adding or removing one or more queues to RSS used by multiple flow rules
imposes per rule toll for current DPDK flow API; the scenario requires
for each flow sharing cloned RSS action:
- call `rte_flow_destroy()`
- call `rte_flow_create()` with modified RSS action
API for sharing action and its in-place update benefits:
- reduce the overhead of multiple RSS flow rules reconfiguration
- optimize resource utilization by sharing action across multiple
flows
Change description
===
Shared action
===
In order to represent flow action shared by multiple flows new action
type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
rte_flow_action_type`).
Actually the introduced API decouples action from any specific flow and
enables sharing of single action by its handle across multiple flows.
Shared action create/use/destroy
===
Shared action may be reused by some or none flow rules at any given
moment, i.e. shared action resides outside of the context of any flow.
Shared action represent HW resources/objects used for action offloading
implementation.
API for shared action create (see `rte_flow_shared_action_create()`):
- should allocate HW resources and make related initializations required
for shared action implementation.
- make necessary preparations to maintain shared access to
the action resources, configuration and state.
API for shared action destroy (see `rte_flow_shared_action_destroy()`)
should release HW resources and make related cleanups required for shared
action implementation.
In order to share some flow action reuse the handle of type
`struct rte_flow_shared_action` returned by
rte_flow_shared_action_create() as a `conf` field of
`struct rte_flow_action` (see "example" section).
If some shared action not used by any flow rule all resources allocated
by the shared action can be released by rte_flow_shared_action_destroy()
(see "example" section). The shared action handle passed as argument to
destroy API should not be used any further i.e. result of the usage is
undefined.
Shared action re-configuration
===
Shared action behavior defined by its configuration can be updated via
rte_flow_shared_action_update() (see "example" section). The shared
action update operation modifies HW related resources/objects allocated
on the action creation. The number of operations performed by the update
operation should not depend on the number of flows sharing the related
action. On return of shared action update API action behavior should be
according to updated configuration for all flows sharing the action.
Shared action query
===
Provide separate API to query shared action state (see
rte_flow_shared_action_update()). Taking a counter as an example: query
returns value aggregating all counter increments across all flow rules
sharing the counter. This API doesn't query shared action configuration
since it is controlled by rte_flow_shared_action_create() and
rte_flow_shared_action_update() APIs and no supposed to change by other
means.
example
===
struct rte_flow_action actions[2];
struct rte_flow_shared_action_conf conf;
struct rte_flow_action action;
/* skipped: initialize conf and action */
struct rte_flow_shared_action *handle =
rte_flow_shared_action_create(port_id, &conf, &action, &error);
actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
actions[0].conf = handle;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
/* skipped: init attr0 & pattern0 args */
struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
actions, error);
/* create more rules reusing shared action */
struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
actions, error);
/* skipped: for flows 2 till N */
struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
actions, error);
/* update shared action */
struct rte_flow_action updated_action;
/*
* skipped: initialize updated_action according to desired action
* configuration change
*/
rte_flow_shared_action_update(port_id, handle, &updated_action, error);
/*
* from now on all flows 1 till N will act according to configuration of
* updated_action
*/
/* skipped: destroy all flows 1 till N */
rte_flow_shared_action_destroy(port_id, handle, error);
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-14 11:40:14 +00:00
|
|
|
* struct rte_flow_action); no need for additional structure to * store
|
ethdev: introduce indirect flow action
Right now, rte_flow_shared_action_* APIs are used for some shared
actions, like RSS, count. The shared action should be created before
using it inside a flow. These shared actions sometimes are not
really shared but just some indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that could be created and stored
within a flow rule. Such action is tied to its flow rule and
cannot be reused.
2. the indirect action, in the past, named shared_action. It is
created from a direct actioni, like count or rss, and then used
in the flow rules with an object handle. The PMD will take care
of the retrieve from indirect action to the direct action
when it is referenced.
The indirect action is accessed (update / query) w/o any flow rule,
just via the action object handle. For example, when querying or
resetting a counter, it could be done out of any flow using this
counter, but only the handle of the counter action object is
required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to do the adaption.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.
The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
counter, the only "update" supported should be the reset. So
passing a rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operations should be supported, for some other
action, such pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
parameter will not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same of rte_flow_action* struct - in order not to
break the current driver implementation, or some wrapper
structures with bits as masks to indicate which part to be
updated, depending on real needs of the corresponding direct
action. For different direct actions, the structures of indirect
action objects updating will be different.
All the underlayer PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to generic pointer, the mlx5 PMD that uses these
APIs needs to do the adaption to the new APIs as well.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 14:38:29 +00:00
|
|
|
* indirect action handle.
|
ethdev: add shared actions to flow API
Introduce extension of flow action API enabling sharing of single
rte_flow_action in multiple flows. The API intended for PMDs, where
multiple HW offloaded flows can reuse the same HW essence/object
representing flow action and modification of such an essence/object
affects all the rules using it.
Motivation and example
===
Adding or removing one or more queues to RSS used by multiple flow rules
imposes per rule toll for current DPDK flow API; the scenario requires
for each flow sharing cloned RSS action:
- call `rte_flow_destroy()`
- call `rte_flow_create()` with modified RSS action
API for sharing action and its in-place update benefits:
- reduce the overhead of multiple RSS flow rules reconfiguration
- optimize resource utilization by sharing action across multiple
flows
Change description
===
Shared action
===
In order to represent flow action shared by multiple flows new action
type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
rte_flow_action_type`).
Actually the introduced API decouples action from any specific flow and
enables sharing of single action by its handle across multiple flows.
Shared action create/use/destroy
===
Shared action may be reused by some or none flow rules at any given
moment, i.e. shared action resides outside of the context of any flow.
Shared action represent HW resources/objects used for action offloading
implementation.
API for shared action create (see `rte_flow_shared_action_create()`):
- should allocate HW resources and make related initializations required
for shared action implementation.
- make necessary preparations to maintain shared access to
the action resources, configuration and state.
API for shared action destroy (see `rte_flow_shared_action_destroy()`)
should release HW resources and make related cleanups required for shared
action implementation.
In order to share some flow action reuse the handle of type
`struct rte_flow_shared_action` returned by
rte_flow_shared_action_create() as a `conf` field of
`struct rte_flow_action` (see "example" section).
If some shared action not used by any flow rule all resources allocated
by the shared action can be released by rte_flow_shared_action_destroy()
(see "example" section). The shared action handle passed as argument to
destroy API should not be used any further i.e. result of the usage is
undefined.
Shared action re-configuration
===
Shared action behavior defined by its configuration can be updated via
rte_flow_shared_action_update() (see "example" section). The shared
action update operation modifies HW related resources/objects allocated
on the action creation. The number of operations performed by the update
operation should not depend on the number of flows sharing the related
action. On return of shared action update API action behavior should be
according to updated configuration for all flows sharing the action.
Shared action query
===
Provide separate API to query shared action state (see
rte_flow_shared_action_update()). Taking a counter as an example: query
returns value aggregating all counter increments across all flow rules
sharing the counter. This API doesn't query shared action configuration
since it is controlled by rte_flow_shared_action_create() and
rte_flow_shared_action_update() APIs and no supposed to change by other
means.
example
===
struct rte_flow_action actions[2];
struct rte_flow_shared_action_conf conf;
struct rte_flow_action action;
/* skipped: initialize conf and action */
struct rte_flow_shared_action *handle =
rte_flow_shared_action_create(port_id, &conf, &action, &error);
actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
actions[0].conf = handle;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
/* skipped: init attr0 & pattern0 args */
struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
actions, error);
/* create more rules reusing shared action */
struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
actions, error);
/* skipped: for flows 2 till N */
struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
actions, error);
/* update shared action */
struct rte_flow_action updated_action;
/*
* skipped: initialize updated_action according to desired action
* configuration change
*/
rte_flow_shared_action_update(port_id, handle, &updated_action, error);
/*
* from now on all flows 1 till N will act according to configuration of
* updated_action
*/
/* skipped: destroy all flows 1 till N */
rte_flow_shared_action_destroy(port_id, handle, error);
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-14 11:40:14 +00:00
|
|
|
*/
|
ethdev: introduce indirect flow action
Right now, rte_flow_shared_action_* APIs are used for some shared
actions, like RSS, count. The shared action should be created before
using it inside a flow. These shared actions sometimes are not
really shared but just some indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that could be created and stored
within a flow rule. Such action is tied to its flow rule and
cannot be reused.
2. the indirect action, in the past, named shared_action. It is
created from a direct actioni, like count or rss, and then used
in the flow rules with an object handle. The PMD will take care
of the retrieve from indirect action to the direct action
when it is referenced.
The indirect action is accessed (update / query) w/o any flow rule,
just via the action object handle. For example, when querying or
resetting a counter, it could be done out of any flow using this
counter, but only the handle of the counter action object is
required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to do the adaption.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.
The parameter of "update" interface is also changed. A general
pointer will replace the rte_flow_action struct pointer due to the
facts:
1. Some action may not support fields updating. In the example of a
counter, the only "update" supported should be the reset. So
passing a rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operations should be supported, for some other
action, such pointer parameter may not meet the need.
2. Some action may need conditional or partial update, the current
parameter will not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same of rte_flow_action* struct - in order not to
break the current driver implementation, or some wrapper
structures with bits as masks to indicate which part to be
updated, depending on real needs of the corresponding direct
action. For different direct actions, the structures of indirect
action objects updating will be different.
All the underlayer PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to generic pointer, the mlx5 PMD that uses these
APIs needs to do the adaption to the new APIs as well.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 14:38:29 +00:00
|
|
|
MK_FLOW_ACTION(INDIRECT, 0),
|
ethdev: introduce conntrack flow action and item
This commit introduces the conntrack action and item.
Usually, HW offloading is stateless. For some stateful offloading,
such as a TCP connection, the HW module can provide full offloading
without SW participation once the connection has been established.
The basic usage is that in the first flow rule the application adds
the conntrack action and jumps to the next flow table. In the
following flow rule(s) of the next table, the application uses the
conntrack item to match on the result.
A TCP connection carries traffic in two directions. To set up a
conntrack action context correctly, information from packets of
both directions is required.
The conntrack action should be created on one ethdev port, with the
peer ethdev port supplied as a parameter to the action. After the
context is created, it can only be used between these two ethdev
ports (dual-port mode) or on a single port. The application should
modify the action via the "rte_flow_action_handle_update" API only
before using it to create a flow rule with conntrack for the
opposite direction. This helps the driver recognize the direction
of the flow to be created, especially in single-port mode, where
the traffic from both directions goes through the same ethdev port
if the application works as a "forwarding engine" rather than an
end point. There is no need to call the update interface if the
subsequent flow rules have nothing to change.
Query is supported via the "rte_flow_action_handle_query" interface
and reports the current packet information and connection status.
The query capabilities of the fields depend on the HW.
For the packets received during the conntrack setup, it is suggested
to re-inject them in order to make sure the conntrack module works
correctly without missing any packet. Only valid packets should pass
the conntrack; packets with invalid TCP information (e.g. out of
window) or with an invalid header (e.g. malformed) should not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/
netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/
nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
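As a hypothetical sketch (not part of this patch), a rule in the next
table could match the conntrack result as below. The helper name
create_ct_match_rule is made up; the RTE_FLOW_CONNTRACK_PKT_STATE_VALID
flag name is taken from the rte_flow definitions:

static struct rte_flow *
create_ct_match_rule(uint16_t port_id)
{
	/* Group 1 is reached via a jump from the rule holding the
	 * conntrack action in group 0. */
	const struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
	const struct rte_flow_item_conntrack ct_spec = {
		.flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID,
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_CONNTRACK, .spec = &ct_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_queue queue = { .index = 0 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}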
2021-04-19 17:51:30 +00:00
|
|
|
MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
|
2017-07-07 00:08:31 +00:00
|
|
|
};
|
|
|
|
|
ethdev: extend flow metadata
So far, so good, DPDK proposes some entities to control metadata inside
the flow engine and gateways to exchange these values on a per-packet basis
via datapaths.
As we can see, the MARK and META means are not symmetric: there is no
action which would allow us to set the META value on the transmitting
path. So, the action of type:
- RTE_FLOW_ACTION_TYPE_SET_META was proposed.
Next, applications raise new requirements for packet metadata.
The flow engines are getting more complex, internal switches are
introduced, and multiple ports might be supported within the same
flow engine namespace. From the DPDK point of view, it means packets
might be sent on one eth_dev port and received on another one, while
the packet path inside the flow engine entirely belongs to the same
hardware device. The simplest example is SR-IOV with PF, VFs and the
representors. This is a good opportunity to provide an out-of-band
channel to transfer some extra data from one port to another, besides
the packet data itself, and applications would like to use it.
The application is supposed to use trials (with rte_flow_validate)
to detect which metadata features (FLAG, MARK, META) are actually
supported by the PMD and the underlying hardware. Support might
depend on the PMD configuration, system software, hardware settings,
etc., and should be detected at run time.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
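A minimal, hypothetical sketch of such a trial (not part of this
patch); it assumes the metadata dynamic field has already been
registered and uses a queue action only to complete the rule:

static int
probe_meta_match(uint16_t port_id)
{
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item_meta meta_spec = { .data = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_META, .spec = &meta_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_queue queue = { .index = 0 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* 0 means the PMD/hardware would accept such a rule. */
	return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}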
2019-11-05 14:19:30 +00:00
|
|
|
int
|
|
|
|
rte_flow_dynf_metadata_register(void)
|
|
|
|
{
|
|
|
|
int offset;
|
|
|
|
int flag;
|
|
|
|
|
|
|
|
static const struct rte_mbuf_dynfield desc_offs = {
|
|
|
|
.name = RTE_MBUF_DYNFIELD_METADATA_NAME,
|
|
|
|
.size = sizeof(uint32_t),
|
|
|
|
.align = __alignof__(uint32_t),
|
|
|
|
};
|
|
|
|
static const struct rte_mbuf_dynflag desc_flag = {
|
|
|
|
.name = RTE_MBUF_DYNFLAG_METADATA_NAME,
|
|
|
|
};
|
|
|
|
|
|
|
|
offset = rte_mbuf_dynfield_register(&desc_offs);
|
|
|
|
if (offset < 0)
|
|
|
|
goto error;
|
|
|
|
flag = rte_mbuf_dynflag_register(&desc_flag);
|
|
|
|
if (flag < 0)
|
|
|
|
goto error;
|
|
|
|
rte_flow_dynf_metadata_offs = offset;
|
|
|
|
rte_flow_dynf_metadata_mask = (1ULL << flag);
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
error:
|
|
|
|
rte_flow_dynf_metadata_offs = -1;
|
|
|
|
rte_flow_dynf_metadata_mask = 0ULL;
|
|
|
|
return -rte_errno;
|
|
|
|
}
|
|
|
|
|
ethdev: make flow API thread safe
Currently, the rte_flow functions are not defined as thread safe.
DPDK applications either call the functions from a single thread or
protect any concurrent calls to the rte_flow operations with a lock.
For PMDs that support thread-safe flow operations natively, the
redundant protection in the application hurts the performance of the
rte_flow operation functions. At the same time, the fact that thread
safety is not guaranteed for the rte_flow functions limits what
applications can expect from them.
This feature makes the rte_flow functions thread safe. As different
PMDs have different flow operations, some may already be thread safe
and others may not. For PMDs that do not support thread-safe flow
operations, a new lock is defined in ethdev in order to protect
thread-unsafe PMDs at the rte_flow level.
A new RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE device flag is added to
indicate whether the PMD supports thread-safe flow operations.
PMDs that support thread-safe flow operations set the
RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE flag, and the rte_flow level
functions skip the helper lock for them. Again, the rte_flow level
lock is only taken when the PMD operation functions are not
thread safe.
PMDs that do not want the default mutex simply set the flag and add
their preferred type of lock inside the PMD; the default mutex is
then easily replaced by the PMD-level lock.
The change has no effect on current DPDK applications and requires
no change from them. For the standard POSIX pthread_mutex, if there
is no contention on the added rte_flow level mutex, the mutex only
performs an atomic increment in pthread_mutex_lock() and a decrement
in pthread_mutex_unlock(); no futex() syscall is involved.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
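For example, a PMD whose flow operations are already thread safe could
opt out of the ethdev-level mutex during device initialization; the
function name below is hypothetical:

static int
example_pmd_dev_init(struct rte_eth_dev *eth_dev)
{
	/* Tell the rte_flow layer to skip its helper mutex. */
	eth_dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
	return 0;
}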
2020-10-15 01:07:47 +00:00
|
|
|
static inline void
|
|
|
|
fts_enter(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
|
|
|
|
pthread_mutex_lock(&dev->data->flow_ops_mutex);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
fts_exit(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
|
|
|
|
pthread_mutex_unlock(&dev->data->flow_ops_mutex);
|
|
|
|
}
|
|
|
|
|
2018-01-20 21:12:23 +00:00
|
|
|
static int
|
|
|
|
flow_err(uint16_t port_id, int ret, struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
if (ret == 0)
|
|
|
|
return 0;
|
|
|
|
if (rte_eth_dev_is_removed(port_id))
|
|
|
|
return rte_flow_error_set(error, EIO,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(EIO));
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-12-21 14:51:17 +00:00
|
|
|
/* Get generic flow operations structure from a port. */
|
|
|
|
const struct rte_flow_ops *
|
2017-09-29 07:17:24 +00:00
|
|
|
rte_flow_ops_get(uint16_t port_id, struct rte_flow_error *error)
|
2016-12-21 14:51:17 +00:00
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops;
|
|
|
|
int code;
|
|
|
|
|
|
|
|
if (unlikely(!rte_eth_dev_is_valid_port(port_id)))
|
|
|
|
code = ENODEV;
|
2021-03-21 09:00:00 +00:00
|
|
|
else if (unlikely(dev->dev_ops->flow_ops_get == NULL))
|
|
|
|
/* flow API not supported with this driver dev_ops */
|
2016-12-21 14:51:17 +00:00
|
|
|
code = ENOSYS;
|
|
|
|
else
|
2021-03-21 09:00:00 +00:00
|
|
|
code = dev->dev_ops->flow_ops_get(dev, &ops);
|
|
|
|
if (code == 0 && ops == NULL)
|
|
|
|
/* flow API not supported with this device */
|
|
|
|
code = ENOSYS;
|
|
|
|
|
|
|
|
if (code != 0) {
|
|
|
|
rte_flow_error_set(error, code, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(code));
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
return ops;
|
2016-12-21 14:51:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Check whether a flow rule can be created on a given port. */
|
|
|
|
int
|
2017-10-06 12:32:33 +00:00
|
|
|
rte_flow_validate(uint16_t port_id,
|
2016-12-21 14:51:17 +00:00
|
|
|
const struct rte_flow_attr *attr,
|
|
|
|
const struct rte_flow_item pattern[],
|
|
|
|
const struct rte_flow_action actions[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
2020-10-15 01:07:47 +00:00
|
|
|
int ret;
|
2016-12-21 14:51:17 +00:00
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
2020-10-15 01:07:47 +00:00
|
|
|
if (likely(!!ops->validate)) {
|
|
|
|
fts_enter(dev);
|
|
|
|
ret = ops->validate(dev, attr, pattern, actions, error);
|
|
|
|
fts_exit(dev);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
2017-10-12 12:19:15 +00:00
|
|
|
return rte_flow_error_set(error, ENOSYS,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
2016-12-21 14:51:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Create a flow rule on a given port. */
|
|
|
|
struct rte_flow *
|
2017-10-06 12:32:33 +00:00
|
|
|
rte_flow_create(uint16_t port_id,
|
2016-12-21 14:51:17 +00:00
|
|
|
const struct rte_flow_attr *attr,
|
|
|
|
const struct rte_flow_item pattern[],
|
|
|
|
const struct rte_flow_action actions[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
2018-01-20 21:12:23 +00:00
|
|
|
struct rte_flow *flow;
|
2016-12-21 14:51:17 +00:00
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return NULL;
|
2018-01-20 21:12:23 +00:00
|
|
|
if (likely(!!ops->create)) {
|
2020-10-15 01:07:47 +00:00
|
|
|
fts_enter(dev);
|
2018-01-20 21:12:23 +00:00
|
|
|
flow = ops->create(dev, attr, pattern, actions, error);
|
2020-10-15 01:07:47 +00:00
|
|
|
fts_exit(dev);
|
2018-01-20 21:12:23 +00:00
|
|
|
if (flow == NULL)
|
|
|
|
flow_err(port_id, -rte_errno, error);
|
|
|
|
return flow;
|
|
|
|
}
|
2016-12-21 14:51:17 +00:00
|
|
|
rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Destroy a flow rule on a given port. */
|
|
|
|
int
|
2017-10-06 12:32:33 +00:00
|
|
|
rte_flow_destroy(uint16_t port_id,
|
2016-12-21 14:51:17 +00:00
|
|
|
struct rte_flow *flow,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
2020-10-15 01:07:47 +00:00
|
|
|
int ret;
|
2016-12-21 14:51:17 +00:00
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
2020-10-15 01:07:47 +00:00
|
|
|
if (likely(!!ops->destroy)) {
|
|
|
|
fts_enter(dev);
|
|
|
|
ret = ops->destroy(dev, flow, error);
|
|
|
|
fts_exit(dev);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
2017-10-12 12:19:15 +00:00
|
|
|
return rte_flow_error_set(error, ENOSYS,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
2016-12-21 14:51:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Destroy all flow rules associated with a port. */
|
|
|
|
int
|
2017-10-06 12:32:33 +00:00
|
|
|
rte_flow_flush(uint16_t port_id,
|
2016-12-21 14:51:17 +00:00
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
2020-10-15 01:07:47 +00:00
|
|
|
int ret;
|
2016-12-21 14:51:17 +00:00
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
2020-10-15 01:07:47 +00:00
|
|
|
if (likely(!!ops->flush)) {
|
|
|
|
fts_enter(dev);
|
|
|
|
ret = ops->flush(dev, error);
|
|
|
|
fts_exit(dev);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
2017-10-12 12:19:15 +00:00
|
|
|
return rte_flow_error_set(error, ENOSYS,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
2016-12-21 14:51:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Query an existing flow rule. */
|
|
|
|
int
|
2017-10-06 12:32:33 +00:00
|
|
|
rte_flow_query(uint16_t port_id,
|
2016-12-21 14:51:17 +00:00
|
|
|
struct rte_flow *flow,
|
2018-04-26 17:29:19 +00:00
|
|
|
const struct rte_flow_action *action,
|
2016-12-21 14:51:17 +00:00
|
|
|
void *data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
2020-10-15 01:07:47 +00:00
|
|
|
int ret;
|
2016-12-21 14:51:17 +00:00
|
|
|
|
|
|
|
if (!ops)
|
|
|
|
return -rte_errno;
|
2020-10-15 01:07:47 +00:00
|
|
|
if (likely(!!ops->query)) {
|
|
|
|
fts_enter(dev);
|
|
|
|
ret = ops->query(dev, flow, action, data, error);
|
|
|
|
fts_exit(dev);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
2017-10-12 12:19:15 +00:00
|
|
|
return rte_flow_error_set(error, ENOSYS,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
2016-12-21 14:51:17 +00:00
|
|
|
}
|
2017-06-14 14:48:51 +00:00
|
|
|
|
|
|
|
/* Restrict ingress traffic to the defined flow rules. */
|
|
|
|
int
|
2017-10-06 12:32:33 +00:00
|
|
|
rte_flow_isolate(uint16_t port_id,
|
2017-06-14 14:48:51 +00:00
|
|
|
int set,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
2020-10-15 01:07:47 +00:00
|
|
|
int ret;
|
2017-06-14 14:48:51 +00:00
|
|
|
|
|
|
|
if (!ops)
|
|
|
|
return -rte_errno;
|
2020-10-15 01:07:47 +00:00
|
|
|
if (likely(!!ops->isolate)) {
|
|
|
|
fts_enter(dev);
|
|
|
|
ret = ops->isolate(dev, set, error);
|
|
|
|
fts_exit(dev);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
2017-10-12 12:19:15 +00:00
|
|
|
return rte_flow_error_set(error, ENOSYS,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Initialize flow error structure. */
|
|
|
|
int
|
|
|
|
rte_flow_error_set(struct rte_flow_error *error,
|
|
|
|
int code,
|
|
|
|
enum rte_flow_error_type type,
|
|
|
|
const void *cause,
|
|
|
|
const char *message)
|
|
|
|
{
|
|
|
|
if (error) {
|
|
|
|
*error = (struct rte_flow_error){
|
|
|
|
.type = type,
|
|
|
|
.cause = cause,
|
|
|
|
.message = message,
|
|
|
|
};
|
|
|
|
}
|
|
|
|
rte_errno = code;
|
|
|
|
return -code;
|
2017-06-14 14:48:51 +00:00
|
|
|
}
|
2017-07-07 00:08:31 +00:00
|
|
|
|
2018-04-19 10:07:44 +00:00
|
|
|
/** Pattern item specification types. */
|
2018-08-31 09:01:00 +00:00
|
|
|
enum rte_flow_conv_item_spec_type {
|
|
|
|
RTE_FLOW_CONV_ITEM_SPEC,
|
|
|
|
RTE_FLOW_CONV_ITEM_LAST,
|
|
|
|
RTE_FLOW_CONV_ITEM_MASK,
|
2018-04-19 10:07:44 +00:00
|
|
|
};
|
|
|
|
|
2018-08-31 09:01:00 +00:00
|
|
|
/**
|
|
|
|
* Copy pattern item specification.
|
|
|
|
*
|
|
|
|
* @param[out] buf
|
|
|
|
* Output buffer. Can be NULL if @p size is zero.
|
|
|
|
* @param size
|
|
|
|
* Size of @p buf in bytes.
|
|
|
|
* @param[in] item
|
|
|
|
* Pattern item to copy specification from.
|
|
|
|
* @param type
|
|
|
|
* Specification selector for either @p spec, @p last or @p mask.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Number of bytes needed to store pattern item specification regardless
|
|
|
|
* of @p size. @p buf contents are truncated to @p size if not large
|
|
|
|
* enough.
|
|
|
|
*/
|
2018-04-19 10:07:44 +00:00
|
|
|
static size_t
|
2018-08-31 09:01:00 +00:00
|
|
|
rte_flow_conv_item_spec(void *buf, const size_t size,
|
|
|
|
const struct rte_flow_item *item,
|
|
|
|
enum rte_flow_conv_item_spec_type type)
|
2017-07-07 00:08:31 +00:00
|
|
|
{
|
2018-08-31 09:01:00 +00:00
|
|
|
size_t off;
|
2018-05-21 11:44:28 +00:00
|
|
|
const void *data =
|
2018-08-31 09:01:00 +00:00
|
|
|
type == RTE_FLOW_CONV_ITEM_SPEC ? item->spec :
|
|
|
|
type == RTE_FLOW_CONV_ITEM_LAST ? item->last :
|
|
|
|
type == RTE_FLOW_CONV_ITEM_MASK ? item->mask :
|
2018-04-19 10:07:44 +00:00
|
|
|
NULL;
|
|
|
|
|
2017-07-07 00:08:31 +00:00
|
|
|
switch (item->type) {
|
2018-05-21 11:44:28 +00:00
|
|
|
union {
|
|
|
|
const struct rte_flow_item_raw *raw;
|
|
|
|
} spec;
|
|
|
|
union {
|
|
|
|
const struct rte_flow_item_raw *raw;
|
|
|
|
} last;
|
|
|
|
union {
|
|
|
|
const struct rte_flow_item_raw *raw;
|
|
|
|
} mask;
|
2017-07-07 00:08:31 +00:00
|
|
|
union {
|
|
|
|
const struct rte_flow_item_raw *raw;
|
2018-04-19 10:07:44 +00:00
|
|
|
} src;
|
|
|
|
union {
|
|
|
|
struct rte_flow_item_raw *raw;
|
|
|
|
} dst;
|
2018-08-31 09:01:00 +00:00
|
|
|
size_t tmp;
|
2017-07-07 00:08:31 +00:00
|
|
|
|
|
|
|
case RTE_FLOW_ITEM_TYPE_RAW:
|
2018-05-21 11:44:28 +00:00
|
|
|
spec.raw = item->spec;
|
|
|
|
last.raw = item->last ? item->last : item->spec;
|
|
|
|
mask.raw = item->mask ? item->mask : &rte_flow_item_raw_mask;
|
|
|
|
src.raw = data;
|
2018-04-19 10:07:44 +00:00
|
|
|
dst.raw = buf;
|
2018-08-31 09:01:00 +00:00
|
|
|
rte_memcpy(dst.raw,
|
|
|
|
(&(struct rte_flow_item_raw){
|
|
|
|
.relative = src.raw->relative,
|
|
|
|
.search = src.raw->search,
|
|
|
|
.reserved = src.raw->reserved,
|
|
|
|
.offset = src.raw->offset,
|
|
|
|
.limit = src.raw->limit,
|
|
|
|
.length = src.raw->length,
|
|
|
|
}),
|
|
|
|
size > sizeof(*dst.raw) ? sizeof(*dst.raw) : size);
|
|
|
|
off = sizeof(*dst.raw);
|
|
|
|
if (type == RTE_FLOW_CONV_ITEM_SPEC ||
|
|
|
|
(type == RTE_FLOW_CONV_ITEM_MASK &&
|
2018-05-21 11:44:28 +00:00
|
|
|
((spec.raw->length & mask.raw->length) >=
|
|
|
|
(last.raw->length & mask.raw->length))))
|
2018-08-31 09:01:00 +00:00
|
|
|
tmp = spec.raw->length & mask.raw->length;
|
2018-05-21 11:44:28 +00:00
|
|
|
else
|
2018-08-31 09:01:00 +00:00
|
|
|
tmp = last.raw->length & mask.raw->length;
|
|
|
|
if (tmp) {
|
|
|
|
off = RTE_ALIGN_CEIL(off, sizeof(*dst.raw->pattern));
|
|
|
|
if (size >= off + tmp)
|
|
|
|
dst.raw->pattern = rte_memcpy
|
|
|
|
((void *)((uintptr_t)dst.raw + off),
|
|
|
|
src.raw->pattern, tmp);
|
|
|
|
off += tmp;
|
2018-04-25 15:27:48 +00:00
|
|
|
}
|
2017-07-07 00:08:31 +00:00
|
|
|
break;
|
|
|
|
default:
|
2020-10-16 12:51:05 +00:00
|
|
|
/**
|
|
|
|
* allow PMD private flow item
|
|
|
|
*/
|
|
|
|
off = (int)item->type >= 0 ?
|
|
|
|
rte_flow_desc_item[item->type].size : sizeof(void *);
|
2018-08-31 09:01:00 +00:00
|
|
|
rte_memcpy(buf, data, (size > off ? off : size));
|
2017-07-07 00:08:31 +00:00
|
|
|
break;
|
|
|
|
}
|
2018-08-31 09:01:00 +00:00
|
|
|
return off;
|
2017-07-07 00:08:31 +00:00
|
|
|
}
|
|
|
|
|
2018-08-31 09:01:00 +00:00
|
|
|
/**
|
|
|
|
* Copy action configuration.
|
|
|
|
*
|
|
|
|
* @param[out] buf
|
|
|
|
* Output buffer. Can be NULL if @p size is zero.
|
|
|
|
* @param size
|
|
|
|
* Size of @p buf in bytes.
|
|
|
|
* @param[in] action
|
|
|
|
* Action to copy configuration from.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
 * Number of bytes needed to store the action configuration regardless
|
|
|
|
* of @p size. @p buf contents are truncated to @p size if not large
|
|
|
|
* enough.
|
|
|
|
*/
|
2018-04-19 10:07:44 +00:00
|
|
|
static size_t
|
2018-08-31 09:01:00 +00:00
|
|
|
rte_flow_conv_action_conf(void *buf, const size_t size,
|
|
|
|
const struct rte_flow_action *action)
|
2017-07-07 00:08:31 +00:00
|
|
|
{
|
2018-08-31 09:01:00 +00:00
|
|
|
size_t off;
|
2018-04-19 10:07:44 +00:00
|
|
|
|
2017-07-07 00:08:31 +00:00
|
|
|
switch (action->type) {
|
|
|
|
union {
|
|
|
|
const struct rte_flow_action_rss *rss;
|
2018-08-31 09:01:11 +00:00
|
|
|
const struct rte_flow_action_vxlan_encap *vxlan_encap;
|
|
|
|
const struct rte_flow_action_nvgre_encap *nvgre_encap;
|
2018-04-19 10:07:44 +00:00
|
|
|
} src;
|
|
|
|
union {
|
|
|
|
struct rte_flow_action_rss *rss;
|
2018-08-31 09:01:11 +00:00
|
|
|
struct rte_flow_action_vxlan_encap *vxlan_encap;
|
|
|
|
struct rte_flow_action_nvgre_encap *nvgre_encap;
|
2018-04-19 10:07:44 +00:00
|
|
|
} dst;
|
2018-08-31 09:01:00 +00:00
|
|
|
size_t tmp;
|
2018-08-31 09:01:11 +00:00
|
|
|
int ret;
|
2017-07-07 00:08:31 +00:00
|
|
|
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RSS:
|
2018-04-19 10:07:44 +00:00
|
|
|
src.rss = action->conf;
|
|
|
|
dst.rss = buf;
|
2018-08-31 09:01:00 +00:00
|
|
|
rte_memcpy(dst.rss,
|
|
|
|
(&(struct rte_flow_action_rss){
|
2018-04-25 15:27:52 +00:00
|
|
|
.func = src.rss->func,
|
2018-04-25 15:27:54 +00:00
|
|
|
.level = src.rss->level,
|
2018-04-25 15:27:50 +00:00
|
|
|
.types = src.rss->types,
|
|
|
|
.key_len = src.rss->key_len,
|
|
|
|
.queue_num = src.rss->queue_num,
|
2018-08-31 09:01:00 +00:00
|
|
|
}),
|
|
|
|
size > sizeof(*dst.rss) ? sizeof(*dst.rss) : size);
|
|
|
|
off = sizeof(*dst.rss);
|
app/testpmd: fix RSS key for flow API RSS rule
When a flow API RSS rule is issued in testpmd, the device RSS key is
unexpectedly changed to the testpmd default RSS key.
Consider the following usage with testpmd:
1. first, startup testpmd:
testpmd> show port 0 rss-hash key
RSS functions: all ipv4-frag ipv4-other ipv6-frag ipv6-other ip
RSS key: 6D5A56DA255B0EC24167253D43A38FB0D0CA2BCBAE7B30B477CB2DA38030F
20C6A42B73BBEAC01FA
2. create a rss rule
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
actions rss types ipv4-udp end queues end / end
3. show rss-hash key
testpmd> show port 0 rss-hash key
RSS functions: all ipv4-udp udp
RSS key: 74657374706D6427732064656661756C74205253532068617368206B65792
C206F76657272696465
This is because testpmd always sends a key with the RSS rule: if the
user provides a key as part of the rule, that key is used; if the user
doesn't provide one, the testpmd default key is sent to the PMDs,
which changes the RSS key programmed in the device.
There was a previous attempt to fix the same issue [1], but it was
reverted [2] because of a crash when 'key_len' is provided
without 'key'.
This patch follows the same approach as the initial fix [1] but also
addresses the crash.
After this change, the testpmd RSS key is NULL by default: if the user
provides a key as part of the rule it is used, otherwise no key is
sent to the PMDs at all.
[1]
Commit a4391f8bae85 ("app/testpmd: set default RSS key as null")
[2]
Commit f3698c3d09a6 ("app/testpmd: revert setting default RSS")
Fixes: d0ad8648b1c5 ("app/testpmd: fix RSS flow action configuration")
Cc: stable@dpdk.org
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
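For reference, this is how a hypothetical application-side RSS action
with no key looks; with the check below, a non-zero key_len with a
NULL key pointer no longer leads to copying from a NULL key. The names
follow the current rte_flow/ethdev definitions and the snippet is only
a sketch:

static const uint16_t rss_queues[] = { 0, 1 };
static const struct rte_flow_action_rss rss_conf = {
	.types = RTE_ETH_RSS_IPV4,
	.key = NULL,	/* keep the key already programmed in the device */
	.key_len = 0,
	.queue = rss_queues,
	.queue_num = 2,
};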
2020-10-21 10:07:10 +00:00
|
|
|
if (src.rss->key_len && src.rss->key) {
|
2018-08-31 09:01:00 +00:00
|
|
|
off = RTE_ALIGN_CEIL(off, sizeof(*dst.rss->key));
|
|
|
|
tmp = sizeof(*src.rss->key) * src.rss->key_len;
|
|
|
|
if (size >= off + tmp)
|
|
|
|
dst.rss->key = rte_memcpy
|
2018-04-25 15:27:48 +00:00
|
|
|
((void *)((uintptr_t)dst.rss + off),
|
2018-08-31 09:01:00 +00:00
|
|
|
src.rss->key, tmp);
|
|
|
|
off += tmp;
|
2018-04-19 10:07:44 +00:00
|
|
|
}
|
2018-04-25 15:27:50 +00:00
|
|
|
if (src.rss->queue_num) {
|
2018-08-31 09:01:00 +00:00
|
|
|
off = RTE_ALIGN_CEIL(off, sizeof(*dst.rss->queue));
|
|
|
|
tmp = sizeof(*src.rss->queue) * src.rss->queue_num;
|
|
|
|
if (size >= off + tmp)
|
|
|
|
dst.rss->queue = rte_memcpy
|
2018-04-25 15:27:50 +00:00
|
|
|
((void *)((uintptr_t)dst.rss + off),
|
2018-08-31 09:01:00 +00:00
|
|
|
src.rss->queue, tmp);
|
|
|
|
off += tmp;
|
2018-04-19 10:07:44 +00:00
|
|
|
}
|
2017-07-07 00:08:31 +00:00
|
|
|
break;
|
2018-08-31 09:01:11 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
|
|
|
|
src.vxlan_encap = action->conf;
|
|
|
|
dst.vxlan_encap = buf;
|
|
|
|
RTE_BUILD_BUG_ON(sizeof(*src.vxlan_encap) !=
|
|
|
|
sizeof(*src.nvgre_encap) ||
|
|
|
|
offsetof(struct rte_flow_action_vxlan_encap,
|
|
|
|
definition) !=
|
|
|
|
offsetof(struct rte_flow_action_nvgre_encap,
|
|
|
|
definition));
|
|
|
|
off = sizeof(*dst.vxlan_encap);
|
|
|
|
if (src.vxlan_encap->definition) {
|
|
|
|
off = RTE_ALIGN_CEIL
|
|
|
|
(off, sizeof(*dst.vxlan_encap->definition));
|
|
|
|
ret = rte_flow_conv
|
|
|
|
(RTE_FLOW_CONV_OP_PATTERN,
|
|
|
|
(void *)((uintptr_t)dst.vxlan_encap + off),
|
|
|
|
size > off ? size - off : 0,
|
|
|
|
src.vxlan_encap->definition, NULL);
|
|
|
|
if (ret < 0)
|
|
|
|
return 0;
|
|
|
|
if (size >= off + ret)
|
|
|
|
dst.vxlan_encap->definition =
|
|
|
|
(void *)((uintptr_t)dst.vxlan_encap +
|
|
|
|
off);
|
|
|
|
off += ret;
|
|
|
|
}
|
|
|
|
break;
|
2017-07-07 00:08:31 +00:00
|
|
|
default:
|
2020-10-16 12:51:05 +00:00
|
|
|
/**
|
|
|
|
* allow PMD private flow action
|
|
|
|
*/
|
|
|
|
off = (int)action->type >= 0 ?
|
|
|
|
rte_flow_desc_action[action->type].size : sizeof(void *);
|
2018-08-31 09:01:00 +00:00
|
|
|
rte_memcpy(buf, action->conf, (size > off ? off : size));
|
2017-07-07 00:08:31 +00:00
|
|
|
break;
|
|
|
|
}
|
2018-08-31 09:01:00 +00:00
|
|
|
return off;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Copy a list of pattern items.
|
|
|
|
*
|
|
|
|
* @param[out] dst
|
|
|
|
* Destination buffer. Can be NULL if @p size is zero.
|
|
|
|
* @param size
|
|
|
|
* Size of @p dst in bytes.
|
|
|
|
* @param[in] src
|
|
|
|
* Source pattern items.
|
|
|
|
* @param num
|
|
|
|
* Maximum number of pattern items to process from @p src or 0 to process
|
|
|
|
* the entire list. In both cases, processing stops after
|
|
|
|
* RTE_FLOW_ITEM_TYPE_END is encountered.
|
|
|
|
* @param[out] error
|
|
|
|
* Perform verbose error reporting if not NULL.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* A positive value representing the number of bytes needed to store
|
|
|
|
* pattern items regardless of @p size on success (@p buf contents are
|
|
|
|
* truncated to @p size if not large enough), a negative errno value
|
|
|
|
* otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
rte_flow_conv_pattern(struct rte_flow_item *dst,
|
|
|
|
const size_t size,
|
|
|
|
const struct rte_flow_item *src,
|
|
|
|
unsigned int num,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
uintptr_t data = (uintptr_t)dst;
|
|
|
|
size_t off;
|
|
|
|
size_t ret;
|
|
|
|
unsigned int i;
|
|
|
|
|
|
|
|
for (i = 0, off = 0; !num || i != num; ++i, ++src, ++dst) {
|
2020-10-16 12:51:05 +00:00
|
|
|
/**
|
|
|
|
* allow PMD private flow item
|
|
|
|
*/
|
|
|
|
if (((int)src->type >= 0) &&
|
|
|
|
((size_t)src->type >= RTE_DIM(rte_flow_desc_item) ||
|
|
|
|
!rte_flow_desc_item[src->type].name))
|
2018-08-31 09:01:00 +00:00
|
|
|
return rte_flow_error_set
|
|
|
|
(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, src,
|
|
|
|
"cannot convert unknown item type");
|
|
|
|
if (size >= off + sizeof(*dst))
|
|
|
|
*dst = (struct rte_flow_item){
|
|
|
|
.type = src->type,
|
|
|
|
};
|
|
|
|
off += sizeof(*dst);
|
|
|
|
if (!src->type)
|
|
|
|
num = i + 1;
|
|
|
|
}
|
|
|
|
num = i;
|
|
|
|
src -= num;
|
|
|
|
dst -= num;
|
|
|
|
do {
|
|
|
|
if (src->spec) {
|
|
|
|
off = RTE_ALIGN_CEIL(off, sizeof(double));
|
|
|
|
ret = rte_flow_conv_item_spec
|
|
|
|
((void *)(data + off),
|
|
|
|
size > off ? size - off : 0, src,
|
|
|
|
RTE_FLOW_CONV_ITEM_SPEC);
|
|
|
|
if (size && size >= off + ret)
|
|
|
|
dst->spec = (void *)(data + off);
|
|
|
|
off += ret;
|
|
|
|
|
|
|
|
}
|
|
|
|
if (src->last) {
|
|
|
|
off = RTE_ALIGN_CEIL(off, sizeof(double));
|
|
|
|
ret = rte_flow_conv_item_spec
|
|
|
|
((void *)(data + off),
|
|
|
|
size > off ? size - off : 0, src,
|
|
|
|
RTE_FLOW_CONV_ITEM_LAST);
|
|
|
|
if (size && size >= off + ret)
|
|
|
|
dst->last = (void *)(data + off);
|
|
|
|
off += ret;
|
|
|
|
}
|
|
|
|
if (src->mask) {
|
|
|
|
off = RTE_ALIGN_CEIL(off, sizeof(double));
|
|
|
|
ret = rte_flow_conv_item_spec
|
|
|
|
((void *)(data + off),
|
|
|
|
size > off ? size - off : 0, src,
|
|
|
|
RTE_FLOW_CONV_ITEM_MASK);
|
|
|
|
if (size && size >= off + ret)
|
|
|
|
dst->mask = (void *)(data + off);
|
|
|
|
off += ret;
|
|
|
|
}
|
|
|
|
++src;
|
|
|
|
++dst;
|
|
|
|
} while (--num);
|
|
|
|
return off;
|
|
|
|
}

/**
 * Copy a list of actions.
 *
 * @param[out] dst
 *   Destination buffer. Can be NULL if @p size is zero.
 * @param size
 *   Size of @p dst in bytes.
 * @param[in] src
 *   Source actions.
 * @param num
 *   Maximum number of actions to process from @p src or 0 to process the
 *   entire list. In both cases, processing stops after
 *   RTE_FLOW_ACTION_TYPE_END is encountered.
 * @param[out] error
 *   Perform verbose error reporting if not NULL.
 *
 * @return
 *   A positive value representing the number of bytes needed to store
 *   actions regardless of @p size on success (@p dst contents are truncated
 *   to @p size if not large enough), a negative errno value otherwise and
 *   rte_errno is set.
 */
static int
rte_flow_conv_actions(struct rte_flow_action *dst,
		      const size_t size,
		      const struct rte_flow_action *src,
		      unsigned int num,
		      struct rte_flow_error *error)
{
	uintptr_t data = (uintptr_t)dst;
	size_t off;
	size_t ret;
	unsigned int i;

	for (i = 0, off = 0; !num || i != num; ++i, ++src, ++dst) {
		/**
		 * allow PMD private flow action
		 */
		if (((int)src->type >= 0) &&
		    ((size_t)src->type >= RTE_DIM(rte_flow_desc_action) ||
		    !rte_flow_desc_action[src->type].name))
			return rte_flow_error_set
				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
				 src, "cannot convert unknown action type");
		if (size >= off + sizeof(*dst))
			*dst = (struct rte_flow_action){
				.type = src->type,
			};
		off += sizeof(*dst);
		if (!src->type)
			num = i + 1;
	}
	num = i;
	src -= num;
	dst -= num;
	do {
		if (src->conf) {
			off = RTE_ALIGN_CEIL(off, sizeof(double));
			ret = rte_flow_conv_action_conf
				((void *)(data + off),
				 size > off ? size - off : 0, src);
			if (size && size >= off + ret)
				dst->conf = (void *)(data + off);
			off += ret;
		}
		++src;
		++dst;
	} while (--num);
	return off;
}
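
/*
 * Usage sketch (illustrative only, variable names are hypothetical): an
 * application can duplicate a caller-owned action list through the public
 * wrapper defined below, querying the size first, e.g.
 *
 *	struct rte_flow_error err;
 *	int len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0,
 *				actions, &err);
 *	struct rte_flow_action *copy = len > 0 ? malloc(len) : NULL;
 *	if (copy != NULL)
 *		rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, copy, len,
 *			      actions, &err);
 */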

/**
 * Copy flow rule components.
 *
 * This comprises the flow rule descriptor itself, attributes, pattern and
 * actions list. NULL components in @p src are skipped.
 *
 * @param[out] dst
 *   Destination buffer. Can be NULL if @p size is zero.
 * @param size
 *   Size of @p dst in bytes.
 * @param[in] src
 *   Source flow rule descriptor.
 * @param[out] error
 *   Perform verbose error reporting if not NULL.
 *
 * @return
 *   A positive value representing the number of bytes needed to store all
 *   components including the descriptor regardless of @p size on success
 *   (@p dst contents are truncated to @p size if not large enough), a
 *   negative errno value otherwise and rte_errno is set.
 */
static int
rte_flow_conv_rule(struct rte_flow_conv_rule *dst,
		   const size_t size,
		   const struct rte_flow_conv_rule *src,
		   struct rte_flow_error *error)
{
	size_t off;
	int ret;

	rte_memcpy(dst,
		   (&(struct rte_flow_conv_rule){
			.attr = NULL,
			.pattern = NULL,
			.actions = NULL,
		   }),
		   size > sizeof(*dst) ? sizeof(*dst) : size);
	off = sizeof(*dst);
	if (src->attr_ro) {
		off = RTE_ALIGN_CEIL(off, sizeof(double));
		if (size && size >= off + sizeof(*dst->attr))
			dst->attr = rte_memcpy
				((void *)((uintptr_t)dst + off),
				 src->attr_ro, sizeof(*dst->attr));
		off += sizeof(*dst->attr);
	}
	if (src->pattern_ro) {
		off = RTE_ALIGN_CEIL(off, sizeof(double));
		ret = rte_flow_conv_pattern((void *)((uintptr_t)dst + off),
					    size > off ? size - off : 0,
					    src->pattern_ro, 0, error);
		if (ret < 0)
			return ret;
		if (size && size >= off + (size_t)ret)
			dst->pattern = (void *)((uintptr_t)dst + off);
		off += ret;
	}
	if (src->actions_ro) {
		off = RTE_ALIGN_CEIL(off, sizeof(double));
		ret = rte_flow_conv_actions((void *)((uintptr_t)dst + off),
					    size > off ? size - off : 0,
					    src->actions_ro, 0, error);
		if (ret < 0)
			return ret;
		if (size >= off + (size_t)ret)
			dst->actions = (void *)((uintptr_t)dst + off);
		off += ret;
	}
	return off;
}

/**
 * Retrieve the name of a pattern item/action type.
 *
 * @param is_action
 *   Nonzero when @p src represents an action type instead of a pattern item
 *   type.
 * @param is_ptr
 *   Nonzero to write string address instead of contents into @p dst.
 * @param[out] dst
 *   Destination buffer. Can be NULL if @p size is zero.
 * @param size
 *   Size of @p dst in bytes.
 * @param[in] src
 *   Depending on @p is_action, source pattern item or action type cast as a
 *   pointer.
 * @param[out] error
 *   Perform verbose error reporting if not NULL.
 *
 * @return
 *   A positive value representing the number of bytes needed to store the
 *   name or its address regardless of @p size on success (@p dst contents
 *   are truncated to @p size if not large enough), a negative errno value
 *   otherwise and rte_errno is set.
 */
static int
rte_flow_conv_name(int is_action,
		   int is_ptr,
		   char *dst,
		   const size_t size,
		   const void *src,
		   struct rte_flow_error *error)
{
	struct desc_info {
		const struct rte_flow_desc_data *data;
		size_t num;
	};
	static const struct desc_info info_rep[2] = {
		{ rte_flow_desc_item, RTE_DIM(rte_flow_desc_item), },
		{ rte_flow_desc_action, RTE_DIM(rte_flow_desc_action), },
	};
	const struct desc_info *const info = &info_rep[!!is_action];
	unsigned int type = (uintptr_t)src;

	if (type >= info->num)
		return rte_flow_error_set
			(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
			 "unknown object type to retrieve the name of");
	if (!is_ptr)
		return strlcpy(dst, info->data[type].name, size);
	if (size >= sizeof(const char **))
		*((const char **)dst) = info->data[type].name;
	return sizeof(const char **);
}
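
/*
 * Usage sketch (illustrative only): item/action names are retrieved through
 * rte_flow_conv() below by passing the type value cast as the source
 * pointer, e.g.
 *
 *	char name[64];
 *	struct rte_flow_error err;
 *	rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_NAME, name, sizeof(name),
 *		      (void *)(uintptr_t)RTE_FLOW_ITEM_TYPE_VXLAN, &err);
 */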

/** Helper function to convert flow API objects. */
int
rte_flow_conv(enum rte_flow_conv_op op,
	      void *dst,
	      size_t size,
	      const void *src,
	      struct rte_flow_error *error)
{
	switch (op) {
		const struct rte_flow_attr *attr;

	case RTE_FLOW_CONV_OP_NONE:
		return 0;
	case RTE_FLOW_CONV_OP_ATTR:
		attr = src;
		if (size > sizeof(*attr))
			size = sizeof(*attr);
		rte_memcpy(dst, attr, size);
		return sizeof(*attr);
	case RTE_FLOW_CONV_OP_ITEM:
		return rte_flow_conv_pattern(dst, size, src, 1, error);
	case RTE_FLOW_CONV_OP_ACTION:
		return rte_flow_conv_actions(dst, size, src, 1, error);
	case RTE_FLOW_CONV_OP_PATTERN:
		return rte_flow_conv_pattern(dst, size, src, 0, error);
	case RTE_FLOW_CONV_OP_ACTIONS:
		return rte_flow_conv_actions(dst, size, src, 0, error);
	case RTE_FLOW_CONV_OP_RULE:
		return rte_flow_conv_rule(dst, size, src, error);
	case RTE_FLOW_CONV_OP_ITEM_NAME:
		return rte_flow_conv_name(0, 0, dst, size, src, error);
	case RTE_FLOW_CONV_OP_ACTION_NAME:
		return rte_flow_conv_name(1, 0, dst, size, src, error);
	case RTE_FLOW_CONV_OP_ITEM_NAME_PTR:
		return rte_flow_conv_name(0, 1, dst, size, src, error);
	case RTE_FLOW_CONV_OP_ACTION_NAME_PTR:
		return rte_flow_conv_name(1, 1, dst, size, src, error);
	}
	return rte_flow_error_set
		(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
		 "unknown object conversion operation");
}
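
/*
 * Usage sketch (illustrative only, variable names are hypothetical): a
 * complete rule is usually converted in two passes, first to query the
 * required size, then to fill an allocated buffer, e.g.
 *
 *	struct rte_flow_conv_rule rule = {
 *		.attr_ro = attr,
 *		.pattern_ro = pattern,
 *		.actions_ro = actions,
 *	};
 *	struct rte_flow_error err;
 *	int len = rte_flow_conv(RTE_FLOW_CONV_OP_RULE, NULL, 0, &rule, &err);
 *	struct rte_flow_conv_rule *copy = len > 0 ? malloc(len) : NULL;
 *	if (copy != NULL)
 *		rte_flow_conv(RTE_FLOW_CONV_OP_RULE, copy, len, &rule, &err);
 */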

/** Store a full rte_flow description. */
size_t
rte_flow_copy(struct rte_flow_desc *desc, size_t len,
	      const struct rte_flow_attr *attr,
	      const struct rte_flow_item *items,
	      const struct rte_flow_action *actions)
{
	/*
	 * Overlap struct rte_flow_conv with struct rte_flow_desc in order
	 * to convert the former to the latter without wasting space.
	 */
	struct rte_flow_conv_rule *dst =
		len ?
		(void *)((uintptr_t)desc +
			 (offsetof(struct rte_flow_desc, actions) -
			  offsetof(struct rte_flow_conv_rule, actions))) :
		NULL;
	size_t dst_size =
		len > sizeof(*desc) - sizeof(*dst) ?
		len - (sizeof(*desc) - sizeof(*dst)) :
		0;
	struct rte_flow_conv_rule src = {
		.attr_ro = NULL,
		.pattern_ro = items,
		.actions_ro = actions,
	};
	int ret;

	RTE_BUILD_BUG_ON(sizeof(struct rte_flow_desc) <
			 sizeof(struct rte_flow_conv_rule));
	if (dst_size &&
	    (&dst->pattern != &desc->items ||
	     &dst->actions != &desc->actions ||
	     (uintptr_t)(dst + 1) != (uintptr_t)(desc + 1))) {
		rte_errno = EINVAL;
		return 0;
	}
	ret = rte_flow_conv(RTE_FLOW_CONV_OP_RULE, dst, dst_size, &src, NULL);
	if (ret < 0)
		return 0;
	ret += sizeof(*desc) - sizeof(*dst);
	rte_memcpy(desc,
		   (&(struct rte_flow_desc){
			.size = ret,
			.attr = *attr,
			.items = dst_size ? dst->pattern : NULL,
			.actions = dst_size ? dst->actions : NULL,
		   }),
		   len > sizeof(*desc) ? sizeof(*desc) : len);
	return ret;
}
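
/*
 * Usage sketch (illustrative only): rte_flow_copy() follows the same
 * two-pass convention as rte_flow_conv(), e.g.
 *
 *	size_t len = rte_flow_copy(NULL, 0, &attr, pattern, actions);
 *	struct rte_flow_desc *desc = len ? malloc(len) : NULL;
 *	if (desc != NULL)
 *		rte_flow_copy(desc, len, &attr, pattern, actions);
 */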

int
rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow,
		  FILE *file, struct rte_flow_error *error)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
	int ret;

	if (unlikely(!ops))
		return -rte_errno;
	if (likely(!!ops->dev_dump)) {
		fts_enter(dev);
		ret = ops->dev_dump(dev, flow, file, error);
		fts_exit(dev);
		return flow_err(port_id, ret, error);
	}
	return rte_flow_error_set(error, ENOSYS,
				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				  NULL, rte_strerror(ENOSYS));
}
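
/*
 * Usage sketch (illustrative only): dumping the flows of a port to the
 * standard output could look like
 *
 *	struct rte_flow_error err;
 *	if (rte_flow_dev_dump(port_id, NULL, stdout, &err) < 0)
 *		printf("dump failed: %s\n", err.message ? err.message : "");
 *
 * where a NULL flow requests a dump of all flows on the port.
 */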

int
rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
			uint32_t nb_contexts, struct rte_flow_error *error)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
	int ret;

	if (unlikely(!ops))
		return -rte_errno;
	if (likely(!!ops->get_aged_flows)) {
		fts_enter(dev);
		ret = ops->get_aged_flows(dev, contexts, nb_contexts, error);
		fts_exit(dev);
		return flow_err(port_id, ret, error);
	}
	return rte_flow_error_set(error, ENOTSUP,
				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				  NULL, rte_strerror(ENOTSUP));
}
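
/*
 * Usage sketch (illustrative only): callers typically query the number of
 * aged-out flows first by passing nb_contexts as 0, then retrieve the
 * associated contexts, e.g.
 *
 *	struct rte_flow_error err;
 *	int n = rte_flow_get_aged_flows(port_id, NULL, 0, &err);
 *	if (n > 0) {
 *		void **ctx = calloc(n, sizeof(*ctx));
 *		if (ctx != NULL)
 *			n = rte_flow_get_aged_flows(port_id, ctx, n, &err);
 *	}
 */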

struct rte_flow_action_handle *
rte_flow_action_handle_create(uint16_t port_id,
			      const struct rte_flow_indir_action_conf *conf,
			      const struct rte_flow_action *action,
			      struct rte_flow_error *error)
{
	struct rte_flow_action_handle *handle;
	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);

	if (unlikely(!ops))
		return NULL;
	if (unlikely(!ops->action_handle_create)) {
		rte_flow_error_set(error, ENOSYS,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
				   rte_strerror(ENOSYS));
		return NULL;
	}
ethdev: add shared actions to flow API
Introduce an extension of the flow action API enabling sharing of a single
rte_flow_action in multiple flows. The API is intended for PMDs, where
multiple HW-offloaded flows can reuse the same HW essence/object
representing a flow action, and modification of such an essence/object
affects all the rules using it.

Motivation and example
===
Adding or removing one or more queues to an RSS action used by multiple
flow rules imposes a per-rule cost with the current DPDK flow API; for
each flow sharing the cloned RSS action the scenario requires:
- call `rte_flow_destroy()`
- call `rte_flow_create()` with the modified RSS action
The API for sharing an action and updating it in place brings two benefits:
- reduce the overhead of reconfiguring multiple RSS flow rules
- optimize resource utilization by sharing an action across multiple flows

Change description
===
Shared action
===
In order to represent a flow action shared by multiple flows, the new
action type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
rte_flow_action_type`).
The introduced API decouples an action from any specific flow and enables
sharing of a single action, by its handle, across multiple flows.

Shared action create/use/destroy
===
A shared action may be reused by several flow rules, or by none, at any
given moment; i.e. a shared action resides outside of the context of any
flow. Shared actions represent HW resources/objects used to implement
action offloading.
The API for shared action creation (see `rte_flow_shared_action_create()`):
- should allocate HW resources and perform the related initializations
  required for the shared action implementation;
- should make the necessary preparations to maintain shared access to the
  action resources, configuration and state.
The API for shared action destruction (see
`rte_flow_shared_action_destroy()`) should release HW resources and perform
the related cleanups required for the shared action implementation.
In order to share a flow action, reuse the handle of type
`struct rte_flow_shared_action` returned by rte_flow_shared_action_create()
as the `conf` field of `struct rte_flow_action` (see the "example" section).
If a shared action is not used by any flow rule, all resources allocated
for it can be released by rte_flow_shared_action_destroy() (see the
"example" section). The shared action handle passed as an argument to the
destroy API must not be used any further; the result of such usage is
undefined.

Shared action re-configuration
===
Shared action behavior, defined by its configuration, can be updated via
rte_flow_shared_action_update() (see the "example" section). The shared
action update operation modifies the HW-related resources/objects allocated
at action creation. The number of operations performed by the update should
not depend on the number of flows sharing the related action. On return of
the shared action update API, the action behaves according to the updated
configuration for all flows sharing the action.

Shared action query
===
A separate API is provided to query shared action state (see
rte_flow_shared_action_query()). Taking a counter as an example: the query
returns a value aggregating all counter increments across all flow rules
sharing the counter. This API does not query the shared action
configuration, since that is controlled by rte_flow_shared_action_create()
and rte_flow_shared_action_update() and is not supposed to change by other
means.

example
===
struct rte_flow_action actions[2];
struct rte_flow_shared_action_conf conf;
struct rte_flow_action action;
/* skipped: initialize conf and action */
struct rte_flow_shared_action *handle =
        rte_flow_shared_action_create(port_id, &conf, &action, &error);
actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
actions[0].conf = handle;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
/* skipped: init attr0 & pattern0 args */
struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
                                         actions, error);
/* create more rules reusing the shared action */
struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
                                         actions, error);
/* skipped: for flows 2 till N */
struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
                                         actions, error);
/* update the shared action */
struct rte_flow_action updated_action;
/*
 * skipped: initialize updated_action according to the desired action
 * configuration change
 */
rte_flow_shared_action_update(port_id, handle, &updated_action, error);
/*
 * from now on all flows 0 till N act according to the configuration of
 * updated_action
 */
/* skipped: destroy all flows 0 till N */
rte_flow_shared_action_destroy(port_id, handle, error);
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-14 11:40:14 +00:00

        handle = ops->action_handle_create(&rte_eth_devices[port_id],
                                           conf, action, error);
        if (handle == NULL)
                flow_err(port_id, -rte_errno, error);
        return handle;
}
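rte_flow_action_handle_create() above simply dispatches to the driver's
action_handle_create callback in struct rte_flow_ops. Purely as an
illustration, below is a skeleton of how a PMD might provide the three
callbacks used in this section. The member names come from the call sites
in this file; the parameter types, notably the conf struct, are
assumptions, and the authoritative prototypes live in rte_flow_driver.h.

/* Hypothetical PMD skeleton; prototypes inferred from the call sites here. */
#include <errno.h>
#include <rte_flow_driver.h>

static struct rte_flow_action_handle *
dummy_action_handle_create(struct rte_eth_dev *dev,
                           const struct rte_flow_indir_action_conf *conf,
                           const struct rte_flow_action *action,
                           struct rte_flow_error *error)
{
        /* allocate and program the HW object backing the indirect action */
        (void)dev; (void)conf; (void)action;
        rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
                           NULL, "not implemented in this sketch");
        return NULL;
}

static int
dummy_action_handle_destroy(struct rte_eth_dev *dev,
                            struct rte_flow_action_handle *handle,
                            struct rte_flow_error *error)
{
        /* release the HW object; 0 on success, negative errno otherwise */
        (void)dev; (void)handle; (void)error;
        return 0;
}

static int
dummy_action_handle_update(struct rte_eth_dev *dev,
                           struct rte_flow_action_handle *handle,
                           const void *update,
                           struct rte_flow_error *error)
{
        /* re-program the HW object according to "update" */
        (void)dev; (void)handle; (void)update; (void)error;
        return 0;
}

static const struct rte_flow_ops dummy_flow_ops = {
        .action_handle_create = dummy_action_handle_create,
        .action_handle_destroy = dummy_action_handle_destroy,
        .action_handle_update = dummy_action_handle_update,
};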

int
rte_flow_action_handle_destroy(uint16_t port_id,
                               struct rte_flow_action_handle *handle,
                               struct rte_flow_error *error)
{
        int ret;
        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);

        if (unlikely(!ops))
                return -rte_errno;
        if (unlikely(!ops->action_handle_destroy))
                return rte_flow_error_set(error, ENOSYS,
                                          RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
                                          NULL, rte_strerror(ENOSYS));
        ret = ops->action_handle_destroy(&rte_eth_devices[port_id],
                                         handle, error);
        return flow_err(port_id, ret, error);
}
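rte_flow_action_handle_update(), defined next, deliberately takes a generic
const void *update. As the commit message above explains, a driver may
accept either a plain struct rte_flow_action or a driver-defined wrapper
carrying mask bits for a partial update. A purely hypothetical illustration
of such a wrapper for an indirect RSS action follows (no structure with
this name exists in DPDK; port_id, handle and error reuse the names from
the sketch near the top of this section):

/* Hypothetical, driver-defined update descriptor; not a DPDK structure. */
struct dummy_indirect_rss_update {
        uint32_t update_queues:1;        /* mask bit: queue list is valid */
        uint32_t update_key:1;           /* mask bit: hash key is valid */
        struct rte_flow_action_rss rss;  /* new values for the masked fields */
};

uint16_t new_queues[] = { 0, 1, 2, 3 };
struct dummy_indirect_rss_update upd = {
        .update_queues = 1,
        .rss = { .queue = new_queues, .queue_num = RTE_DIM(new_queues) },
};
/* passed through the generic pointer and interpreted by the PMD */
rte_flow_action_handle_update(port_id, handle, &upd, &error);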

int
rte_flow_action_handle_update(uint16_t port_id,
                              struct rte_flow_action_handle *handle,
                              const void *update,
                              struct rte_flow_error *error)
{
        int ret;
        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);

        if (unlikely(!ops))
                return -rte_errno;
        if (unlikely(!ops->action_handle_update))
                return rte_flow_error_set(error, ENOSYS,
                                          RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
                                          NULL, rte_strerror(ENOSYS));
ethdev: introduce indirect flow action
Right now, the rte_flow_shared_action_* APIs are used for some shared
actions, such as RSS and count. The shared action must be created before
it is used inside a flow. These shared actions sometimes are not
really shared but just indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that can be created and stored
within a flow rule. Such an action is tied to its flow rule and
cannot be reused.
2. the indirect action, previously named shared action. It is
created from a direct action, such as count or RSS, and then used
in flow rules through an object handle. The PMD takes care
of resolving the indirect action to the underlying direct action
when it is referenced.
The indirect action is accessed (update / query) without any flow rule,
just via the action object handle. For example, querying or
resetting a counter can be done outside of any flow using this
counter; only the handle of the counter action object is
required.
The indirect action object can be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque, defined by
each driver and possibly different per direct action type.
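As a rough sketch of the new usage (names per this change; the
`struct rte_flow_indir_action_conf` configuration, the COUNT/INDIRECT action
types and `port_id` are assumed here, and error checking is omitted):
struct rte_flow_indir_action_conf conf = { .ingress = 1 };
struct rte_flow_action count = { .type = RTE_FLOW_ACTION_TYPE_COUNT };
struct rte_flow_query_count stats = { 0 };
struct rte_flow_error error;
/* Create the indirect action object once, outside of any flow rule. */
struct rte_flow_action_handle *handle =
        rte_flow_action_handle_create(port_id, &conf, &count, &error);
/*
 * Rules reference it with RTE_FLOW_ACTION_TYPE_INDIRECT and conf = handle;
 * the counter can then be queried at any time using only the handle.
 */
rte_flow_action_handle_query(port_id, handle, &stats, &error);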
The old name "shared" is therefore somewhat misleading and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to follow the change.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to keep the
explanation correct.
The parameter of the "update" interface is also changed. A generic
pointer replaces the rte_flow_action struct pointer for the following
reasons:
1. Some actions may not support field updates. Taking a counter as an
example, the only "update" supported should be the reset, so
passing an rte_flow_action struct pointer is meaningless and
there is not even a corresponding action struct. What's more,
if more than one operation should be supported for some other
action, such a pointer parameter may not meet the need.
2. Some actions may need a conditional or partial update; the current
parameter does not provide a way to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same rte_flow_action struct - in order not to
break the current driver implementations - or some wrapper
structure with bit masks indicating which parts are to be
updated, depending on the real needs of the corresponding direct
action. For different direct actions, the update structures of the
indirect action objects will differ (see the RSS sketch below).
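As an RSS sketch of such an update (whether a plain rte_flow_action is
accepted as the update pointer is PMD-specific, per the paragraph above;
the queue array, `handle` and `error` are placeholders):
uint16_t new_queues[] = { 0, 1, 2, 3 };
struct rte_flow_action_rss rss_conf = {
        .queue = new_queues,
        .queue_num = 4,
};
struct rte_flow_action rss_update = {
        .type = RTE_FLOW_ACTION_TYPE_RSS,
        .conf = &rss_conf,
};
/* Reconfigure the shared RSS through the generic update pointer. */
rte_flow_action_handle_update(port_id, handle, &rss_update, &error);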
All the underlying PMD callbacks will be moved to these new APIs.
RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed to use
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to a generic pointer, the mlx5 PMD that uses these
APIs needs to be adapted to the new APIs as well.
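For reference, a sketch of the resulting rename, with `conf`, `action`,
`update_data` and `error` as placeholders:
/* 20.11 shared action API: */
struct rte_flow_shared_action *sa =
        rte_flow_shared_action_create(port_id, &conf, &action, &error);
rte_flow_shared_action_update(port_id, sa, &action, &error);
rte_flow_shared_action_destroy(port_id, sa, &error);
/* Indirect action API introduced here (update takes a generic pointer): */
struct rte_flow_action_handle *ah =
        rte_flow_action_handle_create(port_id, &conf, &action, &error);
rte_flow_action_handle_update(port_id, ah, update_data, &error);
rte_flow_action_handle_destroy(port_id, ah, &error);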
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2021-04-19 14:38:29 +00:00
|
|
|
ret = ops->action_handle_update(&rte_eth_devices[port_id], handle,
|
ethdev: add shared actions to flow API
2020-10-14 11:40:14 +00:00
|
|
|
update, error);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
ethdev: introduce indirect flow action
2021-04-19 14:38:29 +00:00
|
|
|
rte_flow_action_handle_query(uint16_t port_id,
|
|
|
|
const struct rte_flow_action_handle *handle,
|
ethdev: add shared actions to flow API
2020-10-14 11:40:14 +00:00
|
|
|
void *data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
ethdev: introduce indirect flow action
2021-04-19 14:38:29 +00:00
|
|
|
if (unlikely(!ops->action_handle_query))
|
ethdev: add shared actions to flow API
2020-10-14 11:40:14 +00:00
|
|
|
return rte_flow_error_set(error, ENOSYS,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOSYS));
|
ethdev: introduce indirect flow action
2021-04-19 14:38:29 +00:00
|
|
|
ret = ops->action_handle_query(&rte_eth_devices[port_id], handle,
|
ethdev: add shared actions to flow API
2020-10-14 11:40:14 +00:00
|
|
|
data, error);
|
|
|
|
return flow_err(port_id, ret, error);
|
|
|
|
}
|
ethdev: add tunnel offload model
The rte_flow API provides the building blocks for vendor-agnostic flow
classification offloads. The rte_flow "patterns" and "actions"
primitives are fine-grained, thus giving DPDK applications the
flexibility to offload network stacks and complex pipelines.
Applications wishing to offload tunneled traffic are required to use
the rte_flow primitives, such as group, meta, mark, tag, and others to
model their high-level objects. The hardware model design for
high-level software objects is not trivial. Furthermore, an optimal
design is often vendor-specific.
When hardware offloads tunneled traffic in multi-group logic,
partially offloaded packets may arrive at the application after they
were modified in hardware. In this case, the application may need to
restore the original packet headers. Consider the following sequence:
the application decaps a packet in one group and jumps to a second
group where it tries to match on a 5-tuple; that match will miss and send
the packet to the application. In this case, the application does not
receive the original packet but a modified one. Also, in this case,
the application cannot match on the outer header fields, such as the VXLAN
vni and 5-tuple.
There are several possible ways to use rte_flow "patterns" and
"actions" to resolve the issues above. For example:
1 Mapping headers to hardware registers using the
rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
2 Applying the decap only at the last offload stage, after all the
"patterns" have been matched and the packet is fully offloaded.
Every approach has its pros and cons and is highly dependent on the
hardware vendor. For example, some hardware may have a limited number
of registers while other hardware may not support inner actions and
must decap before accessing inner headers.
The tunnel offload model resolves these issues. The model goals are:
1 Provide a unified application API to offload tunneled traffic that
is capable of matching on outer headers after decap.
2 Allow the application to restore the outer header of partially
offloaded packets.
The tunnel offload model does not introduce new elements to the
existing RTE flow model and is implemented as a set of helper
functions.
For the application to work with the tunnel offload API it
has to adjust flow rules in multi-table tunnel offload in the
following way:
1 Remove the explicit call to the decap action and replace it with PMD
actions obtained from the rte_flow_tunnel_decap_set() helper.
2 Add PMD items obtained from the rte_flow_tunnel_match() helper to all
other rules in the tunnel offload sequence.
VXLAN Code example:
Assume the application needs to do inner NAT on VXLAN packets.
The first rule in group 0:
flow create <port id> ingress group 0
pattern eth / ipv4 / udp dst is 4789 / vxlan / end
actions {pmd actions} / jump group 3 / end
The first VXLAN packet that arrives matches the rule in group 0 and
jumps to group 3. In group 3 the packet will miss since there is no
flow to match and will be sent to the application. The application will
call rte_flow_get_restore_info() to get the packet outer header.
The application will insert a new rule in group 3 to match outer and inner
headers:
flow create <port id> ingress group 3
pattern {pmd items} / eth / ipv4 dst is 172.10.10.1 /
udp dst 4789 / vxlan vni is 10 /
ipv4 dst is 184.1.2.3 / end
actions set_ipv4_dst 186.1.1.1 / queue index 3 / end
The result of these rules is that a VXLAN packet with vni=10, outer
IPv4 dst=172.10.10.1 and inner IPv4 dst=184.1.2.3 will be received
decapped on queue 3 with IPv4 dst=186.1.1.1.
Note: The packet in group 3 is considered decapped. All actions in
that group will be done on the header that was inner before decap. The
application may specify an outer header to be matched on. It is the PMD's
responsibility to translate these items to outer metadata.
API usage:
/**
* 1. Initiate RTE flow tunnel object
*/
const struct rte_flow_tunnel tunnel = {
.type = RTE_FLOW_ITEM_TYPE_VXLAN,
.tun_id = 10,
};
/**
* 2. Obtain PMD tunnel actions
*
* pmd_actions is an intermediate variable application uses to
* compile actions array
*/
struct rte_flow_action *pmd_actions;
uint32_t num_pmd_actions;
rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
&num_pmd_actions, &error);
/**
* 3. offload the first rule
* matching on VXLAN traffic and jumps to group 3
* (implicitly decaps packet)
*/
app_actions = jump group 3
rule_items = app_items; /** eth / ipv4 / udp / vxlan */
rule_actions = { pmd_actions, app_actions };
attr.group = 0;
flow_1 = rte_flow_create(port_id, &attr,
rule_items, rule_actions, &error);
/**
* 4. after flow creation application does not need to keep the
* tunnel action resources.
*/
rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
num_pmd_actions, &error);
/**
* 5. After partially offloaded packet miss because there was no
* matching rule handle miss on group 3
*/
struct rte_flow_restore_info info;
rte_flow_get_restore_info(port_id, mbuf, &info, &error);
/**
* 6. Offload NAT rule:
*/
app_items = { eth / ipv4 dst is 172.10.10.1 / udp dst 4789 /
vxlan vni is 10 / ipv4 dst is 184.1.2.3 }
app_actions = { set_ipv4_dst 186.1.1.1 / queue index 3 }
rte_flow_tunnel_match(port_id, &info.tunnel, &pmd_items,
&num_pmd_items, &error);
rule_items = {pmd_items, app_items};
rule_actions = app_actions;
attr.group = info.group_id;
flow_2 = rte_flow_create(port_id, &attr,
rule_items, rule_actions, &error);
/**
* 7. Release PMD items after rule creation
*/
rte_flow_tunnel_item_release(port_id,
pmd_items, num_pmd_items, &error);
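The pseudocode above glosses over how `rule_actions = { pmd_actions,
app_actions }` is built; in C the two arrays must be concatenated
explicitly. A minimal sketch, assuming `app_actions` is an application
array of `num_app_actions` entries ending with RTE_FLOW_ACTION_TYPE_END
(needs <stdlib.h> and <string.h>; error checks omitted):
struct rte_flow_action *rule_actions =
        malloc((num_pmd_actions + num_app_actions) * sizeof(*rule_actions));
/* PMD tunnel actions first, then the application actions (END-terminated). */
memcpy(rule_actions, pmd_actions, num_pmd_actions * sizeof(*rule_actions));
memcpy(rule_actions + num_pmd_actions, app_actions,
       num_app_actions * sizeof(*rule_actions));
flow_1 = rte_flow_create(port_id, &attr, rule_items, rule_actions, &error);
free(rule_actions);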
References
1. https://mails.dpdk.org/archives/dev/2020-June/index.html
Signed-off-by: Eli Britstein <elibr@mellanox.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 12:51:06 +00:00
|
|
|
|
|
|
|
int
|
|
|
|
rte_flow_tunnel_decap_set(uint16_t port_id,
|
|
|
|
struct rte_flow_tunnel *tunnel,
|
|
|
|
struct rte_flow_action **actions,
|
|
|
|
uint32_t *num_of_actions,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
|
|
|
if (likely(!!ops->tunnel_decap_set)) {
|
|
|
|
return flow_err(port_id,
|
|
|
|
ops->tunnel_decap_set(dev, tunnel, actions,
|
|
|
|
num_of_actions, error),
|
|
|
|
error);
|
|
|
|
}
|
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOTSUP));
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
rte_flow_tunnel_match(uint16_t port_id,
|
|
|
|
struct rte_flow_tunnel *tunnel,
|
|
|
|
struct rte_flow_item **items,
|
|
|
|
uint32_t *num_of_items,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
|
|
|
if (likely(!!ops->tunnel_match)) {
|
|
|
|
return flow_err(port_id,
|
|
|
|
ops->tunnel_match(dev, tunnel, items,
|
|
|
|
num_of_items, error),
|
|
|
|
error);
|
|
|
|
}
|
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOTSUP));
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
rte_flow_get_restore_info(uint16_t port_id,
|
|
|
|
struct rte_mbuf *m,
|
|
|
|
struct rte_flow_restore_info *restore_info,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
|
|
|
if (likely(!!ops->get_restore_info)) {
|
|
|
|
return flow_err(port_id,
|
|
|
|
ops->get_restore_info(dev, m, restore_info,
|
|
|
|
error),
|
|
|
|
error);
|
|
|
|
}
|
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOTSUP));
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
rte_flow_tunnel_action_decap_release(uint16_t port_id,
|
|
|
|
struct rte_flow_action *actions,
|
|
|
|
uint32_t num_of_actions,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
2020-10-18 12:15:23 +00:00
|
|
|
if (likely(!!ops->tunnel_action_decap_release)) {
|
ethdev: add tunnel offload model
2020-10-16 12:51:06 +00:00
|
|
|
return flow_err(port_id,
|
2020-10-18 12:15:23 +00:00
|
|
|
ops->tunnel_action_decap_release(dev, actions,
|
|
|
|
num_of_actions,
|
|
|
|
error),
|
ethdev: add tunnel offload model
2020-10-16 12:51:06 +00:00
|
|
|
error);
|
|
|
|
}
|
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL, rte_strerror(ENOTSUP));
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
rte_flow_tunnel_item_release(uint16_t port_id,
|
|
|
|
struct rte_flow_item *items,
|
|
|
|
uint32_t num_of_items,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
|
|
|
|
const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
|
|
|
|
|
|
|
|
if (unlikely(!ops))
|
|
|
|
return -rte_errno;
|
2020-10-18 12:15:23 +00:00
|
|
|
if (likely(!!ops->tunnel_item_release)) {
|
		return flow_err(port_id,
				ops->tunnel_item_release(dev, items,
							 num_of_items, error),
				error);
	}
	return rte_flow_error_set(error, ENOTSUP,
				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				  NULL, rte_strerror(ENOTSUP));
}
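When a PMD does not provide the tunnel_item_release callback, the function
above fails with ENOTSUP reported through the rte_flow_error argument and a
negative return value. A minimal application-side wrapper (a sketch only;
app_release_pmd_items and the logging choice are assumptions for this
illustration) could look like:

	#include <stdio.h>
	#include <rte_errno.h>
	#include <rte_flow.h>

	/* Assumed application helper: release PMD items and log on failure. */
	static void
	app_release_pmd_items(uint16_t port_id, struct rte_flow_item *pmd_items,
			      uint32_t num_pmd_items)
	{
		struct rte_flow_error err;

		if (rte_flow_tunnel_item_release(port_id, pmd_items,
						 num_pmd_items, &err) < 0)
			printf("tunnel item release failed: %s\n",
			       err.message ? err.message :
			       rte_strerror(rte_errno));
	}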