net/mlx5: introduce hardware steering operation
The ConnectX steering is a hardware lookup mechanism that accesses flow
tables, matches packets against the rules, and performs the specified actions.
Historically, the mlx5 PMD has implemented several software engines to manage
the steering hardware facility:
- FW Steering - Verbs/Direct Verbs, uses FW calls to manage flows
- SW Steering - DevX/mlx5dv, uses WQEs to access table memory directly
However, these engines still have some disadvantages:
- performance is limited, since firmware must be invoked either to
  manage the entire flow or to handle some internal steering objects
- organizing and preparing the flow infrastructure (actions, matchers,
  groups, etc.) at flow insertion time inevitably slows down flow
  insertion
- security, exposing the low-level steering entries directly to
  userspace may cause security risks
A new hardware WQE-based steering operation with the codename "HW Steering"
is introduced to eliminate these security risks. It takes advantage of the
recently introduced asynchronous queue-based rte_flow APIs to prepare
everything in advance and achieve a high insertion rate.
In this new HW steering engine, the original SW steering rte_flow API is
not supported in the first implementation; only the new asynchronous
queue-based flow operations are supported. A new steering mode value for
the dv_flow_en parameter is introduced so that users can engage the new
steering engine.
This commit adds the basic driver operation.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
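The snippet below is an illustrative sketch only, not part of this patch: it shows how an application might engage the new engine, assuming the dv_flow_en=2 device argument selects HW steering and using the standard asynchronous rte_flow API (rte_flow_configure(), later rte_flow_async_create()/rte_flow_push()). The queue and counter sizes are arbitrary example values.

/* Illustrative usage sketch (example values, not part of this patch). */
#include <rte_flow.h>

static int
example_enable_hws(uint16_t port_id)
{
	/* Port probed with devargs "dv_flow_en=2" to select HW steering. */
	const struct rte_flow_port_attr port_attr = { .nb_counters = 1 << 13 };
	const struct rte_flow_queue_attr queue_attr = { .size = 64 };
	const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };
	struct rte_flow_error error;

	/* Pre-allocate flow queues and resources; rules are later inserted
	 * with rte_flow_async_create() and flushed with rte_flow_push().
	 */
	return rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &error);
}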
/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright (c) 2022 NVIDIA Corporation & Affiliates
 */

#include <rte_flow.h>

#include <mlx5_malloc.h>

#include "mlx5_defs.h"
#include "mlx5_flow.h"
#include "mlx5_rx.h"

#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
net/mlx5: support flow counter action for HWS
This commit adds HW steering counter action support.
The pool mechanism is the basic data structure for the HW steering
counter.
The HW steering counter pool is based on the zero-copy variant of
rte_ring.
There are two global rte_rings:
1. free_list:
   Stores the counter indexes that are ready for use.
2. wait_reset_list:
   Stores the counter indexes that were just freed by the user and need
   a hardware counter query to get the reset value before the counter
   can be reused again.
The counter pool also supports a cache per HW steering queue, likewise
based on the zero-copy variant of rte_ring.
The cache size, preload, threshold, and fetch size are all configurable
and exposed via device arguments.
The main operations of the counter pool are as follows (see the sketch
after this commit message):
- Get one counter from the pool:
  1. The user calls the _get_* API.
  2. If the cache is enabled, dequeue one counter index from the local
     cache:
     A: If the dequeued entry is still in reset status (the counter's
        query_gen_when_free equals the pool's query gen):
        I. Flush all counters in the local cache back to the global
           wait_reset_list.
        II. Fetch _fetch_sz_ counters from the global free list into
            the cache.
        III. Fetch one counter from the cache.
  3. If the cache is empty, fetch _fetch_sz_ counters from the global
     free list into the cache and fetch one counter from the cache.
- Free one counter into the pool:
  1. The user calls the _put_* API.
  2. Put the counter into the local cache.
  3. If the local cache is full:
     A: Write back all counters above _threshold_ into the global
        wait_reset_list.
     B: Also write back this counter into the global wait_reset_list.
When the local cache is disabled, _get_/_put_ operate directly on the
global lists.
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
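The following is a minimal, self-contained sketch of the "get" path described above, not the driver implementation (that lives behind mlx5_hws_cnt.h and uses the zero-copy rte_ring variants). The names hws_cnt_cache_sketch and the still_in_reset flag are illustrative stand-ins: the flag replaces the query_gen_when_free == pool->query_gen check, and plain rte_ring calls are used for brevity.

/*
 * Illustrative sketch of the counter "get" path described above.
 * Plain rte_ring calls are used instead of the zero-copy variants, and
 * "still_in_reset" stands in for the query-generation check.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_ring.h>

struct hws_cnt_cache_sketch {
	struct rte_ring *ring;	/* Per-queue local cache of counter indexes. */
	uint32_t fetch_sz;	/* Number of counters fetched at once. */
};

static int
hws_cnt_get_sketch(struct rte_ring *free_list, struct rte_ring *wait_reset_list,
		   struct hws_cnt_cache_sketch *cache, bool still_in_reset,
		   uint32_t *cnt_idx)
{
	uint32_t buf[64];
	unsigned int n, fetch;

	/* Try the local cache first. */
	if (rte_ring_dequeue_elem(cache->ring, cnt_idx, sizeof(*cnt_idx)) == 0) {
		if (!still_in_reset)
			return 0;
		/* Cached entries are not reset yet: flush them all back to
		 * the global wait_reset_list and refill from the free list.
		 */
		rte_ring_enqueue_elem(wait_reset_list, cnt_idx, sizeof(*cnt_idx));
		while (rte_ring_dequeue_elem(cache->ring, &buf[0], sizeof(buf[0])) == 0)
			rte_ring_enqueue_elem(wait_reset_list, &buf[0], sizeof(buf[0]));
	}
	/* Local cache empty (or just flushed): fetch fetch_sz counters. */
	fetch = RTE_MIN(cache->fetch_sz, (uint32_t)RTE_DIM(buf));
	n = rte_ring_dequeue_burst_elem(free_list, buf, sizeof(buf[0]), fetch, NULL);
	if (n == 0)
		return -ENOENT;
	*cnt_idx = buf[0];
	if (n > 1)
		rte_ring_enqueue_burst_elem(cache->ring, &buf[1], sizeof(buf[0]), n - 1, NULL);
	return 0;
}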
#include "mlx5_hws_cnt.h"
/* The maximum actions supported in the flow. */
#define MLX5_HW_MAX_ACTS 16

/*
 * The default ipool threshold value indicates which per_core_cache
 * value to set.
 */
#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19)
/* The default min local cache size. */
#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9)

/* Default push burst threshold. */
#define BURST_THR 32u

/* Default queue to flush the flows. */
#define MLX5_DEFAULT_FLUSH_QUEUE 0

/* Maximum number of rules in control flow tables. */
#define MLX5_HW_CTRL_FLOW_NB_RULES (4096)

/* Lowest flow group usable by an application. */
#define MLX5_HW_LOWEST_USABLE_GROUP (1)

/* Maximum group index usable by user applications for transfer flows. */
#define MLX5_HW_MAX_TRANSFER_GROUP (UINT32_MAX - 1)

/* Lowest priority for HW root table. */
#define MLX5_HW_LOWEST_PRIO_ROOT 15

/* Lowest priority for HW non-root table. */
#define MLX5_HW_LOWEST_PRIO_NON_ROOT (UINT32_MAX)

static int flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev);
static int flow_hw_translate_group(struct rte_eth_dev *dev,
				   const struct mlx5_flow_template_table_cfg *cfg,
				   uint32_t group,
				   uint32_t *table_group,
				   struct rte_flow_error *error);
const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;

/* DR action flags with different table. */
static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
				[MLX5DR_TABLE_TYPE_MAX] = {
	{
		MLX5DR_ACTION_FLAG_ROOT_RX,
		MLX5DR_ACTION_FLAG_ROOT_TX,
		MLX5DR_ACTION_FLAG_ROOT_FDB,
	},
	{
		MLX5DR_ACTION_FLAG_HWS_RX,
		MLX5DR_ACTION_FLAG_HWS_TX,
		MLX5DR_ACTION_FLAG_HWS_FDB,
	},
};

/**
 * Set rxq flag.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] enable
 *   Flag to enable or not.
 */
static void
flow_hw_rxq_flag_set(struct rte_eth_dev *dev, bool enable)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	unsigned int i;

	if ((!priv->mark_enabled && !enable) ||
	    (priv->mark_enabled && enable))
		return;
	for (i = 0; i < priv->rxqs_n; ++i) {
		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);

		/* With RXQ start/stop feature, RXQ might be stopped. */
		if (!rxq_ctrl)
			continue;
		rxq_ctrl->rxq.mark = enable;
	}
	priv->mark_enabled = enable;
}

/**
 * Set the hash fields according to the @p rss_desc information.
 *
 * @param[in] rss_desc
 *   Pointer to the mlx5_flow_rss_desc.
 * @param[out] hash_fields
 *   Pointer to the RSS hash fields.
 */
static void
flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
		       uint64_t *hash_fields)
{
	uint64_t fields = 0;
	int rss_inner = 0;
	uint64_t rss_types = rte_eth_rss_hf_refine(rss_desc->types);

#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
	if (rss_desc->level >= 2)
		rss_inner = 1;
#endif
	if (rss_types & MLX5_IPV4_LAYER_TYPES) {
		if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
			fields |= IBV_RX_HASH_SRC_IPV4;
		else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
			fields |= IBV_RX_HASH_DST_IPV4;
		else
			fields |= MLX5_IPV4_IBV_RX_HASH;
	} else if (rss_types & MLX5_IPV6_LAYER_TYPES) {
		if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
			fields |= IBV_RX_HASH_SRC_IPV6;
		else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
			fields |= IBV_RX_HASH_DST_IPV6;
		else
			fields |= MLX5_IPV6_IBV_RX_HASH;
	}
	if (rss_types & RTE_ETH_RSS_UDP) {
		if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
			fields |= IBV_RX_HASH_SRC_PORT_UDP;
		else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
			fields |= IBV_RX_HASH_DST_PORT_UDP;
		else
			fields |= MLX5_UDP_IBV_RX_HASH;
	} else if (rss_types & RTE_ETH_RSS_TCP) {
		if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
			fields |= IBV_RX_HASH_SRC_PORT_TCP;
		else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
			fields |= IBV_RX_HASH_DST_PORT_TCP;
		else
			fields |= MLX5_TCP_IBV_RX_HASH;
	}
	if (rss_types & RTE_ETH_RSS_ESP)
		fields |= IBV_RX_HASH_IPSEC_SPI;
	if (rss_inner)
		fields |= IBV_RX_HASH_INNER;
	*hash_fields = fields;
}

/**
 * Generate the pattern item flags.
 * Will be used for shared RSS action.
 *
 * @param[in] items
 *   Pointer to the list of items.
 *
 * @return
 *   Item flags.
 */
static uint64_t
flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
{
	uint64_t item_flags = 0;
	uint64_t last_item = 0;

	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
		int item_type = items->type;

		switch (item_type) {
		case RTE_FLOW_ITEM_TYPE_IPV4:
			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
					     MLX5_FLOW_LAYER_OUTER_L3_IPV4;
			break;
		case RTE_FLOW_ITEM_TYPE_IPV6:
			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
					     MLX5_FLOW_LAYER_OUTER_L3_IPV6;
			break;
		case RTE_FLOW_ITEM_TYPE_TCP:
			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
					     MLX5_FLOW_LAYER_OUTER_L4_TCP;
			break;
		case RTE_FLOW_ITEM_TYPE_UDP:
			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
					     MLX5_FLOW_LAYER_OUTER_L4_UDP;
			break;
		case RTE_FLOW_ITEM_TYPE_GRE:
			last_item = MLX5_FLOW_LAYER_GRE;
			break;
		case RTE_FLOW_ITEM_TYPE_NVGRE:
			last_item = MLX5_FLOW_LAYER_GRE;
			break;
		case RTE_FLOW_ITEM_TYPE_VXLAN:
			last_item = MLX5_FLOW_LAYER_VXLAN;
			break;
		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
			last_item = MLX5_FLOW_LAYER_VXLAN_GPE;
			break;
		case RTE_FLOW_ITEM_TYPE_GENEVE:
			last_item = MLX5_FLOW_LAYER_GENEVE;
			break;
		case RTE_FLOW_ITEM_TYPE_MPLS:
			last_item = MLX5_FLOW_LAYER_MPLS;
			break;
		case RTE_FLOW_ITEM_TYPE_GTP:
			last_item = MLX5_FLOW_LAYER_GTP;
			break;
		default:
			break;
		}
		item_flags |= last_item;
	}
	return item_flags;
}

/**
 * Register destination table DR jump action.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] cfg
 *   Pointer to the template table configuration holding the flow attributes.
 * @param[in] dest_group
 *   The destination group ID.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   Table on success, NULL otherwise and rte_errno is set.
 */
static struct mlx5_hw_jump_action *
flow_hw_jump_action_register(struct rte_eth_dev *dev,
			     const struct mlx5_flow_template_table_cfg *cfg,
			     uint32_t dest_group,
			     struct rte_flow_error *error)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct rte_flow_attr jattr = cfg->attr.flow_attr;
	struct mlx5_flow_group *grp;
	struct mlx5_flow_cb_ctx ctx = {
		.dev = dev,
		.error = error,
		.data = &jattr,
	};
	struct mlx5_list_entry *ge;
	uint32_t target_group;

	target_group = dest_group;
	if (flow_hw_translate_group(dev, cfg, dest_group, &target_group, error))
		return NULL;
	jattr.group = target_group;
	ge = mlx5_hlist_register(priv->sh->flow_tbls, target_group, &ctx);
	if (!ge)
		return NULL;
	grp = container_of(ge, struct mlx5_flow_group, entry);
	return &grp->jump;
}

/**
 * Release jump action.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] jump
 *   Pointer to the jump action.
 */
static void
flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_flow_group *grp;

	grp = container_of
		(jump, struct mlx5_flow_group, jump);
	mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
}

/**
 * Register queue/RSS action.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] hws_flags
 *   DR action flags.
 * @param[in] action
 *   rte flow action.
 *
 * @return
 *   Table on success, NULL otherwise and rte_errno is set.
 */
static inline struct mlx5_hrxq*
flow_hw_tir_action_register(struct rte_eth_dev *dev,
			    uint32_t hws_flags,
			    const struct rte_flow_action *action)
{
	struct mlx5_flow_rss_desc rss_desc = {
		.hws_flags = hws_flags,
	};
	struct mlx5_hrxq *hrxq;

	if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
		const struct rte_flow_action_queue *queue = action->conf;

		rss_desc.const_q = &queue->index;
		rss_desc.queue_num = 1;
	} else {
		const struct rte_flow_action_rss *rss = action->conf;

		rss_desc.queue_num = rss->queue_num;
		rss_desc.const_q = rss->queue;
		memcpy(rss_desc.key,
		       !rss->key ? rss_hash_default_key : rss->key,
		       MLX5_RSS_HASH_KEY_LEN);
		rss_desc.key_len = MLX5_RSS_HASH_KEY_LEN;
		rss_desc.types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
		flow_hw_hashfields_set(&rss_desc, &rss_desc.hash_fields);
		flow_dv_action_rss_l34_hash_adjust(rss->types,
						   &rss_desc.hash_fields);
		if (rss->level > 1) {
			rss_desc.hash_fields |= IBV_RX_HASH_INNER;
			rss_desc.tunnel = 1;
		}
	}
	hrxq = mlx5_hrxq_get(dev, &rss_desc);
	return hrxq;
}

static __rte_always_inline int
flow_hw_ct_compile(struct rte_eth_dev *dev,
		   uint32_t queue, uint32_t idx,
		   struct mlx5dr_rule_action *rule_act)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_aso_ct_action *ct;

	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
	if (!ct || mlx5_aso_ct_available(priv->sh, queue, ct))
		return -1;
	rule_act->action = priv->hws_ctpool->dr_action;
	rule_act->aso_ct.offset = ct->offset;
	rule_act->aso_ct.direction = ct->is_original ?
				     MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR :
				     MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER;
	return 0;
}

/**
 * Destroy DR actions created by action template.
 *
 * For DR actions created during the table creation's action translation,
 * the DR action needs to be destroyed when the table is destroyed.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] acts
 *   Pointer to the template HW steering DR actions.
 */
static void
__flow_hw_action_template_destroy(struct rte_eth_dev *dev,
				  struct mlx5_hw_actions *acts)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_action_construct_data *data;

	while (!LIST_EMPTY(&acts->act_list)) {
		data = LIST_FIRST(&acts->act_list);
		LIST_REMOVE(data, next);
		mlx5_ipool_free(priv->acts_ipool, data->idx);
	}

	if (acts->jump) {
		struct mlx5_flow_group *grp;

		grp = container_of
			(acts->jump, struct mlx5_flow_group, jump);
		mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
		acts->jump = NULL;
	}
	if (acts->tir) {
		mlx5_hrxq_release(dev, acts->tir->idx);
		acts->tir = NULL;
	}
	if (acts->encap_decap) {
		if (acts->encap_decap->action)
			mlx5dr_action_destroy(acts->encap_decap->action);
		mlx5_free(acts->encap_decap);
		acts->encap_decap = NULL;
	}
	if (acts->mhdr) {
		if (acts->mhdr->action)
			mlx5dr_action_destroy(acts->mhdr->action);
		mlx5_free(acts->mhdr);
	}
	if (mlx5_hws_cnt_id_valid(acts->cnt_id)) {
		mlx5_hws_cnt_shared_put(priv->hws_cpool, &acts->cnt_id);
		acts->cnt_id = 0;
	}
}

/**
 * Allocate the dynamic action construct data.
 *
 * @param[in] priv
 *   Pointer to the port private data structure.
 * @param[in] type
 *   Action type.
 * @param[in] action_src
 *   Offset of source rte flow action.
 * @param[in] action_dst
 *   Offset of destination DR action.
 *
 * @return
 *   Pointer to the allocated data on success, NULL otherwise and rte_errno
 *   is set.
 */
static __rte_always_inline struct mlx5_action_construct_data *
__flow_hw_act_data_alloc(struct mlx5_priv *priv,
			 enum rte_flow_action_type type,
			 uint16_t action_src,
			 uint16_t action_dst)
{
	struct mlx5_action_construct_data *act_data;
	uint32_t idx = 0;

	act_data = mlx5_ipool_zmalloc(priv->acts_ipool, &idx);
	if (!act_data)
		return NULL;
	act_data->idx = idx;
	act_data->type = type;
	act_data->action_src = action_src;
	act_data->action_dst = action_dst;
	return act_data;
}

/**
 * Append dynamic action to the dynamic action list.
 *
 * @param[in] priv
 *   Pointer to the port private data structure.
 * @param[in] acts
 *   Pointer to the template HW steering DR actions.
 * @param[in] type
 *   Action type.
 * @param[in] action_src
 *   Offset of source rte flow action.
 * @param[in] action_dst
 *   Offset of destination DR action.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
static __rte_always_inline int
__flow_hw_act_data_general_append(struct mlx5_priv *priv,
				  struct mlx5_hw_actions *acts,
				  enum rte_flow_action_type type,
				  uint16_t action_src,
				  uint16_t action_dst)
{
	struct mlx5_action_construct_data *act_data;

	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
	if (!act_data)
		return -1;
	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
	return 0;
}

/**
 * Append dynamic encap action to the dynamic action list.
 *
 * @param[in] priv
 *   Pointer to the port private data structure.
 * @param[in] acts
 *   Pointer to the template HW steering DR actions.
 * @param[in] type
 *   Action type.
 * @param[in] action_src
 *   Offset of source rte flow action.
 * @param[in] action_dst
 *   Offset of destination DR action.
 * @param[in] len
 *   Length of the data to be updated.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
static __rte_always_inline int
__flow_hw_act_data_encap_append(struct mlx5_priv *priv,
				struct mlx5_hw_actions *acts,
				enum rte_flow_action_type type,
				uint16_t action_src,
				uint16_t action_dst,
				uint16_t len)
{
	struct mlx5_action_construct_data *act_data;

	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
	if (!act_data)
		return -1;
	act_data->encap.len = len;
	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
	return 0;
}

static __rte_always_inline int
__flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
				     struct mlx5_hw_actions *acts,
				     enum rte_flow_action_type type,
				     uint16_t action_src,
				     uint16_t action_dst,
				     uint16_t mhdr_cmds_off,
				     uint16_t mhdr_cmds_end,
				     bool shared,
				     struct field_modify_info *field,
				     struct field_modify_info *dcopy,
				     uint32_t *mask)
{
	struct mlx5_action_construct_data *act_data;

	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
	if (!act_data)
		return -1;
	act_data->modify_header.mhdr_cmds_off = mhdr_cmds_off;
	act_data->modify_header.mhdr_cmds_end = mhdr_cmds_end;
	act_data->modify_header.shared = shared;
	rte_memcpy(act_data->modify_header.field, field,
		   sizeof(*field) * MLX5_ACT_MAX_MOD_FIELDS);
	rte_memcpy(act_data->modify_header.dcopy, dcopy,
		   sizeof(*dcopy) * MLX5_ACT_MAX_MOD_FIELDS);
	rte_memcpy(act_data->modify_header.mask, mask,
		   sizeof(*mask) * MLX5_ACT_MAX_MOD_FIELDS);
	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
	return 0;
}

/**
 * Append shared RSS action to the dynamic action list.
 *
 * @param[in] priv
 *   Pointer to the port private data structure.
 * @param[in] acts
 *   Pointer to the template HW steering DR actions.
 * @param[in] type
 *   Action type.
 * @param[in] action_src
 *   Offset of source rte flow action.
 * @param[in] action_dst
 *   Offset of destination DR action.
 * @param[in] idx
 *   Shared RSS index.
 * @param[in] rss
 *   Pointer to the shared RSS info.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
static __rte_always_inline int
__flow_hw_act_data_shared_rss_append(struct mlx5_priv *priv,
				     struct mlx5_hw_actions *acts,
				     enum rte_flow_action_type type,
				     uint16_t action_src,
				     uint16_t action_dst,
				     uint32_t idx,
				     struct mlx5_shared_action_rss *rss)
{
	struct mlx5_action_construct_data *act_data;

	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
	if (!act_data)
		return -1;
	act_data->shared_rss.level = rss->origin.level;
	act_data->shared_rss.types = !rss->origin.types ? RTE_ETH_RSS_IP :
				     rss->origin.types;
	act_data->shared_rss.idx = idx;
	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
	return 0;
}

/**
 * Append shared counter action to the dynamic action list.
 *
 * @param[in] priv
 *   Pointer to the port private data structure.
 * @param[in] acts
 *   Pointer to the template HW steering DR actions.
 * @param[in] type
 *   Action type.
 * @param[in] action_src
 *   Offset of source rte flow action.
 * @param[in] action_dst
 *   Offset of destination DR action.
 * @param[in] cnt_id
 *   Shared counter id.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
static __rte_always_inline int
__flow_hw_act_data_shared_cnt_append(struct mlx5_priv *priv,
				     struct mlx5_hw_actions *acts,
				     enum rte_flow_action_type type,
				     uint16_t action_src,
				     uint16_t action_dst,
				     cnt_id_t cnt_id)
{
	struct mlx5_action_construct_data *act_data;

	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
	if (!act_data)
		return -1;
	act_data->type = type;
	act_data->shared_counter.id = cnt_id;
	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
	return 0;
}

/**
 * Translate shared indirect action.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev data structure.
 * @param[in] action
 *   Pointer to the shared indirect rte_flow action.
 * @param[in] acts
 *   Pointer to the template HW steering DR actions.
 * @param[in] action_src
 *   Offset of source rte flow action.
 * @param[in] action_dst
 *   Offset of destination DR action.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
static __rte_always_inline int
flow_hw_shared_action_translate(struct rte_eth_dev *dev,
				const struct rte_flow_action *action,
				struct mlx5_hw_actions *acts,
				uint16_t action_src,
				uint16_t action_dst)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_shared_action_rss *shared_rss;
	uint32_t act_idx = (uint32_t)(uintptr_t)action->conf;
	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
	uint32_t idx = act_idx &
		       ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);

	switch (type) {
	case MLX5_INDIRECT_ACTION_TYPE_RSS:
		shared_rss = mlx5_ipool_get
			(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx);
		if (!shared_rss || __flow_hw_act_data_shared_rss_append
		    (priv, acts,
		    (enum rte_flow_action_type)MLX5_RTE_FLOW_ACTION_TYPE_RSS,
		    action_src, action_dst, idx, shared_rss))
			return -1;
		break;
	case MLX5_INDIRECT_ACTION_TYPE_COUNT:
		if (__flow_hw_act_data_shared_cnt_append(priv, acts,
			(enum rte_flow_action_type)
			MLX5_RTE_FLOW_ACTION_TYPE_COUNT,
			action_src, action_dst, act_idx))
			return -1;
		break;
	case MLX5_INDIRECT_ACTION_TYPE_CT:
		if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE,
				       idx, &acts->rule_acts[action_dst]))
			return -1;
		break;
	default:
		DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
		break;
	}
	return 0;
}

static __rte_always_inline bool
flow_hw_action_modify_field_is_shared(const struct rte_flow_action *action,
				      const struct rte_flow_action *mask)
{
	const struct rte_flow_action_modify_field *v = action->conf;
	const struct rte_flow_action_modify_field *m = mask->conf;

	if (v->src.field == RTE_FLOW_FIELD_VALUE) {
		uint32_t j;

		if (m == NULL)
			return false;
		for (j = 0; j < RTE_DIM(m->src.value); ++j) {
			/*
			 * Immediate value is considered to be masked
			 * (and thus shared by all flow rules), if mask
			 * is non-zero. Partial mask over immediate value
			 * is not allowed.
			 */
			if (m->src.value[j])
				return true;
		}
		return false;
	}
	if (v->src.field == RTE_FLOW_FIELD_POINTER)
		return m->src.pvalue != NULL;
	/*
	 * Source field types other than VALUE and
	 * POINTER are always shared.
	 */
	return true;
}

static __rte_always_inline bool
flow_hw_should_insert_nop(const struct mlx5_hw_modify_header_action *mhdr,
			  const struct mlx5_modification_cmd *cmd)
{
	struct mlx5_modification_cmd last_cmd = { { 0 } };
	struct mlx5_modification_cmd new_cmd = { { 0 } };
	const uint32_t cmds_num = mhdr->mhdr_cmds_num;
	unsigned int last_type;
	bool should_insert = false;

	if (cmds_num == 0)
		return false;
	last_cmd = *(&mhdr->mhdr_cmds[cmds_num - 1]);
	last_cmd.data0 = rte_be_to_cpu_32(last_cmd.data0);
	last_cmd.data1 = rte_be_to_cpu_32(last_cmd.data1);
	last_type = last_cmd.action_type;
	new_cmd = *cmd;
	new_cmd.data0 = rte_be_to_cpu_32(new_cmd.data0);
	new_cmd.data1 = rte_be_to_cpu_32(new_cmd.data1);
	switch (new_cmd.action_type) {
	case MLX5_MODIFICATION_TYPE_SET:
	case MLX5_MODIFICATION_TYPE_ADD:
		if (last_type == MLX5_MODIFICATION_TYPE_SET ||
		    last_type == MLX5_MODIFICATION_TYPE_ADD)
			should_insert = new_cmd.field == last_cmd.field;
		else if (last_type == MLX5_MODIFICATION_TYPE_COPY)
			should_insert = new_cmd.field == last_cmd.dst_field;
		else if (last_type == MLX5_MODIFICATION_TYPE_NOP)
			should_insert = false;
		else
			MLX5_ASSERT(false); /* Other types are not supported. */
		break;
	case MLX5_MODIFICATION_TYPE_COPY:
		if (last_type == MLX5_MODIFICATION_TYPE_SET ||
		    last_type == MLX5_MODIFICATION_TYPE_ADD)
			should_insert = (new_cmd.field == last_cmd.field ||
					 new_cmd.dst_field == last_cmd.field);
		else if (last_type == MLX5_MODIFICATION_TYPE_COPY)
			should_insert = (new_cmd.field == last_cmd.dst_field ||
					 new_cmd.dst_field == last_cmd.dst_field);
		else if (last_type == MLX5_MODIFICATION_TYPE_NOP)
			should_insert = false;
		else
			MLX5_ASSERT(false); /* Other types are not supported. */
		break;
	default:
		/* Other action types should be rejected on AT validation. */
		MLX5_ASSERT(false);
		break;
	}
	return should_insert;
}

static __rte_always_inline int
flow_hw_mhdr_cmd_nop_append(struct mlx5_hw_modify_header_action *mhdr)
{
	struct mlx5_modification_cmd *nop;
	uint32_t num = mhdr->mhdr_cmds_num;

	if (num + 1 >= MLX5_MHDR_MAX_CMD)
		return -ENOMEM;
	nop = mhdr->mhdr_cmds + num;
	nop->data0 = 0;
	nop->action_type = MLX5_MODIFICATION_TYPE_NOP;
	nop->data0 = rte_cpu_to_be_32(nop->data0);
	nop->data1 = 0;
	mhdr->mhdr_cmds_num = num + 1;
	return 0;
}

static __rte_always_inline int
flow_hw_mhdr_cmd_append(struct mlx5_hw_modify_header_action *mhdr,
			struct mlx5_modification_cmd *cmd)
{
	uint32_t num = mhdr->mhdr_cmds_num;

	if (num + 1 >= MLX5_MHDR_MAX_CMD)
		return -ENOMEM;
	mhdr->mhdr_cmds[num] = *cmd;
	mhdr->mhdr_cmds_num = num + 1;
	return 0;
}

static __rte_always_inline int
flow_hw_converted_mhdr_cmds_append(struct mlx5_hw_modify_header_action *mhdr,
				   struct mlx5_flow_dv_modify_hdr_resource *resource)
{
	uint32_t idx;
	int ret;

	for (idx = 0; idx < resource->actions_num; ++idx) {
		struct mlx5_modification_cmd *src = &resource->actions[idx];

		if (flow_hw_should_insert_nop(mhdr, src)) {
			ret = flow_hw_mhdr_cmd_nop_append(mhdr);
			if (ret)
				return ret;
		}
		ret = flow_hw_mhdr_cmd_append(mhdr, src);
		if (ret)
			return ret;
	}
	return 0;
}

static __rte_always_inline void
flow_hw_modify_field_init(struct mlx5_hw_modify_header_action *mhdr,
			  struct rte_flow_actions_template *at)
{
	memset(mhdr, 0, sizeof(*mhdr));
	/* Modify header action without any commands is shared by default. */
	mhdr->shared = true;
	mhdr->pos = at->mhdr_off;
}

static __rte_always_inline int
flow_hw_modify_field_compile(struct rte_eth_dev *dev,
			     const struct rte_flow_attr *attr,
			     const struct rte_flow_action *action_start, /* Start of AT actions. */
			     const struct rte_flow_action *action, /* Current action from AT. */
			     const struct rte_flow_action *action_mask, /* Current mask from AT. */
			     struct mlx5_hw_actions *acts,
			     struct mlx5_hw_modify_header_action *mhdr,
			     struct rte_flow_error *error)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	const struct rte_flow_action_modify_field *conf = action->conf;
	union {
		struct mlx5_flow_dv_modify_hdr_resource resource;
		uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) +
			     sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD];
	} dummy;
	struct mlx5_flow_dv_modify_hdr_resource *resource;
	struct rte_flow_item item = {
		.spec = NULL,
		.mask = NULL
	};
	struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = {
						{0, 0, MLX5_MODI_OUT_NONE} };
	struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = {
						{0, 0, MLX5_MODI_OUT_NONE} };
	uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 };
	uint32_t type, value = 0;
	uint16_t cmds_start, cmds_end;
	bool shared;
	int ret;

	/*
	 * Modify header action is shared if previous modify_field actions
	 * are shared and currently compiled action is shared.
	 */
	shared = flow_hw_action_modify_field_is_shared(action, action_mask);
	mhdr->shared &= shared;
	if (conf->src.field == RTE_FLOW_FIELD_POINTER ||
	    conf->src.field == RTE_FLOW_FIELD_VALUE) {
		type = conf->operation == RTE_FLOW_MODIFY_SET ? MLX5_MODIFICATION_TYPE_SET :
								MLX5_MODIFICATION_TYPE_ADD;
		/* For SET/ADD fill the destination field (field) first. */
		mlx5_flow_field_id_to_modify_info(&conf->dst, field, mask,
						  conf->width, dev,
						  attr, error);
		item.spec = conf->src.field == RTE_FLOW_FIELD_POINTER ?
				(void *)(uintptr_t)conf->src.pvalue :
				(void *)(uintptr_t)&conf->src.value;
		if (conf->dst.field == RTE_FLOW_FIELD_META ||
		    conf->dst.field == RTE_FLOW_FIELD_TAG ||
		    conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
			value = *(const unaligned_uint32_t *)item.spec;
			value = rte_cpu_to_be_32(value);
			item.spec = &value;
		} else if (conf->dst.field == RTE_FLOW_FIELD_GTP_PSC_QFI) {
			/*
			 * QFI is passed as a uint8_t integer, but it is accessed through
			 * the 2nd least significant byte of a 32-bit field in the modify
			 * header command.
			 */
			value = *(const uint8_t *)item.spec;
			value = rte_cpu_to_be_32(value << 8);
			item.spec = &value;
		}
	} else {
		type = MLX5_MODIFICATION_TYPE_COPY;
		/* For COPY fill the destination field (dcopy) without mask. */
		mlx5_flow_field_id_to_modify_info(&conf->dst, dcopy, NULL,
						  conf->width, dev,
						  attr, error);
		/* Then construct the source field (field) with mask. */
		mlx5_flow_field_id_to_modify_info(&conf->src, field, mask,
						  conf->width, dev,
						  attr, error);
	}
	item.mask = &mask;
	memset(&dummy, 0, sizeof(dummy));
	resource = &dummy.resource;
	ret = flow_dv_convert_modify_action(&item, field, dcopy, resource, type, error);
	if (ret)
		return ret;
	MLX5_ASSERT(resource->actions_num > 0);
	/*
	 * If the previous modify field action collides with this one, insert a NOP
	 * command. This NOP command will not be a part of the action's command range
	 * used to update commands on rule creation.
	 */
	if (flow_hw_should_insert_nop(mhdr, &resource->actions[0])) {
		ret = flow_hw_mhdr_cmd_nop_append(mhdr);
		if (ret)
			return rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
						  NULL, "too many modify field operations specified");
	}
	cmds_start = mhdr->mhdr_cmds_num;
	ret = flow_hw_converted_mhdr_cmds_append(mhdr, resource);
	if (ret)
		return rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
					  NULL, "too many modify field operations specified");

	cmds_end = mhdr->mhdr_cmds_num;
	if (shared)
		return 0;
	ret = __flow_hw_act_data_hdr_modify_append(priv, acts, RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
						   action - action_start, mhdr->pos,
						   cmds_start, cmds_end, shared,
						   field, dcopy, mask);
	if (ret)
		return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
					  NULL, "not enough memory to store modify field metadata");
	return 0;
}

static int
flow_hw_represented_port_compile(struct rte_eth_dev *dev,
				 const struct rte_flow_attr *attr,
				 const struct rte_flow_action *action_start,
				 const struct rte_flow_action *action,
				 const struct rte_flow_action *action_mask,
				 struct mlx5_hw_actions *acts,
				 uint16_t action_dst,
				 struct rte_flow_error *error)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	const struct rte_flow_action_ethdev *v = action->conf;
	const struct rte_flow_action_ethdev *m = action_mask->conf;
	int ret;

	if (!attr->group)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
					  "represented_port action cannot"
					  " be used on group 0");
	if (!attr->transfer)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER,
					  NULL,
					  "represented_port action requires"
					  " transfer attribute");
	if (attr->ingress || attr->egress)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
					  "represented_port action cannot"
					  " be used with direction attributes");
	if (!priv->master)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
					  "represented_port action must"
					  " be used on proxy port");
	if (m && !!m->port_id) {
		struct mlx5_priv *port_priv;

		if (!v)
			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
						  action, "port index was not provided");
		port_priv = mlx5_port_to_eswitch_info(v->port_id, false);
		if (port_priv == NULL)
			return rte_flow_error_set
					(error, EINVAL,
					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
					 "port does not exist or unable to"
					 " obtain E-Switch info for port");
		MLX5_ASSERT(priv->hw_vport != NULL);
		if (priv->hw_vport[v->port_id]) {
			acts->rule_acts[action_dst].action =
					priv->hw_vport[v->port_id];
		} else {
			return rte_flow_error_set
					(error, EINVAL,
					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
					 "cannot use represented_port action"
					 " with this port");
		}
	} else {
		ret = __flow_hw_act_data_general_append
				(priv, acts, action->type,
				 action - action_start, action_dst);
		if (ret)
			return rte_flow_error_set
					(error, ENOMEM,
					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
					 "not enough memory to store"
					 " vport action");
	}
	return 0;
}

static __rte_always_inline int
flow_hw_meter_compile(struct rte_eth_dev *dev,
		      const struct mlx5_flow_template_table_cfg *cfg,
		      uint16_t aso_mtr_pos,
		      uint16_t jump_pos,
		      const struct rte_flow_action *action,
		      struct mlx5_hw_actions *acts,
		      struct rte_flow_error *error)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_aso_mtr *aso_mtr;
	const struct rte_flow_action_meter *meter = action->conf;
	uint32_t group = cfg->attr.flow_attr.group;

	aso_mtr = mlx5_aso_meter_by_idx(priv, meter->mtr_id);
	acts->rule_acts[aso_mtr_pos].action = priv->mtr_bulk.action;
	acts->rule_acts[aso_mtr_pos].aso_meter.offset = aso_mtr->offset;
	acts->jump = flow_hw_jump_action_register
		(dev, cfg, aso_mtr->fm.group, error);
	if (!acts->jump)
		return -ENOMEM;
	acts->rule_acts[jump_pos].action = (!!group) ?
					   acts->jump->hws_action :
					   acts->jump->root_action;
	if (mlx5_aso_mtr_wait(priv->sh, aso_mtr))
		return -ENOMEM;
	return 0;
}

static __rte_always_inline int
flow_hw_cnt_compile(struct rte_eth_dev *dev, uint32_t start_pos,
		    struct mlx5_hw_actions *acts)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	uint32_t pos = start_pos;
	cnt_id_t cnt_id;
	int ret;

	ret = mlx5_hws_cnt_shared_get(priv->hws_cpool, &cnt_id);
	if (ret != 0)
		return ret;
	ret = mlx5_hws_cnt_pool_get_action_offset
		(priv->hws_cpool,
		 cnt_id,
		 &acts->rule_acts[pos].action,
		 &acts->rule_acts[pos].counter.offset);
	if (ret != 0)
		return ret;
	acts->cnt_id = cnt_id;
	return 0;
}
2022-02-24 13:40:44 +00:00
|
|
|
/**
 * Translate rte_flow actions to DR actions.
 *
 * As the action template has already indicated the actions, translate
 * the rte_flow actions to DR actions where possible, so that the flow
 * create stage saves the cycles spent on organizing the actions.
 * Actions with incomplete information are appended to a list and
 * resolved later, at flow creation time.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] cfg
 *   Pointer to the table configuration.
 * @param[in/out] acts
 *   Pointer to the template HW steering DR actions.
 * @param[in] at
 *   Action template.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
|
|
|
|
static int
|
2022-10-20 15:41:43 +00:00
|
|
|
__flow_hw_actions_translate(struct rte_eth_dev *dev,
|
|
|
|
const struct mlx5_flow_template_table_cfg *cfg,
|
|
|
|
struct mlx5_hw_actions *acts,
|
|
|
|
struct rte_flow_actions_template *at,
|
|
|
|
struct rte_flow_error *error)
|
2022-02-24 13:40:44 +00:00
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-10-20 15:41:40 +00:00
|
|
|
const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
|
2022-02-24 13:40:44 +00:00
|
|
|
const struct rte_flow_attr *attr = &table_attr->flow_attr;
|
|
|
|
struct rte_flow_action *actions = at->actions;
|
2022-02-24 13:40:47 +00:00
|
|
|
struct rte_flow_action *action_start = actions;
|
2022-02-24 13:40:44 +00:00
|
|
|
struct rte_flow_action *masks = at->masks;
|
2022-02-24 13:40:51 +00:00
|
|
|
enum mlx5dr_action_reformat_type refmt_type = 0;
|
|
|
|
const struct rte_flow_action_raw_encap *raw_encap_data;
|
|
|
|
const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
|
2022-10-20 15:41:43 +00:00
|
|
|
uint16_t reformat_src = 0;
|
2022-10-20 15:41:37 +00:00
|
|
|
uint8_t *encap_data = NULL, *encap_data_m = NULL;
|
2022-02-24 13:40:51 +00:00
|
|
|
size_t data_size = 0;
|
2022-10-20 15:41:38 +00:00
|
|
|
struct mlx5_hw_modify_header_action mhdr = { 0 };
|
2022-02-24 13:40:44 +00:00
|
|
|
bool actions_end = false;
|
2022-10-20 15:41:43 +00:00
|
|
|
uint32_t type;
|
|
|
|
bool reformat_used = false;
|
|
|
|
uint16_t action_pos;
|
|
|
|
uint16_t jump_pos;
|
2022-10-20 15:41:44 +00:00
|
|
|
uint32_t ct_idx;
|
2022-02-24 13:40:47 +00:00
|
|
|
int err;
|
2022-02-24 13:40:44 +00:00
|
|
|
|
2022-10-20 15:41:38 +00:00
|
|
|
flow_hw_modify_field_init(&mhdr, at);
|
2022-02-24 13:40:44 +00:00
|
|
|
if (attr->transfer)
|
|
|
|
type = MLX5DR_TABLE_TYPE_FDB;
|
|
|
|
else if (attr->egress)
|
|
|
|
type = MLX5DR_TABLE_TYPE_NIC_TX;
|
|
|
|
else
|
|
|
|
type = MLX5DR_TABLE_TYPE_NIC_RX;
|
2022-10-20 15:41:43 +00:00
|
|
|
for (; !actions_end; actions++, masks++) {
|
2022-02-24 13:40:44 +00:00
|
|
|
switch (actions->type) {
|
|
|
|
case RTE_FLOW_ACTION_TYPE_INDIRECT:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
2022-02-24 13:40:50 +00:00
|
|
|
if (!attr->group) {
|
|
|
|
DRV_LOG(ERR, "Indirect action is not supported in root table.");
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
if (actions->conf && masks->conf) {
|
|
|
|
if (flow_hw_shared_action_translate
|
2022-10-20 15:41:43 +00:00
|
|
|
(dev, actions, acts, actions - action_start, action_pos))
|
2022-02-24 13:40:50 +00:00
|
|
|
goto err;
|
|
|
|
} else if (__flow_hw_act_data_general_append
|
|
|
|
(priv, acts, actions->type,
|
2022-10-20 15:41:43 +00:00
|
|
|
actions - action_start, action_pos)){
|
2022-02-24 13:40:50 +00:00
|
|
|
goto err;
|
|
|
|
}
|
2022-02-24 13:40:44 +00:00
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VOID:
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_DROP:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
|
|
|
acts->rule_acts[action_pos].action =
|
2022-10-20 15:41:39 +00:00
|
|
|
priv->hw_drop[!!attr->group];
|
2022-02-24 13:40:47 +00:00
|
|
|
break;
|
2022-02-24 13:40:49 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_MARK:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
2022-02-24 13:40:49 +00:00
|
|
|
acts->mark = true;
|
2022-10-20 15:41:43 +00:00
|
|
|
if (masks->conf &&
|
|
|
|
((const struct rte_flow_action_mark *)
|
|
|
|
masks->conf)->id)
|
|
|
|
acts->rule_acts[action_pos].tag.value =
|
2022-02-24 13:40:49 +00:00
|
|
|
mlx5_flow_mark_set
|
|
|
|
(((const struct rte_flow_action_mark *)
|
2022-10-20 15:41:43 +00:00
|
|
|
(actions->conf))->id);
|
2022-02-24 13:40:49 +00:00
|
|
|
else if (__flow_hw_act_data_general_append(priv, acts,
|
2022-10-20 15:41:43 +00:00
|
|
|
actions->type, actions - action_start, action_pos))
|
2022-02-24 13:40:49 +00:00
|
|
|
goto err;
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->rule_acts[action_pos].action =
|
2022-02-24 13:40:49 +00:00
|
|
|
priv->hw_tag[!!attr->group];
|
|
|
|
flow_hw_rxq_flag_set(dev, true);
|
|
|
|
break;
|
2022-02-24 13:40:47 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_JUMP:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
|
|
|
if (masks->conf &&
|
|
|
|
((const struct rte_flow_action_jump *)
|
|
|
|
masks->conf)->group) {
|
2022-02-24 13:40:47 +00:00
|
|
|
uint32_t jump_group =
|
|
|
|
((const struct rte_flow_action_jump *)
|
|
|
|
actions->conf)->group;
|
|
|
|
acts->jump = flow_hw_jump_action_register
|
2022-10-20 15:41:40 +00:00
|
|
|
(dev, cfg, jump_group, error);
|
2022-02-24 13:40:47 +00:00
|
|
|
if (!acts->jump)
|
|
|
|
goto err;
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->rule_acts[action_pos].action = (!!attr->group) ?
|
2022-02-24 13:40:47 +00:00
|
|
|
acts->jump->hws_action :
|
|
|
|
acts->jump->root_action;
|
|
|
|
} else if (__flow_hw_act_data_general_append
|
|
|
|
(priv, acts, actions->type,
|
2022-10-20 15:41:43 +00:00
|
|
|
actions - action_start, action_pos)){
|
2022-02-24 13:40:47 +00:00
|
|
|
goto err;
|
|
|
|
}
|
2022-02-24 13:40:44 +00:00
|
|
|
break;
|
2022-02-24 13:40:48 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_QUEUE:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
|
|
|
if (masks->conf &&
|
|
|
|
((const struct rte_flow_action_queue *)
|
|
|
|
masks->conf)->index) {
|
2022-02-24 13:40:48 +00:00
|
|
|
acts->tir = flow_hw_tir_action_register
|
|
|
|
(dev,
|
|
|
|
mlx5_hw_act_flag[!!attr->group][type],
|
|
|
|
actions);
|
|
|
|
if (!acts->tir)
|
|
|
|
goto err;
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->rule_acts[action_pos].action =
|
2022-02-24 13:40:48 +00:00
|
|
|
acts->tir->action;
|
|
|
|
} else if (__flow_hw_act_data_general_append
|
|
|
|
(priv, acts, actions->type,
|
2022-10-20 15:41:43 +00:00
|
|
|
actions - action_start, action_pos)) {
|
2022-02-24 13:40:48 +00:00
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RSS:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
|
|
|
if (actions->conf && masks->conf) {
|
2022-02-24 13:40:48 +00:00
|
|
|
acts->tir = flow_hw_tir_action_register
|
|
|
|
(dev,
|
|
|
|
mlx5_hw_act_flag[!!attr->group][type],
|
|
|
|
actions);
|
|
|
|
if (!acts->tir)
|
|
|
|
goto err;
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->rule_acts[action_pos].action =
|
2022-02-24 13:40:48 +00:00
|
|
|
acts->tir->action;
|
|
|
|
} else if (__flow_hw_act_data_general_append
|
|
|
|
(priv, acts, actions->type,
|
2022-10-20 15:41:43 +00:00
|
|
|
actions - action_start, action_pos)) {
|
2022-02-24 13:40:48 +00:00
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
break;
|
2022-02-24 13:40:51 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
|
2022-10-20 15:41:43 +00:00
|
|
|
MLX5_ASSERT(!reformat_used);
|
2022-02-24 13:40:51 +00:00
|
|
|
enc_item = ((const struct rte_flow_action_vxlan_encap *)
|
|
|
|
actions->conf)->definition;
|
2022-10-20 15:41:37 +00:00
|
|
|
if (masks->conf)
|
|
|
|
enc_item_m = ((const struct rte_flow_action_vxlan_encap *)
|
|
|
|
masks->conf)->definition;
|
2022-10-20 15:41:43 +00:00
|
|
|
reformat_used = true;
|
2022-02-24 13:40:51 +00:00
|
|
|
reformat_src = actions - action_start;
|
|
|
|
refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
|
2022-10-20 15:41:43 +00:00
|
|
|
MLX5_ASSERT(!reformat_used);
|
2022-02-24 13:40:51 +00:00
|
|
|
enc_item = ((const struct rte_flow_action_nvgre_encap *)
|
|
|
|
actions->conf)->definition;
|
2022-10-20 15:41:37 +00:00
|
|
|
if (masks->conf)
|
|
|
|
enc_item_m = ((const struct rte_flow_action_nvgre_encap *)
|
|
|
|
masks->conf)->definition;
|
2022-10-20 15:41:43 +00:00
|
|
|
reformat_used = true;
|
2022-02-24 13:40:51 +00:00
|
|
|
reformat_src = actions - action_start;
|
|
|
|
refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
|
2022-10-20 15:41:43 +00:00
|
|
|
MLX5_ASSERT(!reformat_used);
|
|
|
|
reformat_used = true;
|
2022-02-24 13:40:51 +00:00
|
|
|
refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
|
2022-10-20 15:41:37 +00:00
|
|
|
raw_encap_data =
|
|
|
|
(const struct rte_flow_action_raw_encap *)
|
|
|
|
masks->conf;
|
|
|
|
if (raw_encap_data)
|
|
|
|
encap_data_m = raw_encap_data->data;
|
2022-02-24 13:40:51 +00:00
|
|
|
raw_encap_data =
|
|
|
|
(const struct rte_flow_action_raw_encap *)
|
|
|
|
actions->conf;
|
|
|
|
encap_data = raw_encap_data->data;
|
|
|
|
data_size = raw_encap_data->size;
|
2022-10-20 15:41:43 +00:00
|
|
|
if (reformat_used) {
|
2022-02-24 13:40:51 +00:00
|
|
|
refmt_type = data_size <
|
|
|
|
MLX5_ENCAPSULATION_DECISION_SIZE ?
|
|
|
|
MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2 :
|
|
|
|
MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3;
|
|
|
|
} else {
|
2022-10-20 15:41:43 +00:00
|
|
|
reformat_used = true;
|
2022-02-24 13:40:51 +00:00
|
|
|
refmt_type =
|
|
|
|
MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
|
|
|
|
}
|
|
|
|
reformat_src = actions - action_start;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
|
2022-10-20 15:41:43 +00:00
|
|
|
reformat_used = true;
|
2022-02-24 13:40:51 +00:00
|
|
|
refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2;
|
|
|
|
break;
|
2022-10-19 18:40:07 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
|
|
|
|
DRV_LOG(ERR, "send to kernel action is not supported in HW steering.");
|
|
|
|
goto err;
|
2022-10-20 15:41:38 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
|
|
|
|
err = flow_hw_modify_field_compile(dev, attr, action_start,
|
|
|
|
actions, masks, acts, &mhdr,
|
|
|
|
error);
|
|
|
|
if (err)
|
|
|
|
goto err;
|
2022-10-20 15:41:40 +00:00
|
|
|
/*
|
|
|
|
* Adjust the action source position for the following pattern:
|
|
|
|
* ... / MODIFY_FIELD: rx_cpy_pos / (QUEUE|RSS) / ...
|
|
|
|
* The next action will be Q/RSS; there will not be
|
|
|
|
* another adjustment and the real source position of
|
|
|
|
* the following actions will be decreased by 1.
|
|
|
|
* No change of the total actions in the new template.
|
|
|
|
*/
|
|
|
|
if ((actions - action_start) == at->rx_cpy_pos)
|
|
|
|
action_start += 1;
|
2022-10-20 15:41:38 +00:00
|
|
|
break;
|
2022-10-20 15:41:39 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
2022-10-20 15:41:39 +00:00
|
|
|
if (flow_hw_represented_port_compile
|
|
|
|
(dev, attr, action_start, actions,
|
2022-10-20 15:41:43 +00:00
|
|
|
masks, acts, action_pos, error))
|
2022-10-20 15:41:39 +00:00
|
|
|
goto err;
|
|
|
|
break;
|
2022-10-20 15:41:41 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_METER:
|
2022-10-20 15:41:43 +00:00
|
|
|
/*
|
|
|
|
* METER action is compiled to 2 DR actions - ASO_METER and FT.
|
|
|
|
* Calculated DR offset is stored only for ASO_METER and FT
|
|
|
|
* is assumed to be the next action.
|
|
|
|
*/
|
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
|
|
|
jump_pos = action_pos + 1;
|
2022-10-20 15:41:41 +00:00
|
|
|
if (actions->conf && masks->conf &&
|
|
|
|
((const struct rte_flow_action_meter *)
|
|
|
|
masks->conf)->mtr_id) {
|
|
|
|
err = flow_hw_meter_compile(dev, cfg,
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos, jump_pos, actions, acts, error);
|
2022-10-20 15:41:41 +00:00
|
|
|
if (err)
|
|
|
|
goto err;
|
|
|
|
} else if (__flow_hw_act_data_general_append(priv, acts,
|
|
|
|
actions->type,
|
|
|
|
actions - action_start,
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos))
|
2022-10-20 15:41:41 +00:00
|
|
|
goto err;
|
|
|
|
break;
|
2022-10-20 15:41:42 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_COUNT:
|
2022-10-20 15:41:43 +00:00
|
|
|
action_pos = at->actions_off[actions - action_start];
|
2022-10-20 15:41:42 +00:00
|
|
|
if (masks->conf &&
|
|
|
|
((const struct rte_flow_action_count *)
|
|
|
|
masks->conf)->id) {
|
2022-10-20 15:41:43 +00:00
|
|
|
err = flow_hw_cnt_compile(dev, action_pos, acts);
|
2022-10-20 15:41:42 +00:00
|
|
|
if (err)
|
|
|
|
goto err;
|
|
|
|
} else if (__flow_hw_act_data_general_append
|
|
|
|
(priv, acts, actions->type,
|
2022-10-20 15:41:43 +00:00
|
|
|
actions - action_start, action_pos)) {
|
2022-10-20 15:41:42 +00:00
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
break;
|
2022-10-20 15:41:44 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_CONNTRACK:
|
|
|
|
action_pos = at->actions_off[actions - at->actions];
|
|
|
|
if (masks->conf) {
|
|
|
|
ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
|
|
|
|
((uint32_t)(uintptr_t)actions->conf);
|
|
|
|
if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE, ct_idx,
|
|
|
|
&acts->rule_acts[action_pos]))
|
|
|
|
goto err;
|
|
|
|
} else if (__flow_hw_act_data_general_append
|
|
|
|
(priv, acts, actions->type,
|
|
|
|
actions - action_start, action_pos)) {
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
break;
|
2022-02-24 13:40:44 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_END:
|
|
|
|
actions_end = true;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2022-10-20 15:41:38 +00:00
|
|
|
if (mhdr.pos != UINT16_MAX) {
|
|
|
|
uint32_t flags;
|
|
|
|
uint32_t bulk_size;
|
|
|
|
size_t mhdr_len;
|
|
|
|
|
|
|
|
acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr),
|
|
|
|
0, SOCKET_ID_ANY);
|
|
|
|
if (!acts->mhdr)
|
|
|
|
goto err;
|
|
|
|
rte_memcpy(acts->mhdr, &mhdr, sizeof(*acts->mhdr));
|
|
|
|
mhdr_len = sizeof(struct mlx5_modification_cmd) * acts->mhdr->mhdr_cmds_num;
|
|
|
|
flags = mlx5_hw_act_flag[!!attr->group][type];
|
|
|
|
if (acts->mhdr->shared) {
|
|
|
|
flags |= MLX5DR_ACTION_FLAG_SHARED;
|
|
|
|
bulk_size = 0;
|
|
|
|
} else {
|
|
|
|
bulk_size = rte_log2_u32(table_attr->nb_flows);
|
|
|
|
}
|
|
|
|
acts->mhdr->action = mlx5dr_action_create_modify_header
|
|
|
|
(priv->dr_ctx, mhdr_len, (__be64 *)acts->mhdr->mhdr_cmds,
|
|
|
|
bulk_size, flags);
|
|
|
|
if (!acts->mhdr->action)
|
|
|
|
goto err;
|
|
|
|
acts->rule_acts[acts->mhdr->pos].action = acts->mhdr->action;
|
|
|
|
}
|
2022-10-20 15:41:43 +00:00
|
|
|
if (reformat_used) {
|
2022-02-24 13:40:51 +00:00
|
|
|
uint8_t buf[MLX5_ENCAP_MAX_LEN];
|
2022-10-20 15:41:37 +00:00
|
|
|
bool shared_rfmt = true;
|
2022-02-24 13:40:51 +00:00
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
MLX5_ASSERT(at->reformat_off != UINT16_MAX);
|
2022-02-24 13:40:51 +00:00
|
|
|
if (enc_item) {
|
|
|
|
MLX5_ASSERT(!encap_data);
|
2022-10-20 15:41:37 +00:00
|
|
|
if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error))
|
2022-02-24 13:40:51 +00:00
|
|
|
goto err;
|
|
|
|
encap_data = buf;
|
2022-10-20 15:41:37 +00:00
|
|
|
if (!enc_item_m)
|
|
|
|
shared_rfmt = false;
|
|
|
|
} else if (encap_data && !encap_data_m) {
|
|
|
|
shared_rfmt = false;
|
2022-02-24 13:40:51 +00:00
|
|
|
}
|
|
|
|
acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
|
|
|
|
sizeof(*acts->encap_decap) + data_size,
|
|
|
|
0, SOCKET_ID_ANY);
|
|
|
|
if (!acts->encap_decap)
|
|
|
|
goto err;
|
|
|
|
if (data_size) {
|
|
|
|
acts->encap_decap->data_size = data_size;
|
|
|
|
memcpy(acts->encap_decap->data, encap_data, data_size);
|
|
|
|
}
|
|
|
|
acts->encap_decap->action = mlx5dr_action_create_reformat
|
|
|
|
(priv->dr_ctx, refmt_type,
|
|
|
|
data_size, encap_data,
|
2022-10-20 15:41:37 +00:00
|
|
|
shared_rfmt ? 0 : rte_log2_u32(table_attr->nb_flows),
|
|
|
|
mlx5_hw_act_flag[!!attr->group][type] |
|
|
|
|
(shared_rfmt ? MLX5DR_ACTION_FLAG_SHARED : 0));
|
2022-02-24 13:40:51 +00:00
|
|
|
if (!acts->encap_decap->action)
|
|
|
|
goto err;
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->rule_acts[at->reformat_off].action = acts->encap_decap->action;
|
|
|
|
acts->rule_acts[at->reformat_off].reformat.data = acts->encap_decap->data;
|
2022-10-20 15:41:37 +00:00
|
|
|
if (shared_rfmt)
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->rule_acts[at->reformat_off].reformat.offset = 0;
|
2022-10-20 15:41:37 +00:00
|
|
|
else if (__flow_hw_act_data_encap_append(priv, acts,
|
|
|
|
(action_start + reformat_src)->type,
|
2022-10-20 15:41:43 +00:00
|
|
|
reformat_src, at->reformat_off, data_size))
|
2022-10-20 15:41:37 +00:00
|
|
|
goto err;
|
|
|
|
acts->encap_decap->shared = shared_rfmt;
|
2022-10-20 15:41:43 +00:00
|
|
|
acts->encap_decap_pos = at->reformat_off;
|
2022-02-24 13:40:51 +00:00
|
|
|
}
|
2022-02-24 13:40:44 +00:00
|
|
|
return 0;
|
2022-02-24 13:40:47 +00:00
|
|
|
err:
|
|
|
|
err = rte_errno;
|
|
|
|
__flow_hw_action_template_destroy(dev, acts);
|
|
|
|
return rte_flow_error_set(error, err,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to create rte table");
|
2022-02-24 13:40:44 +00:00
|
|
|
}
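The translation above pre-computes a DR action whenever the action template provides a fully masked (i.e. constant) configuration, and otherwise queues the action on act_list for per-flow construction. The following hedged, application-side sketch shows how that choice is driven through the public template API; the function name mark_template and the values used are illustrative and assume a port already set up for the asynchronous flow API.

#include <stdbool.h>
#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow_actions_template *
mark_template(uint16_t port_id, bool constant_mark,
	      struct rte_flow_error *error)
{
	static const struct rte_flow_actions_template_attr attr = {
		.ingress = 1,
	};
	const struct rte_flow_action_mark mark_spec = { .id = 0x1234 };
	/*
	 * A non-zero mask marks the MARK id as constant, so it can be
	 * translated into a DR tag action once, at template/table time;
	 * a zero mask defers the value to rule insertion time, handled
	 * by the per-flow construction path further below.
	 */
	const struct rte_flow_action_mark mark_mask = {
		.id = constant_mark ? UINT32_MAX : 0,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_spec },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_actions_template_create(port_id, &attr,
						actions, masks, error);
}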
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
/**
|
|
|
|
* Translate rte_flow actions to DR action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] tbl
|
|
|
|
* Pointer to the flow template table.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_actions_translate(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_template_table *tbl,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
uint32_t i;
|
|
|
|
|
|
|
|
for (i = 0; i < tbl->nb_action_templates; i++) {
|
|
|
|
if (__flow_hw_actions_translate(dev, &tbl->cfg,
|
|
|
|
&tbl->ats[i].acts,
|
|
|
|
tbl->ats[i].action_template,
|
|
|
|
error))
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
err:
|
|
|
|
while (i--)
|
|
|
|
__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:50 +00:00
|
|
|
/**
|
|
|
|
* Get shared indirect action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev data structure.
|
|
|
|
* @param[in] act_data
|
|
|
|
* Pointer to the recorded action construct data.
|
|
|
|
* @param[in] item_flags
|
|
|
|
* The matcher item_flags used for RSS lookup.
|
|
|
|
* @param[in] rule_act
|
|
|
|
* Pointer to the shared action's destination rule DR action.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static __rte_always_inline int
|
|
|
|
flow_hw_shared_action_get(struct rte_eth_dev *dev,
|
|
|
|
struct mlx5_action_construct_data *act_data,
|
|
|
|
const uint64_t item_flags,
|
|
|
|
struct mlx5dr_rule_action *rule_act)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_flow_rss_desc rss_desc = { 0 };
|
|
|
|
uint64_t hash_fields = 0;
|
|
|
|
uint32_t hrxq_idx = 0;
|
|
|
|
struct mlx5_hrxq *hrxq = NULL;
|
|
|
|
int act_type = act_data->type;
|
|
|
|
|
|
|
|
switch (act_type) {
|
|
|
|
case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
|
|
|
|
rss_desc.level = act_data->shared_rss.level;
|
|
|
|
rss_desc.types = act_data->shared_rss.types;
|
|
|
|
flow_dv_hashfields_set(item_flags, &rss_desc, &hash_fields);
|
|
|
|
hrxq_idx = flow_dv_action_rss_hrxq_lookup
|
|
|
|
(dev, act_data->shared_rss.idx, hash_fields);
|
|
|
|
if (hrxq_idx)
|
|
|
|
hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
|
|
|
|
hrxq_idx);
|
|
|
|
if (hrxq) {
|
|
|
|
rule_act->action = hrxq->action;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
DRV_LOG(WARNING, "Unsupported shared action type:%d",
|
|
|
|
act_data->type);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Construct shared indirect action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev data structure.
|
2022-10-20 15:41:44 +00:00
|
|
|
* @param[in] queue
|
|
|
|
* The flow creation queue index.
|
2022-02-24 13:40:50 +00:00
|
|
|
* @param[in] action
|
|
|
|
* Pointer to the shared indirect rte_flow action.
|
|
|
|
* @param[in] table
|
|
|
|
* Pointer to the flow table.
|
|
|
|
* @param[in] it_idx
|
|
|
|
* Item template index the action template refers to.
|
|
|
|
* @param[in] rule_act
|
|
|
|
* Pointer to the shared action's destination rule DR action.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static __rte_always_inline int
|
2022-10-20 15:41:44 +00:00
|
|
|
flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
|
2022-02-24 13:40:50 +00:00
|
|
|
const struct rte_flow_action *action,
|
|
|
|
struct rte_flow_template_table *table,
|
|
|
|
const uint8_t it_idx,
|
|
|
|
struct mlx5dr_rule_action *rule_act)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_action_construct_data act_data;
|
|
|
|
struct mlx5_shared_action_rss *shared_rss;
|
|
|
|
uint32_t act_idx = (uint32_t)(uintptr_t)action->conf;
|
|
|
|
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
|
|
|
|
uint32_t idx = act_idx &
|
|
|
|
((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
|
|
|
|
uint64_t item_flags;
|
|
|
|
|
|
|
|
memset(&act_data, 0, sizeof(act_data));
|
|
|
|
switch (type) {
|
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_RSS:
|
|
|
|
act_data.type = MLX5_RTE_FLOW_ACTION_TYPE_RSS;
|
|
|
|
shared_rss = mlx5_ipool_get
|
|
|
|
(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx);
|
|
|
|
if (!shared_rss)
|
|
|
|
return -1;
|
|
|
|
act_data.shared_rss.idx = idx;
|
|
|
|
act_data.shared_rss.level = shared_rss->origin.level;
|
|
|
|
act_data.shared_rss.types = !shared_rss->origin.types ?
|
|
|
|
RTE_ETH_RSS_IP :
|
|
|
|
shared_rss->origin.types;
|
|
|
|
item_flags = table->its[it_idx]->item_flags;
|
|
|
|
if (flow_hw_shared_action_get
|
|
|
|
(dev, &act_data, item_flags, rule_act))
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-10-20 15:41:42 +00:00
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_COUNT:
|
|
|
|
if (mlx5_hws_cnt_pool_get_action_offset(priv->hws_cpool,
|
|
|
|
act_idx,
|
|
|
|
&rule_act->action,
|
|
|
|
&rule_act->counter.offset))
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-10-20 15:41:44 +00:00
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_CT:
|
|
|
|
if (flow_hw_ct_compile(dev, queue, idx, rule_act))
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-02-24 13:40:50 +00:00
|
|
|
default:
|
|
|
|
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
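In the function above, action->conf does not point to a configuration structure; it carries an encoded indirect-action handle, with the action type stored in the bits above MLX5_INDIRECT_ACTION_TYPE_OFFSET and the object index in the bits below it. A minimal sketch of that encode/decode pattern follows; the offset value 29 is an illustrative assumption rather than a value taken from the driver headers.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for MLX5_INDIRECT_ACTION_TYPE_OFFSET. */
#define ACTION_TYPE_OFFSET 29

static uint32_t handle_encode(uint32_t type, uint32_t idx)
{
	return (type << ACTION_TYPE_OFFSET) |
	       (idx & ((1u << ACTION_TYPE_OFFSET) - 1));
}

static void handle_decode(uint32_t handle, uint32_t *type, uint32_t *idx)
{
	*type = handle >> ACTION_TYPE_OFFSET;
	*idx = handle & ((1u << ACTION_TYPE_OFFSET) - 1);
}

int main(void)
{
	uint32_t type, idx;

	handle_decode(handle_encode(2, 7), &type, &idx);
	printf("type=%u idx=%u\n", type, idx);
	return 0;
}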
|
|
|
|
|
2022-10-20 15:41:38 +00:00
|
|
|
static __rte_always_inline int
|
|
|
|
flow_hw_mhdr_cmd_is_nop(const struct mlx5_modification_cmd *cmd)
|
|
|
|
{
|
|
|
|
struct mlx5_modification_cmd cmd_he = {
|
|
|
|
.data0 = rte_be_to_cpu_32(cmd->data0),
|
|
|
|
.data1 = 0,
|
|
|
|
};
|
|
|
|
|
|
|
|
return cmd_he.action_type == MLX5_MODIFICATION_TYPE_NOP;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
 * Construct the per-flow modify header commands for a MODIFY_FIELD action.
 *
 * For an action template that contains a non-shared MODIFY_FIELD action,
 * the immediate value (or the pointed-to value) has to be copied into the
 * flow's modify header commands during flow creation.
 *
 * @param[in] job
 *   Pointer to job descriptor.
 * @param[in] act_data
 *   Pointer to the recorded action construct data.
 * @param[in] hw_acts
 *   Pointer to translated actions from template.
 * @param[in] action
 *   Pointer to the rte_flow MODIFY_FIELD action.
 *
 * @return
 *   0 on success, negative value otherwise.
 */
|
|
|
|
static __rte_always_inline int
|
|
|
|
flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
|
|
|
|
struct mlx5_action_construct_data *act_data,
|
|
|
|
const struct mlx5_hw_actions *hw_acts,
|
|
|
|
const struct rte_flow_action *action)
|
|
|
|
{
|
|
|
|
const struct rte_flow_action_modify_field *mhdr_action = action->conf;
|
|
|
|
uint8_t values[16] = { 0 };
|
|
|
|
unaligned_uint32_t *value_p;
|
|
|
|
uint32_t i;
|
|
|
|
struct field_modify_info *field;
|
|
|
|
|
|
|
|
if (!hw_acts->mhdr)
|
|
|
|
return -1;
|
|
|
|
if (hw_acts->mhdr->shared || act_data->modify_header.shared)
|
|
|
|
return 0;
|
|
|
|
MLX5_ASSERT(mhdr_action->operation == RTE_FLOW_MODIFY_SET ||
|
|
|
|
mhdr_action->operation == RTE_FLOW_MODIFY_ADD);
|
|
|
|
if (mhdr_action->src.field != RTE_FLOW_FIELD_VALUE &&
|
|
|
|
mhdr_action->src.field != RTE_FLOW_FIELD_POINTER)
|
|
|
|
return 0;
|
|
|
|
if (mhdr_action->src.field == RTE_FLOW_FIELD_VALUE)
|
|
|
|
rte_memcpy(values, &mhdr_action->src.value, sizeof(values));
|
|
|
|
else
|
|
|
|
rte_memcpy(values, mhdr_action->src.pvalue, sizeof(values));
|
|
|
|
if (mhdr_action->dst.field == RTE_FLOW_FIELD_META ||
|
2022-10-20 15:41:40 +00:00
|
|
|
mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
|
|
|
|
mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
|
2022-10-20 15:41:38 +00:00
|
|
|
value_p = (unaligned_uint32_t *)values;
|
|
|
|
*value_p = rte_cpu_to_be_32(*value_p);
|
|
|
|
} else if (mhdr_action->dst.field == RTE_FLOW_FIELD_GTP_PSC_QFI) {
|
|
|
|
uint32_t tmp;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* QFI is passed as a uint8_t integer, but it is accessed through
|
|
|
|
* a 2nd least significant byte of a 32-bit field in modify header command.
|
|
|
|
*/
|
|
|
|
tmp = values[0];
|
|
|
|
value_p = (unaligned_uint32_t *)values;
|
|
|
|
*value_p = rte_cpu_to_be_32(tmp << 8);
|
|
|
|
}
|
|
|
|
i = act_data->modify_header.mhdr_cmds_off;
|
|
|
|
field = act_data->modify_header.field;
|
|
|
|
do {
|
|
|
|
uint32_t off_b;
|
|
|
|
uint32_t mask;
|
|
|
|
uint32_t data;
|
|
|
|
const uint8_t *mask_src;
|
|
|
|
|
|
|
|
if (i >= act_data->modify_header.mhdr_cmds_end)
|
|
|
|
return -1;
|
|
|
|
if (flow_hw_mhdr_cmd_is_nop(&job->mhdr_cmd[i])) {
|
|
|
|
++i;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
mask_src = (const uint8_t *)act_data->modify_header.mask;
|
|
|
|
mask = flow_dv_fetch_field(mask_src + field->offset, field->size);
|
|
|
|
if (!mask) {
|
|
|
|
++field;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
off_b = rte_bsf32(mask);
|
|
|
|
data = flow_dv_fetch_field(values + field->offset, field->size);
|
|
|
|
data = (data & mask) >> off_b;
|
|
|
|
job->mhdr_cmd[i++].data1 = rte_cpu_to_be_32(data);
|
|
|
|
++field;
|
|
|
|
} while (field->size);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:45 +00:00
|
|
|
/**
 * Construct flow action array.
 *
 * For an action template that contains dynamic actions, these actions
 * need to be updated according to the rte_flow actions during flow
 * creation.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] job
 *   Pointer to job descriptor.
 * @param[in] hw_at
 *   Hardware action template, holding the translated DR actions.
 * @param[in] it_idx
 *   Item template index the action template refers to.
 * @param[in] actions
 *   Array of rte_flow actions to be checked.
 * @param[in] rule_acts
 *   Array of DR rule actions to be used during flow creation.
 * @param[in] queue
 *   The flow creation queue index.
 *
 * @return
 *   0 on success, negative value otherwise and rte_errno is set.
 */
|
|
|
|
static __rte_always_inline int
|
2022-02-24 13:40:47 +00:00
|
|
|
flow_hw_actions_construct(struct rte_eth_dev *dev,
|
|
|
|
struct mlx5_hw_q_job *job,
|
2022-10-20 15:41:43 +00:00
|
|
|
const struct mlx5_hw_action_template *hw_at,
|
2022-02-24 13:40:50 +00:00
|
|
|
const uint8_t it_idx,
|
2022-02-24 13:40:45 +00:00
|
|
|
const struct rte_flow_action actions[],
|
|
|
|
struct mlx5dr_rule_action *rule_acts,
|
2022-10-20 15:41:42 +00:00
|
|
|
uint32_t queue)
|
2022-02-24 13:40:45 +00:00
|
|
|
{
|
2022-10-20 15:41:39 +00:00
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-02-24 13:40:47 +00:00
|
|
|
struct rte_flow_template_table *table = job->flow->table;
|
|
|
|
struct mlx5_action_construct_data *act_data;
|
2022-10-20 15:41:43 +00:00
|
|
|
const struct rte_flow_actions_template *at = hw_at->action_template;
|
|
|
|
const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
|
2022-02-24 13:40:47 +00:00
|
|
|
const struct rte_flow_action *action;
|
2022-02-24 13:40:51 +00:00
|
|
|
const struct rte_flow_action_raw_encap *raw_encap_data;
|
|
|
|
const struct rte_flow_item *enc_item = NULL;
|
2022-10-20 15:41:39 +00:00
|
|
|
const struct rte_flow_action_ethdev *port_action = NULL;
|
2022-10-20 15:41:41 +00:00
|
|
|
const struct rte_flow_action_meter *meter = NULL;
|
2022-02-24 13:40:51 +00:00
|
|
|
uint8_t *buf = job->encap_data;
|
2022-02-24 13:40:47 +00:00
|
|
|
struct rte_flow_attr attr = {
|
|
|
|
.ingress = 1,
|
|
|
|
};
|
2022-02-24 13:40:48 +00:00
|
|
|
uint32_t ft_flag;
|
2022-10-20 15:41:37 +00:00
|
|
|
size_t encap_len = 0;
|
2022-10-20 15:41:38 +00:00
|
|
|
int ret;
|
2022-10-20 15:41:41 +00:00
|
|
|
struct mlx5_aso_mtr *mtr;
|
|
|
|
uint32_t mtr_id;
|
2022-02-24 13:40:45 +00:00
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
|
2022-02-24 13:40:47 +00:00
|
|
|
attr.group = table->grp->group_id;
|
2022-02-24 13:40:48 +00:00
|
|
|
ft_flag = mlx5_hw_act_flag[!!table->grp->group_id][table->type];
|
2022-02-24 13:40:47 +00:00
|
|
|
if (table->type == MLX5DR_TABLE_TYPE_FDB) {
|
|
|
|
attr.transfer = 1;
|
|
|
|
attr.ingress = 1;
|
|
|
|
} else if (table->type == MLX5DR_TABLE_TYPE_NIC_TX) {
|
|
|
|
attr.egress = 1;
|
|
|
|
attr.ingress = 0;
|
|
|
|
} else {
|
|
|
|
attr.ingress = 1;
|
|
|
|
}
|
2022-10-20 15:41:38 +00:00
|
|
|
if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) {
|
|
|
|
uint16_t pos = hw_acts->mhdr->pos;
|
|
|
|
|
|
|
|
if (!hw_acts->mhdr->shared) {
|
|
|
|
rule_acts[pos].modify_header.offset =
|
|
|
|
job->flow->idx - 1;
|
|
|
|
rule_acts[pos].modify_header.data =
|
|
|
|
(uint8_t *)job->mhdr_cmd;
|
|
|
|
rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
|
|
|
|
sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
|
|
|
|
}
|
|
|
|
}
|
2022-02-24 13:40:47 +00:00
|
|
|
LIST_FOREACH(act_data, &hw_acts->act_list, next) {
|
|
|
|
uint32_t jump_group;
|
2022-02-24 13:40:49 +00:00
|
|
|
uint32_t tag;
|
2022-02-24 13:40:50 +00:00
|
|
|
uint64_t item_flags;
|
2022-02-24 13:40:47 +00:00
|
|
|
struct mlx5_hw_jump_action *jump;
|
2022-02-24 13:40:48 +00:00
|
|
|
struct mlx5_hrxq *hrxq;
|
2022-10-20 15:41:44 +00:00
|
|
|
uint32_t ct_idx;
|
2022-10-20 15:41:42 +00:00
|
|
|
cnt_id_t cnt_id;
|
2022-02-24 13:40:47 +00:00
|
|
|
|
|
|
|
action = &actions[act_data->action_src];
|
|
|
|
MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT ||
|
|
|
|
(int)action->type == act_data->type);
|
2022-02-24 13:40:50 +00:00
|
|
|
switch (act_data->type) {
|
2022-02-24 13:40:45 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_INDIRECT:
|
2022-02-24 13:40:50 +00:00
|
|
|
if (flow_hw_shared_action_construct
|
2022-10-20 15:41:44 +00:00
|
|
|
(dev, queue, action, table, it_idx,
|
2022-02-24 13:40:50 +00:00
|
|
|
&rule_acts[act_data->action_dst]))
|
|
|
|
return -1;
|
2022-02-24 13:40:45 +00:00
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VOID:
|
|
|
|
break;
|
2022-02-24 13:40:49 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_MARK:
|
|
|
|
tag = mlx5_flow_mark_set
|
|
|
|
(((const struct rte_flow_action_mark *)
|
|
|
|
(action->conf))->id);
|
|
|
|
rule_acts[act_data->action_dst].tag.value = tag;
|
|
|
|
break;
|
2022-02-24 13:40:47 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_JUMP:
|
|
|
|
jump_group = ((const struct rte_flow_action_jump *)
|
|
|
|
action->conf)->group;
|
|
|
|
jump = flow_hw_jump_action_register
|
2022-10-20 15:41:40 +00:00
|
|
|
(dev, &table->cfg, jump_group, NULL);
|
2022-02-24 13:40:47 +00:00
|
|
|
if (!jump)
|
|
|
|
return -1;
|
|
|
|
rule_acts[act_data->action_dst].action =
|
|
|
|
(!!attr.group) ? jump->hws_action : jump->root_action;
|
|
|
|
job->flow->jump = jump;
|
|
|
|
job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
|
2022-02-24 13:40:45 +00:00
|
|
|
break;
|
2022-02-24 13:40:48 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_RSS:
|
|
|
|
case RTE_FLOW_ACTION_TYPE_QUEUE:
|
|
|
|
hrxq = flow_hw_tir_action_register(dev,
|
|
|
|
ft_flag,
|
|
|
|
action);
|
|
|
|
if (!hrxq)
|
|
|
|
return -1;
|
|
|
|
rule_acts[act_data->action_dst].action = hrxq->action;
|
|
|
|
job->flow->hrxq = hrxq;
|
|
|
|
job->flow->fate_type = MLX5_FLOW_FATE_QUEUE;
|
|
|
|
break;
|
2022-02-24 13:40:50 +00:00
|
|
|
case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
|
|
|
|
item_flags = table->its[it_idx]->item_flags;
|
|
|
|
if (flow_hw_shared_action_get
|
|
|
|
(dev, act_data, item_flags,
|
|
|
|
&rule_acts[act_data->action_dst]))
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-02-24 13:40:51 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
|
|
|
|
enc_item = ((const struct rte_flow_action_vxlan_encap *)
|
|
|
|
action->conf)->definition;
|
2022-10-20 15:41:37 +00:00
|
|
|
if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
|
|
|
|
return -1;
|
2022-02-24 13:40:51 +00:00
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
|
|
|
|
enc_item = ((const struct rte_flow_action_nvgre_encap *)
|
|
|
|
action->conf)->definition;
|
2022-10-20 15:41:37 +00:00
|
|
|
if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
|
|
|
|
return -1;
|
2022-02-24 13:40:51 +00:00
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
|
|
|
|
raw_encap_data =
|
|
|
|
(const struct rte_flow_action_raw_encap *)
|
|
|
|
action->conf;
|
2022-10-20 15:41:37 +00:00
|
|
|
rte_memcpy((void *)buf, raw_encap_data->data, act_data->encap.len);
|
2022-02-24 13:40:51 +00:00
|
|
|
MLX5_ASSERT(raw_encap_data->size ==
|
|
|
|
act_data->encap.len);
|
|
|
|
break;
|
2022-10-20 15:41:38 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
|
|
|
|
ret = flow_hw_modify_field_construct(job,
|
|
|
|
act_data,
|
|
|
|
hw_acts,
|
|
|
|
action);
|
|
|
|
if (ret)
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-10-20 15:41:39 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
|
|
|
|
port_action = action->conf;
|
|
|
|
if (!priv->hw_vport[port_action->port_id])
|
|
|
|
return -1;
|
|
|
|
rule_acts[act_data->action_dst].action =
|
|
|
|
priv->hw_vport[port_action->port_id];
|
|
|
|
break;
|
2022-10-20 15:41:41 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_METER:
|
|
|
|
meter = action->conf;
|
|
|
|
mtr_id = meter->mtr_id;
|
|
|
|
mtr = mlx5_aso_meter_by_idx(priv, mtr_id);
|
|
|
|
rule_acts[act_data->action_dst].action =
|
|
|
|
priv->mtr_bulk.action;
|
|
|
|
rule_acts[act_data->action_dst].aso_meter.offset =
|
|
|
|
mtr->offset;
|
|
|
|
jump = flow_hw_jump_action_register
|
|
|
|
(dev, &table->cfg, mtr->fm.group, NULL);
|
|
|
|
if (!jump)
|
|
|
|
return -1;
|
|
|
|
MLX5_ASSERT
|
|
|
|
(!rule_acts[act_data->action_dst + 1].action);
|
|
|
|
rule_acts[act_data->action_dst + 1].action =
|
|
|
|
(!!attr.group) ? jump->hws_action :
|
|
|
|
jump->root_action;
|
|
|
|
job->flow->jump = jump;
|
|
|
|
job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
|
|
|
|
if (mlx5_aso_mtr_wait(priv->sh, mtr))
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-10-20 15:41:42 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_COUNT:
|
|
|
|
ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, &queue,
|
|
|
|
&cnt_id);
|
|
|
|
if (ret != 0)
|
|
|
|
return ret;
|
|
|
|
ret = mlx5_hws_cnt_pool_get_action_offset
|
|
|
|
(priv->hws_cpool,
|
|
|
|
cnt_id,
|
|
|
|
&rule_acts[act_data->action_dst].action,
|
|
|
|
&rule_acts[act_data->action_dst].counter.offset
|
|
|
|
);
|
|
|
|
if (ret != 0)
|
|
|
|
return ret;
|
|
|
|
job->flow->cnt_id = cnt_id;
|
|
|
|
break;
|
|
|
|
case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
|
|
|
|
ret = mlx5_hws_cnt_pool_get_action_offset
|
|
|
|
(priv->hws_cpool,
|
|
|
|
act_data->shared_counter.id,
|
|
|
|
&rule_acts[act_data->action_dst].action,
|
|
|
|
&rule_acts[act_data->action_dst].counter.offset
|
|
|
|
);
|
|
|
|
if (ret != 0)
|
|
|
|
return ret;
|
|
|
|
job->flow->cnt_id = act_data->shared_counter.id;
|
|
|
|
break;
|
2022-10-20 15:41:44 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_CONNTRACK:
|
|
|
|
ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
|
|
|
|
((uint32_t)(uintptr_t)action->conf);
|
|
|
|
if (flow_hw_ct_compile(dev, queue, ct_idx,
|
|
|
|
&rule_acts[act_data->action_dst]))
|
|
|
|
return -1;
|
|
|
|
break;
|
2022-02-24 13:40:45 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2022-10-20 15:41:37 +00:00
|
|
|
if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
|
2022-02-24 13:40:51 +00:00
|
|
|
rule_acts[hw_acts->encap_decap_pos].reformat.offset =
|
|
|
|
job->flow->idx - 1;
|
|
|
|
rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
|
|
|
|
}
|
2022-10-20 15:41:42 +00:00
|
|
|
if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
|
|
|
|
job->flow->cnt_id = hw_acts->cnt_id;
|
2022-02-24 13:40:45 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
static const struct rte_flow_item *
|
|
|
|
flow_hw_get_rule_items(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_template_table *table,
|
|
|
|
const struct rte_flow_item items[],
|
|
|
|
uint8_t pattern_template_index,
|
|
|
|
struct mlx5_hw_q_job *job)
|
|
|
|
{
|
|
|
|
if (table->its[pattern_template_index]->implicit_port) {
|
|
|
|
const struct rte_flow_item *curr_item;
|
|
|
|
unsigned int nb_items;
|
|
|
|
bool found_end;
|
|
|
|
unsigned int i;
|
|
|
|
|
|
|
|
/* Count number of pattern items. */
|
|
|
|
nb_items = 0;
|
|
|
|
found_end = false;
|
|
|
|
for (curr_item = items; !found_end; ++curr_item) {
|
|
|
|
++nb_items;
|
|
|
|
if (curr_item->type == RTE_FLOW_ITEM_TYPE_END)
|
|
|
|
found_end = true;
|
|
|
|
}
|
|
|
|
/* Prepend represented port item. */
|
|
|
|
job->port_spec = (struct rte_flow_item_ethdev){
|
|
|
|
.port_id = dev->data->port_id,
|
|
|
|
};
|
|
|
|
job->items[0] = (struct rte_flow_item){
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
|
|
|
|
.spec = &job->port_spec,
|
|
|
|
};
|
|
|
|
found_end = false;
|
|
|
|
for (i = 1; i < MLX5_HW_MAX_ITEMS && i - 1 < nb_items; ++i) {
|
|
|
|
job->items[i] = items[i - 1];
|
|
|
|
if (items[i - 1].type == RTE_FLOW_ITEM_TYPE_END) {
|
|
|
|
found_end = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (i >= MLX5_HW_MAX_ITEMS && !found_end) {
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
return job->items;
|
|
|
|
}
|
|
|
|
return items;
|
|
|
|
}
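The doc comment and function that follow implement the PMD side of the asynchronous flow API. For context, here is a hedged application-side sketch of the enqueue/push/pull sequence those operations serve; the function name insert_one_flow is illustrative, and the port, queue, template table, items and actions are assumed to have been prepared with rte_flow_configure() and the template/table API beforehand (template index 0 is used for both).

#include <stdint.h>
#include <rte_common.h>
#include <rte_flow.h>

static int
insert_one_flow(uint16_t port_id, uint32_t queue,
		struct rte_flow_template_table *table,
		const struct rte_flow_item items[],
		const struct rte_flow_action actions[])
{
	const struct rte_flow_op_attr op_attr = { .postpone = 1 };
	struct rte_flow_op_result res[4];
	struct rte_flow_error error;
	struct rte_flow *flow;
	int i, n;

	/* Enqueue the creation; with .postpone set nothing reaches HW yet. */
	flow = rte_flow_async_create(port_id, queue, &op_attr, table,
				     items, 0, actions, 0, NULL, &error);
	if (flow == NULL)
		return -1;
	/* Push the batched operations to the hardware... */
	if (rte_flow_push(port_id, queue, &error) < 0)
		return -1;
	/* ...then poll the queue to learn whether the creation succeeded. */
	n = rte_flow_pull(port_id, queue, res, RTE_DIM(res), &error);
	if (n < 0)
		return -1;
	for (i = 0; i < n; i++)
		if (res[i].status != RTE_FLOW_OP_SUCCESS)
			return -1;
	return 0;
}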
|
|
|
|
|
2022-02-24 13:40:45 +00:00
|
|
|
/**
|
|
|
|
* Enqueue HW steering flow creation.
|
|
|
|
*
|
|
|
|
* The flow will be applied to the HW only if the postpone bit is not set or
|
|
|
|
* the extra push function is called.
|
|
|
|
* The flow creation status should be checked from dequeue result.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The queue to create the flow.
|
|
|
|
* @param[in] attr
|
|
|
|
* Pointer to the flow operation attributes.
|
|
|
|
* @param[in] items
|
|
|
|
* Items with flow spec value.
|
|
|
|
* @param[in] pattern_template_index
|
|
|
|
* Index of the item pattern template, from the table, that the flow follows.
|
|
|
|
* @param[in] actions
|
|
|
|
* Action with flow spec value.
|
|
|
|
* @param[in] action_template_index
|
|
|
|
* Index of the action template, from the table, that the flow follows.
|
|
|
|
* @param[in] user_data
|
|
|
|
* Pointer to the user_data.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Flow pointer on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static struct rte_flow *
|
|
|
|
flow_hw_async_flow_create(struct rte_eth_dev *dev,
|
|
|
|
uint32_t queue,
|
|
|
|
const struct rte_flow_op_attr *attr,
|
|
|
|
struct rte_flow_template_table *table,
|
|
|
|
const struct rte_flow_item items[],
|
|
|
|
uint8_t pattern_template_index,
|
|
|
|
const struct rte_flow_action actions[],
|
|
|
|
uint8_t action_template_index,
|
|
|
|
void *user_data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5dr_rule_attr rule_attr = {
|
|
|
|
.queue_id = queue,
|
|
|
|
.user_data = user_data,
|
|
|
|
.burst = attr->postpone,
|
|
|
|
};
|
|
|
|
struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
|
|
|
|
struct rte_flow_hw *flow;
|
|
|
|
struct mlx5_hw_q_job *job;
|
2022-10-20 15:41:39 +00:00
|
|
|
const struct rte_flow_item *rule_items;
|
2022-10-20 15:41:43 +00:00
|
|
|
uint32_t flow_idx;
|
2022-02-24 13:40:45 +00:00
|
|
|
int ret;
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
if (unlikely(!dev->data->dev_started)) {
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto error;
|
|
|
|
}
|
2022-02-24 13:40:45 +00:00
|
|
|
if (unlikely(!priv->hw_q[queue].job_idx)) {
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
|
|
|
|
if (!flow)
|
|
|
|
goto error;
|
|
|
|
/*
|
|
|
|
* Set the table here in order to know the destination table
|
|
|
|
* when freeing the flow afterwards.
|
|
|
|
*/
|
|
|
|
flow->table = table;
|
|
|
|
flow->idx = flow_idx;
|
|
|
|
job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
|
|
|
|
/*
|
|
|
|
* Set the job type here in order to know if the flow memory
|
|
|
|
* should be freed or not when getting the result from dequeue.
|
|
|
|
*/
|
|
|
|
job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
|
|
|
|
job->flow = flow;
|
|
|
|
job->user_data = user_data;
|
|
|
|
rule_attr.user_data = job;
|
2022-10-20 15:41:43 +00:00
|
|
|
/*
|
|
|
|
* Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
|
|
|
|
* insertion hints.
|
|
|
|
*/
|
|
|
|
MLX5_ASSERT(flow_idx > 0);
|
|
|
|
rule_attr.rule_idx = flow_idx - 1;
|
2022-10-20 15:41:40 +00:00
|
|
|
/*
|
|
|
|
* Construct the flow actions based on the input actions.
|
|
|
|
* The implicitly appended action is always fixed, like metadata
|
|
|
|
* copy action from FDB to NIC Rx.
|
|
|
|
* No need to copy and construct a new "actions" list based on the
|
|
|
|
* user's input, in order to save the cost.
|
|
|
|
*/
|
2022-10-20 15:41:43 +00:00
|
|
|
if (flow_hw_actions_construct(dev, job, &table->ats[action_template_index],
|
|
|
|
pattern_template_index, actions, rule_acts, queue)) {
|
2022-10-20 15:41:39 +00:00
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto free;
|
|
|
|
}
|
|
|
|
rule_items = flow_hw_get_rule_items(dev, table, items,
|
|
|
|
pattern_template_index, job);
|
|
|
|
if (!rule_items)
|
|
|
|
goto free;
|
2022-02-24 13:40:45 +00:00
|
|
|
ret = mlx5dr_rule_create(table->matcher,
|
2022-10-20 15:41:43 +00:00
|
|
|
pattern_template_index, rule_items,
|
2022-10-20 15:57:48 +00:00
|
|
|
action_template_index, rule_acts,
|
|
|
|
&rule_attr, (struct mlx5dr_rule *)flow->rule);
|
2022-02-24 13:40:45 +00:00
|
|
|
if (likely(!ret))
|
|
|
|
return (struct rte_flow *)flow;
|
2022-10-20 15:41:39 +00:00
|
|
|
free:
|
2022-02-24 13:40:45 +00:00
|
|
|
/* Flow creation failed, return the descriptor and flow memory. */
|
|
|
|
mlx5_ipool_free(table->flow, flow_idx);
|
|
|
|
priv->hw_q[queue].job_idx++;
|
|
|
|
error:
|
|
|
|
rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to create rte flow");
|
|
|
|
return NULL;
|
|
|
|
}
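For context, the sketch below shows how an application might reach this entry point through the generic asynchronous rte_flow API (rte_flow_async_create/rte_flow_push); the template table, pattern and actions arrays are assumed to have been prepared elsewhere, and template index 0 is used only as an example:

#include <rte_flow.h>
#include <rte_errno.h>

/* Enqueue one flow with the postpone bit set, then push the queue so the
 * rule actually reaches the HW; the completion status must still be read
 * back with rte_flow_pull(). */
static int
enqueue_one_flow(uint16_t port_id, uint32_t queue,
		 struct rte_flow_template_table *table,
		 const struct rte_flow_item pattern[],
		 const struct rte_flow_action actions[],
		 void *user_data)
{
	const struct rte_flow_op_attr op_attr = { .postpone = 1 };
	struct rte_flow_error error;
	struct rte_flow *flow;

	flow = rte_flow_async_create(port_id, queue, &op_attr, table,
				     pattern, 0, actions, 0,
				     user_data, &error);
	if (!flow)
		return -rte_errno;
	return rte_flow_push(port_id, queue, &error);
}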
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Enqueue HW steering flow destruction.
|
|
|
|
*
|
|
|
|
* The flow will be applied to the HW only if the postpone bit is not set or
|
|
|
|
* the extra push function is called.
|
|
|
|
* The flow destruction status should be checked from the dequeue result.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The queue to destroy the flow.
|
|
|
|
* @param[in] attr
|
|
|
|
* Pointer to the flow operation attributes.
|
|
|
|
* @param[in] flow
|
|
|
|
* Pointer to the flow to be destroyed.
|
|
|
|
* @param[in] user_data
|
|
|
|
* Pointer to the user_data.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
|
|
|
|
uint32_t queue,
|
|
|
|
const struct rte_flow_op_attr *attr,
|
|
|
|
struct rte_flow *flow,
|
|
|
|
void *user_data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5dr_rule_attr rule_attr = {
|
|
|
|
.queue_id = queue,
|
|
|
|
.user_data = user_data,
|
|
|
|
.burst = attr->postpone,
|
|
|
|
};
|
|
|
|
struct rte_flow_hw *fh = (struct rte_flow_hw *)flow;
|
|
|
|
struct mlx5_hw_q_job *job;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (unlikely(!priv->hw_q[queue].job_idx)) {
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
|
|
|
|
job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
|
|
|
|
job->user_data = user_data;
|
|
|
|
job->flow = fh;
|
|
|
|
rule_attr.user_data = job;
|
2022-10-20 15:57:48 +00:00
|
|
|
ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr);
|
2022-02-24 13:40:45 +00:00
|
|
|
if (likely(!ret))
|
|
|
|
return 0;
|
|
|
|
priv->hw_q[queue].job_idx++;
|
|
|
|
error:
|
|
|
|
return rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to create rte flow");
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Pull the enqueued flows.
|
|
|
|
*
|
|
|
|
* For flows enqueued from creation/destruction, the status should be
|
|
|
|
* checked from the dequeue result.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The queue to pull the result.
|
|
|
|
* @param[in/out] res
|
|
|
|
* Array to save the results.
|
|
|
|
* @param[in] n_res
|
|
|
|
* Number of result slots available in the array.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Result number on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_pull(struct rte_eth_dev *dev,
|
|
|
|
uint32_t queue,
|
|
|
|
struct rte_flow_op_result res[],
|
|
|
|
uint16_t n_res,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_hw_q_job *job;
|
|
|
|
int ret, i;
|
|
|
|
|
|
|
|
ret = mlx5dr_send_queue_poll(priv->dr_ctx, queue, res, n_res);
|
|
|
|
if (ret < 0)
|
|
|
|
return rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to query flow queue");
|
|
|
|
for (i = 0; i < ret; i++) {
|
|
|
|
job = (struct mlx5_hw_q_job *)res[i].user_data;
|
|
|
|
/* Restore user data. */
|
|
|
|
res[i].user_data = job->user_data;
|
2022-02-24 13:40:47 +00:00
|
|
|
if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
|
|
|
|
if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
|
|
|
|
flow_hw_jump_release(dev, job->flow->jump);
|
2022-02-24 13:40:48 +00:00
|
|
|
else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE)
|
|
|
|
mlx5_hrxq_obj_release(dev, job->flow->hrxq);
|
net/mlx5: support flow counter action for HWS
2022-10-20 15:41:42 +00:00
|
|
|
if (mlx5_hws_cnt_id_valid(job->flow->cnt_id) &&
|
|
|
|
mlx5_hws_cnt_is_shared
|
|
|
|
(priv->hws_cpool, job->flow->cnt_id) == false) {
|
|
|
|
mlx5_hws_cnt_pool_put(priv->hws_cpool, &queue,
|
|
|
|
&job->flow->cnt_id);
|
|
|
|
job->flow->cnt_id = 0;
|
|
|
|
}
|
2022-02-24 13:40:45 +00:00
|
|
|
mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
|
2022-02-24 13:40:47 +00:00
|
|
|
}
|
2022-02-24 13:40:45 +00:00
|
|
|
priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
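A matching application-side polling loop might look as follows (a sketch only; the burst size and the error-handling policy are arbitrary choices here, and a real application would bound the loop or time out):

#include <rte_common.h>
#include <rte_flow.h>

/* Pull completions from one queue until the expected number of enqueued
 * operations has been accounted for, counting the failed ones. */
static int
drain_queue(uint16_t port_id, uint32_t queue, uint32_t expected)
{
	struct rte_flow_op_result res[32];
	struct rte_flow_error error;
	uint32_t done = 0, failed = 0;
	int n, i;

	while (done < expected) {
		n = rte_flow_pull(port_id, queue, res, RTE_DIM(res), &error);
		if (n < 0)
			return n;
		for (i = 0; i < n; i++)
			if (res[i].status == RTE_FLOW_OP_ERROR)
				failed++;
		done += n;
	}
	return failed ? -1 : 0;
}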
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Push the enqueued flows to HW.
|
|
|
|
*
|
|
|
|
* Force apply all the enqueued flows to the HW.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The queue to push the flow.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_push(struct rte_eth_dev *dev,
|
|
|
|
uint32_t queue,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = mlx5dr_send_queue_action(priv->dr_ctx, queue,
|
|
|
|
MLX5DR_SEND_QUEUE_ACTION_DRAIN);
|
|
|
|
if (ret) {
|
|
|
|
rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to push flows");
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:46 +00:00
|
|
|
/**
|
|
|
|
* Drain the enqueued flows' completion.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The queue to pull the flow.
|
|
|
|
* @param[in] pending_rules
|
|
|
|
* The pending flow number.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
__flow_hw_pull_comp(struct rte_eth_dev *dev,
|
|
|
|
uint32_t queue,
|
|
|
|
uint32_t pending_rules,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct rte_flow_op_result comp[BURST_THR];
|
|
|
|
int ret, i, empty_loop = 0;
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
ret = flow_hw_push(dev, queue, error);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
2022-02-24 13:40:46 +00:00
|
|
|
while (pending_rules) {
|
|
|
|
ret = flow_hw_pull(dev, queue, comp, BURST_THR, error);
|
|
|
|
if (ret < 0)
|
|
|
|
return -1;
|
|
|
|
if (!ret) {
|
|
|
|
rte_delay_us_sleep(20000);
|
|
|
|
if (++empty_loop > 5) {
|
|
|
|
DRV_LOG(WARNING, "No available dequeue, quit.");
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
for (i = 0; i < ret; i++) {
|
|
|
|
if (comp[i].status == RTE_FLOW_OP_ERROR)
|
|
|
|
DRV_LOG(WARNING, "Flow flush get error CQE.");
|
|
|
|
}
|
|
|
|
if ((uint32_t)ret > pending_rules) {
|
|
|
|
DRV_LOG(WARNING, "Flow flush get extra CQE.");
|
|
|
|
return rte_flow_error_set(error, ERANGE,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"get extra CQE");
|
|
|
|
}
|
|
|
|
pending_rules -= ret;
|
|
|
|
empty_loop = 0;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Flush created flows.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
flow_hw_q_flow_flush(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_hw_q *hw_q;
|
|
|
|
struct rte_flow_template_table *tbl;
|
|
|
|
struct rte_flow_hw *flow;
|
|
|
|
struct rte_flow_op_attr attr = {
|
|
|
|
.postpone = 0,
|
|
|
|
};
|
|
|
|
uint32_t pending_rules = 0;
|
|
|
|
uint32_t queue;
|
|
|
|
uint32_t fidx;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Ensure that all the enqueued flow
|
|
|
|
* creation/destruction jobs are pushed and dequeued in case the user forgot to
|
|
|
|
* dequeue; otherwise the enqueued created flows would be
|
|
|
|
* leaked. Forgotten dequeues would also cause the
|
|
|
|
* flow flush to get unexpected extra CQEs and pending_rules
|
|
|
|
* to go negative.
|
|
|
|
*/
|
|
|
|
for (queue = 0; queue < priv->nb_queue; queue++) {
|
|
|
|
hw_q = &priv->hw_q[queue];
|
|
|
|
if (__flow_hw_pull_comp(dev, queue, hw_q->size - hw_q->job_idx,
|
|
|
|
error))
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
/* Flush flows table by table, using MLX5_DEFAULT_FLUSH_QUEUE. */
|
|
|
|
hw_q = &priv->hw_q[MLX5_DEFAULT_FLUSH_QUEUE];
|
|
|
|
LIST_FOREACH(tbl, &priv->flow_hw_tbl, next) {
|
2022-10-20 15:41:40 +00:00
|
|
|
if (!tbl->cfg.external)
|
|
|
|
continue;
|
2022-02-24 13:40:46 +00:00
|
|
|
MLX5_IPOOL_FOREACH(tbl->flow, fidx, flow) {
|
|
|
|
if (flow_hw_async_flow_destroy(dev,
|
|
|
|
MLX5_DEFAULT_FLUSH_QUEUE,
|
|
|
|
&attr,
|
|
|
|
(struct rte_flow *)flow,
|
|
|
|
NULL,
|
|
|
|
error))
|
|
|
|
return -1;
|
|
|
|
pending_rules++;
|
|
|
|
/* Drain completions once the queue size is reached. */
|
|
|
|
if (pending_rules >= hw_q->size) {
|
|
|
|
if (__flow_hw_pull_comp(dev,
|
|
|
|
MLX5_DEFAULT_FLUSH_QUEUE,
|
|
|
|
pending_rules, error))
|
|
|
|
return -1;
|
|
|
|
pending_rules = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/* Drain the remaining completions. */
|
|
|
|
if (pending_rules &&
|
|
|
|
__flow_hw_pull_comp(dev, MLX5_DEFAULT_FLUSH_QUEUE, pending_rules,
|
|
|
|
error))
|
|
|
|
return -1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:44 +00:00
|
|
|
/**
|
|
|
|
* Create flow table.
|
|
|
|
*
|
|
|
|
* The input item and action templates will be bound to the table.
|
|
|
|
* Flow memory will also be allocated. Matcher will be created based
|
|
|
|
* on the item template. Action will be translated to the dedicated
|
|
|
|
* DR action if possible.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
2022-10-20 15:41:40 +00:00
|
|
|
* @param[in] table_cfg
|
|
|
|
* Pointer to the table configuration.
|
2022-02-24 13:40:44 +00:00
|
|
|
* @param[in] item_templates
|
|
|
|
* Item template array to be bound to the table.
|
|
|
|
* @param[in] nb_item_templates
|
|
|
|
* Number of item templates.
|
|
|
|
* @param[in] action_templates
|
|
|
|
* Action template array to be bound to the table.
|
|
|
|
* @param[in] nb_action_templates
|
|
|
|
* Number of action templates.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Table on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_template_table *
|
|
|
|
flow_hw_table_create(struct rte_eth_dev *dev,
|
2022-10-20 15:41:40 +00:00
|
|
|
const struct mlx5_flow_template_table_cfg *table_cfg,
|
2022-02-24 13:40:44 +00:00
|
|
|
struct rte_flow_pattern_template *item_templates[],
|
|
|
|
uint8_t nb_item_templates,
|
|
|
|
struct rte_flow_actions_template *action_templates[],
|
|
|
|
uint8_t nb_action_templates,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5dr_matcher_attr matcher_attr = {0};
|
|
|
|
struct rte_flow_template_table *tbl = NULL;
|
|
|
|
struct mlx5_flow_group *grp;
|
|
|
|
struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
|
2022-10-20 15:41:43 +00:00
|
|
|
struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
|
2022-10-20 15:41:40 +00:00
|
|
|
const struct rte_flow_template_table_attr *attr = &table_cfg->attr;
|
2022-02-24 13:40:44 +00:00
|
|
|
struct rte_flow_attr flow_attr = attr->flow_attr;
|
|
|
|
struct mlx5_flow_cb_ctx ctx = {
|
|
|
|
.dev = dev,
|
|
|
|
.error = error,
|
|
|
|
.data = &flow_attr,
|
|
|
|
};
|
|
|
|
struct mlx5_indexed_pool_config cfg = {
|
2022-10-20 15:57:48 +00:00
|
|
|
.size = sizeof(struct rte_flow_hw) + mlx5dr_rule_get_handle_size(),
|
2022-02-24 13:40:44 +00:00
|
|
|
.trunk_size = 1 << 12,
|
|
|
|
.per_core_cache = 1 << 13,
|
|
|
|
.need_lock = 1,
|
|
|
|
.release_mem_en = !!priv->sh->config.reclaim_mode,
|
|
|
|
.malloc = mlx5_malloc,
|
|
|
|
.free = mlx5_free,
|
|
|
|
.type = "mlx5_hw_table_flow",
|
|
|
|
};
|
|
|
|
struct mlx5_list_entry *ge;
|
|
|
|
uint32_t i, max_tpl = MLX5_HW_TBL_MAX_ITEM_TEMPLATE;
|
|
|
|
uint32_t nb_flows = rte_align32pow2(attr->nb_flows);
|
2022-10-20 15:41:43 +00:00
|
|
|
bool port_started = !!dev->data->dev_started;
|
2022-02-24 13:40:44 +00:00
|
|
|
int err;
|
|
|
|
|
|
|
|
/* HWS layer accepts only 1 item template with root table. */
|
|
|
|
if (!attr->flow_attr.group)
|
|
|
|
max_tpl = 1;
|
|
|
|
cfg.max_idx = nb_flows;
|
|
|
|
/* For a table with very few flows, disable the cache. */
|
|
|
|
if (nb_flows < cfg.trunk_size) {
|
|
|
|
cfg.per_core_cache = 0;
|
|
|
|
cfg.trunk_size = nb_flows;
|
2022-10-20 15:41:44 +00:00
|
|
|
} else if (nb_flows <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
|
|
|
|
cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
|
2022-02-24 13:40:44 +00:00
|
|
|
}
|
|
|
|
/* Check if too many templates are required. */
|
|
|
|
if (nb_item_templates > max_tpl ||
|
|
|
|
nb_action_templates > MLX5_HW_TBL_MAX_ACTION_TEMPLATE) {
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
/* Allocate the table memory. */
|
|
|
|
tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl), 0, rte_socket_id());
|
|
|
|
if (!tbl)
|
|
|
|
goto error;
|
2022-10-20 15:41:40 +00:00
|
|
|
tbl->cfg = *table_cfg;
|
2022-02-24 13:40:44 +00:00
|
|
|
/* Allocate flow indexed pool. */
|
|
|
|
tbl->flow = mlx5_ipool_create(&cfg);
|
|
|
|
if (!tbl->flow)
|
|
|
|
goto error;
|
|
|
|
/* Register the flow group. */
|
|
|
|
ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx);
|
|
|
|
if (!ge)
|
|
|
|
goto error;
|
|
|
|
grp = container_of(ge, struct mlx5_flow_group, entry);
|
|
|
|
tbl->grp = grp;
|
|
|
|
/* Prepare matcher information. */
|
|
|
|
matcher_attr.priority = attr->flow_attr.priority;
|
2022-10-20 15:41:43 +00:00
|
|
|
matcher_attr.optimize_using_rule_idx = true;
|
2022-02-24 13:40:44 +00:00
|
|
|
matcher_attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
|
|
|
|
matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
|
|
|
|
/* Build the item template. */
|
|
|
|
for (i = 0; i < nb_item_templates; i++) {
|
|
|
|
uint32_t ret;
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
if ((flow_attr.ingress && !item_templates[i]->attr.ingress) ||
|
|
|
|
(flow_attr.egress && !item_templates[i]->attr.egress) ||
|
|
|
|
(flow_attr.transfer && !item_templates[i]->attr.transfer)) {
|
|
|
|
DRV_LOG(ERR, "pattern template and template table attribute mismatch");
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto it_error;
|
|
|
|
}
|
2022-02-24 13:40:44 +00:00
|
|
|
ret = __atomic_add_fetch(&item_templates[i]->refcnt, 1,
|
|
|
|
__ATOMIC_RELAXED);
|
|
|
|
if (ret <= 1) {
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto it_error;
|
|
|
|
}
|
|
|
|
mt[i] = item_templates[i]->mt;
|
|
|
|
tbl->its[i] = item_templates[i];
|
|
|
|
}
|
|
|
|
tbl->nb_item_templates = nb_item_templates;
|
|
|
|
/* Build the action template. */
|
|
|
|
for (i = 0; i < nb_action_templates; i++) {
|
|
|
|
uint32_t ret;
|
|
|
|
|
|
|
|
ret = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
|
|
|
|
__ATOMIC_RELAXED);
|
|
|
|
if (ret <= 1) {
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto at_error;
|
|
|
|
}
|
2022-10-20 15:41:43 +00:00
|
|
|
at[i] = action_templates[i]->tmpl;
|
|
|
|
tbl->ats[i].action_template = action_templates[i];
|
2022-02-24 13:40:47 +00:00
|
|
|
LIST_INIT(&tbl->ats[i].acts.act_list);
|
2022-10-20 15:41:43 +00:00
|
|
|
if (!port_started)
|
|
|
|
continue;
|
|
|
|
err = __flow_hw_actions_translate(dev, &tbl->cfg,
|
|
|
|
&tbl->ats[i].acts,
|
|
|
|
action_templates[i], error);
|
2022-02-24 13:40:44 +00:00
|
|
|
if (err) {
|
|
|
|
i++;
|
|
|
|
goto at_error;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
tbl->nb_action_templates = nb_action_templates;
|
2022-10-20 15:41:43 +00:00
|
|
|
tbl->matcher = mlx5dr_matcher_create
|
|
|
|
(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
|
|
|
|
if (!tbl->matcher)
|
|
|
|
goto at_error;
|
2022-02-24 13:40:44 +00:00
|
|
|
tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
|
|
|
|
(attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
|
|
|
|
MLX5DR_TABLE_TYPE_NIC_RX);
|
2022-10-20 15:41:43 +00:00
|
|
|
if (port_started)
|
|
|
|
LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
|
|
|
|
else
|
|
|
|
LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
|
2022-02-24 13:40:44 +00:00
|
|
|
return tbl;
|
|
|
|
at_error:
|
|
|
|
while (i--) {
|
2022-02-24 13:40:47 +00:00
|
|
|
__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
|
2022-02-24 13:40:44 +00:00
|
|
|
__atomic_sub_fetch(&action_templates[i]->refcnt,
|
|
|
|
1, __ATOMIC_RELAXED);
|
|
|
|
}
|
|
|
|
i = nb_item_templates;
|
|
|
|
it_error:
|
|
|
|
while (i--)
|
|
|
|
__atomic_sub_fetch(&item_templates[i]->refcnt,
|
|
|
|
1, __ATOMIC_RELAXED);
|
|
|
|
error:
|
|
|
|
err = rte_errno;
|
|
|
|
if (tbl) {
|
|
|
|
if (tbl->grp)
|
|
|
|
mlx5_hlist_unregister(priv->sh->groups,
|
|
|
|
&tbl->grp->entry);
|
|
|
|
if (tbl->flow)
|
|
|
|
mlx5_ipool_destroy(tbl->flow);
|
|
|
|
mlx5_free(tbl);
|
|
|
|
}
|
|
|
|
rte_flow_error_set(error, err,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to create rte table");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
/**
|
|
|
|
* Update flow template table.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
flow_hw_table_update(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct rte_flow_template_table *tbl;
|
|
|
|
|
|
|
|
while ((tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo)) != NULL) {
|
|
|
|
if (flow_hw_actions_translate(dev, tbl, error))
|
|
|
|
return -1;
|
|
|
|
LIST_REMOVE(tbl, next);
|
|
|
|
LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
/**
|
|
|
|
* Translates group index specified by the user in @p attr to internal
|
|
|
|
* group index.
|
|
|
|
*
|
|
|
|
* Translation is done by incrementing group index, so group n becomes n + 1.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param[in] cfg
|
|
|
|
* Pointer to the template table configuration.
|
|
|
|
* @param[in] group
|
|
|
|
* Currently used group index (table group or jump destination).
|
|
|
|
* @param[out] table_group
|
|
|
|
* Pointer to output group index.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success. Otherwise, returns negative error code, rte_errno is set
|
|
|
|
* and error structure is filled.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_translate_group(struct rte_eth_dev *dev,
|
|
|
|
const struct mlx5_flow_template_table_cfg *cfg,
|
|
|
|
uint32_t group,
|
|
|
|
uint32_t *table_group,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
const struct rte_flow_attr *flow_attr = &cfg->attr.flow_attr;
|
|
|
|
|
|
|
|
if (priv->sh->config.dv_esw_en && cfg->external && flow_attr->transfer) {
|
|
|
|
if (group > MLX5_HW_MAX_TRANSFER_GROUP)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
|
|
|
|
NULL,
|
|
|
|
"group index not supported");
|
|
|
|
*table_group = group + 1;
|
|
|
|
} else {
|
|
|
|
*table_group = group;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Create flow table.
|
|
|
|
*
|
|
|
|
* This function is a wrapper over @ref flow_hw_table_create(), which translates parameters
|
|
|
|
* provided by user to proper internal values.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param[in] attr
|
|
|
|
* Pointer to the table attributes.
|
|
|
|
* @param[in] item_templates
|
|
|
|
* Item template array to be bound to the table.
|
|
|
|
* @param[in] nb_item_templates
|
|
|
|
* Number of item templates.
|
|
|
|
* @param[in] action_templates
|
|
|
|
* Action template array to be bound to the table.
|
|
|
|
* @param[in] nb_action_templates
|
|
|
|
* Number of action templates.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Table pointer on success. Otherwise, NULL is returned, rte_errno is set
|
|
|
|
* and error structure is filled.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_template_table *
|
|
|
|
flow_hw_template_table_create(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_template_table_attr *attr,
|
|
|
|
struct rte_flow_pattern_template *item_templates[],
|
|
|
|
uint8_t nb_item_templates,
|
|
|
|
struct rte_flow_actions_template *action_templates[],
|
|
|
|
uint8_t nb_action_templates,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
2022-10-20 15:41:43 +00:00
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-10-20 15:41:40 +00:00
|
|
|
struct mlx5_flow_template_table_cfg cfg = {
|
|
|
|
.attr = *attr,
|
|
|
|
.external = true,
|
|
|
|
};
|
|
|
|
uint32_t group = attr->flow_attr.group;
|
|
|
|
|
|
|
|
if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
|
|
|
|
return NULL;
|
2022-10-20 15:41:43 +00:00
|
|
|
if (priv->sh->config.dv_esw_en && cfg.attr.flow_attr.egress) {
|
|
|
|
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL,
|
|
|
|
"egress flows are not supported with HW Steering"
|
|
|
|
" when E-Switch is enabled");
|
|
|
|
return NULL;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
|
|
|
|
action_templates, nb_action_templates, error);
|
|
|
|
}
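The following sketch shows the application-side sequence this wrapper ultimately serves: creating one pattern template and one actions template and binding them into a template table through the generic rte_flow API (attribute values and the flow count are examples only; the item/action/mask arrays are assumed to be prepared elsewhere):

#include <rte_flow.h>

static struct rte_flow_template_table *
make_ingress_table(uint16_t port_id, uint32_t group,
		   const struct rte_flow_item pattern[],
		   const struct rte_flow_action actions[],
		   const struct rte_flow_action masks[])
{
	const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
	const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
	const struct rte_flow_template_table_attr tbl_attr = {
		.flow_attr = { .group = group, .ingress = 1 },
		.nb_flows = 1 << 16,
	};
	struct rte_flow_error error;
	struct rte_flow_pattern_template *pt;
	struct rte_flow_actions_template *at;

	pt = rte_flow_pattern_template_create(port_id, &pt_attr, pattern, &error);
	at = rte_flow_actions_template_create(port_id, &at_attr, actions,
					      masks, &error);
	if (!pt || !at)
		return NULL;
	return rte_flow_template_table_create(port_id, &tbl_attr,
					      &pt, 1, &at, 1, &error);
}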
|
|
|
|
|
2022-02-24 13:40:44 +00:00
|
|
|
/**
|
|
|
|
* Destroy flow table.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] table
|
|
|
|
* Pointer to the table to be destroyed.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, a negative errno value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_table_destroy(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_template_table *table,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
int i;
|
2022-10-20 15:41:39 +00:00
|
|
|
uint32_t fidx = 1;
|
2022-02-24 13:40:44 +00:00
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
/* Build ipool allocated object bitmap. */
|
|
|
|
mlx5_ipool_flush_cache(table->flow);
|
|
|
|
/* Check if ipool has allocated objects. */
|
|
|
|
if (table->refcnt || mlx5_ipool_get_next(table->flow, &fidx)) {
|
2022-02-24 13:40:44 +00:00
|
|
|
DRV_LOG(WARNING, "Table %p is still in using.", (void *)table);
|
|
|
|
return rte_flow_error_set(error, EBUSY,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"table in using");
|
|
|
|
}
|
|
|
|
LIST_REMOVE(table, next);
|
|
|
|
for (i = 0; i < table->nb_item_templates; i++)
|
|
|
|
__atomic_sub_fetch(&table->its[i]->refcnt,
|
|
|
|
1, __ATOMIC_RELAXED);
|
|
|
|
for (i = 0; i < table->nb_action_templates; i++) {
|
2022-02-24 13:40:47 +00:00
|
|
|
__flow_hw_action_template_destroy(dev, &table->ats[i].acts);
|
2022-02-24 13:40:44 +00:00
|
|
|
__atomic_sub_fetch(&table->ats[i].action_template->refcnt,
|
|
|
|
1, __ATOMIC_RELAXED);
|
|
|
|
}
|
|
|
|
mlx5dr_matcher_destroy(table->matcher);
|
|
|
|
mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
|
|
|
|
mlx5_ipool_destroy(table->flow);
|
|
|
|
mlx5_free(table);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:38 +00:00
|
|
|
static bool
|
|
|
|
flow_hw_modify_field_is_used(const struct rte_flow_action_modify_field *action,
|
|
|
|
enum rte_flow_field_id field)
|
|
|
|
{
|
|
|
|
return action->src.field == field || action->dst.field == field;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
|
|
|
|
const struct rte_flow_action *mask,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
const struct rte_flow_action_modify_field *action_conf =
|
|
|
|
action->conf;
|
|
|
|
const struct rte_flow_action_modify_field *mask_conf =
|
|
|
|
mask->conf;
|
|
|
|
|
|
|
|
if (action_conf->operation != mask_conf->operation)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"modify_field operation mask and template are not equal");
|
|
|
|
if (action_conf->dst.field != mask_conf->dst.field)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"destination field mask and template are not equal");
|
|
|
|
if (action_conf->dst.field == RTE_FLOW_FIELD_POINTER ||
|
|
|
|
action_conf->dst.field == RTE_FLOW_FIELD_VALUE)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"immediate value and pointer cannot be used as destination");
|
|
|
|
if (mask_conf->dst.level != UINT32_MAX)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"destination encapsulation level must be fully masked");
|
|
|
|
if (mask_conf->dst.offset != UINT32_MAX)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"destination offset level must be fully masked");
|
|
|
|
if (action_conf->src.field != mask_conf->src.field)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"destination field mask and template are not equal");
|
|
|
|
if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
|
|
|
|
action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
|
|
|
|
if (mask_conf->src.level != UINT32_MAX)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"source encapsulation level must be fully masked");
|
|
|
|
if (mask_conf->src.offset != UINT32_MAX)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"source offset level must be fully masked");
|
|
|
|
}
|
|
|
|
if (mask_conf->width != UINT32_MAX)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"modify_field width field must be fully masked");
|
|
|
|
if (flow_hw_modify_field_is_used(action_conf, RTE_FLOW_FIELD_START))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"modifying arbitrary place in a packet is not supported");
|
|
|
|
if (flow_hw_modify_field_is_used(action_conf, RTE_FLOW_FIELD_VLAN_TYPE))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"modifying vlan_type is not supported");
|
|
|
|
if (flow_hw_modify_field_is_used(action_conf, RTE_FLOW_FIELD_GENEVE_VNI))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, action,
|
|
|
|
"modifying Geneve VNI is not supported");
|
|
|
|
return 0;
|
|
|
|
}
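As an illustration of what these checks accept, the action/mask pair below (values chosen only as an example) keeps the operation and both fields identical in template and mask, and fully masks level, offset and width, so the copy becomes a fixed part of the template:

#include <stdint.h>
#include <rte_flow.h>

/* Copy 32 bits of packet metadata into TAG register 0; the mask marks
 * every selector as constant, as required by the validation above. */
static const struct rte_flow_action_modify_field copy_meta_conf = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = { .field = RTE_FLOW_FIELD_TAG, .level = 0, .offset = 0 },
	.src = { .field = RTE_FLOW_FIELD_META, .level = 0, .offset = 0 },
	.width = 32,
};
static const struct rte_flow_action_modify_field copy_meta_mask = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = { .field = RTE_FLOW_FIELD_TAG,
		 .level = UINT32_MAX, .offset = UINT32_MAX },
	.src = { .field = RTE_FLOW_FIELD_META,
		 .level = UINT32_MAX, .offset = UINT32_MAX },
	.width = UINT32_MAX,
};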
|
|
|
|
|
|
|
|
static int
|
2022-10-20 15:41:39 +00:00
|
|
|
flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_action *action,
|
|
|
|
const struct rte_flow_action *mask,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
const struct rte_flow_action_ethdev *action_conf = action->conf;
|
|
|
|
const struct rte_flow_action_ethdev *mask_conf = mask->conf;
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
|
|
|
|
if (!priv->sh->config.dv_esw_en)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"cannot use represented_port actions"
|
|
|
|
" without an E-Switch");
|
2022-10-20 15:41:40 +00:00
|
|
|
if (mask_conf && mask_conf->port_id) {
|
2022-10-20 15:41:39 +00:00
|
|
|
struct mlx5_priv *port_priv;
|
|
|
|
struct mlx5_priv *dev_priv;
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
if (!action_conf)
|
|
|
|
return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
action, "port index was not provided");
|
2022-10-20 15:41:39 +00:00
|
|
|
port_priv = mlx5_port_to_eswitch_info(action_conf->port_id, false);
|
|
|
|
if (!port_priv)
|
|
|
|
return rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
action,
|
|
|
|
"failed to obtain E-Switch"
|
|
|
|
" info for port");
|
|
|
|
dev_priv = mlx5_dev_to_eswitch_info(dev);
|
|
|
|
if (!dev_priv)
|
|
|
|
return rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
action,
|
|
|
|
"failed to obtain E-Switch"
|
|
|
|
" info for transfer proxy");
|
|
|
|
if (port_priv->domain_id != dev_priv->domain_id)
|
|
|
|
return rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
action,
|
|
|
|
"cannot forward to port from"
|
|
|
|
" a different E-Switch");
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
static inline int
|
|
|
|
flow_hw_action_meta_copy_insert(const struct rte_flow_action actions[],
|
|
|
|
const struct rte_flow_action masks[],
|
|
|
|
const struct rte_flow_action *ins_actions,
|
|
|
|
const struct rte_flow_action *ins_masks,
|
|
|
|
struct rte_flow_action *new_actions,
|
|
|
|
struct rte_flow_action *new_masks,
|
|
|
|
uint16_t *ins_pos)
|
|
|
|
{
|
|
|
|
uint16_t idx, total = 0;
|
|
|
|
bool ins = false;
|
|
|
|
bool act_end = false;
|
|
|
|
|
|
|
|
MLX5_ASSERT(actions && masks);
|
|
|
|
MLX5_ASSERT(new_actions && new_masks);
|
|
|
|
MLX5_ASSERT(ins_actions && ins_masks);
|
|
|
|
for (idx = 0; !act_end; idx++) {
|
|
|
|
if (idx >= MLX5_HW_MAX_ACTS)
|
|
|
|
return -1;
|
|
|
|
if (actions[idx].type == RTE_FLOW_ACTION_TYPE_RSS ||
|
|
|
|
actions[idx].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
|
|
|
|
ins = true;
|
|
|
|
*ins_pos = idx;
|
|
|
|
}
|
|
|
|
if (actions[idx].type == RTE_FLOW_ACTION_TYPE_END)
|
|
|
|
act_end = true;
|
|
|
|
}
|
|
|
|
if (!ins)
|
|
|
|
return 0;
|
|
|
|
else if (idx == MLX5_HW_MAX_ACTS)
|
|
|
|
return -1; /* No more space. */
|
|
|
|
total = idx;
|
|
|
|
/* Before the position, no change for the actions. */
|
|
|
|
for (idx = 0; idx < *ins_pos; idx++) {
|
|
|
|
new_actions[idx] = actions[idx];
|
|
|
|
new_masks[idx] = masks[idx];
|
|
|
|
}
|
|
|
|
/* Insert the new action and mask to the position. */
|
|
|
|
new_actions[idx] = *ins_actions;
|
|
|
|
new_masks[idx] = *ins_masks;
|
|
|
|
/* Remaining content is right shifted by one position. */
|
|
|
|
for (; idx < total; idx++) {
|
|
|
|
new_actions[idx + 1] = actions[idx];
|
|
|
|
new_masks[idx + 1] = masks[idx];
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
static int
|
2022-10-20 15:41:41 +00:00
|
|
|
flow_hw_actions_validate(struct rte_eth_dev *dev,
|
2022-10-20 15:41:40 +00:00
|
|
|
const struct rte_flow_actions_template_attr *attr,
|
2022-10-20 15:41:39 +00:00
|
|
|
const struct rte_flow_action actions[],
|
2022-10-20 15:41:38 +00:00
|
|
|
const struct rte_flow_action masks[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
2022-10-20 15:41:40 +00:00
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
uint16_t i;
|
2022-10-20 15:41:38 +00:00
|
|
|
bool actions_end = false;
|
|
|
|
int ret;
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
/* FDB actions are only valid on the proxy port. */
|
|
|
|
if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"transfer actions are only valid to proxy port");
|
2022-10-20 15:41:38 +00:00
|
|
|
for (i = 0; !actions_end; ++i) {
|
|
|
|
const struct rte_flow_action *action = &actions[i];
|
|
|
|
const struct rte_flow_action *mask = &masks[i];
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
MLX5_ASSERT(i < MLX5_HW_MAX_ACTS);
|
2022-10-20 15:41:43 +00:00
|
|
|
if (action->type != RTE_FLOW_ACTION_TYPE_INDIRECT &&
|
|
|
|
action->type != mask->type)
|
2022-10-20 15:41:38 +00:00
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
action,
|
|
|
|
"mask type does not match action type");
|
|
|
|
switch (action->type) {
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VOID:
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_INDIRECT:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_MARK:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_DROP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_JUMP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_QUEUE:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RSS:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
2022-10-20 15:41:41 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_METER:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
2022-10-20 15:41:38 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
|
|
|
|
ret = flow_hw_validate_action_modify_field(action,
|
|
|
|
mask,
|
|
|
|
error);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
break;
|
2022-10-20 15:41:39 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
|
|
|
|
ret = flow_hw_validate_action_represented_port
|
|
|
|
(dev, action, mask, error);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
break;
|
net/mlx5: support flow counter action for HWS
2022-10-20 15:41:42 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_COUNT:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
2022-10-20 15:41:44 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_CONNTRACK:
|
|
|
|
/* TODO: Validation logic */
|
|
|
|
break;
|
2022-10-20 15:41:38 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_END:
|
|
|
|
actions_end = true;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
action,
|
|
|
|
"action not supported in template API");
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
|
|
|
|
[RTE_FLOW_ACTION_TYPE_MARK] = MLX5DR_ACTION_TYP_TAG,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_DROP] = MLX5DR_ACTION_TYP_DROP,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_JUMP] = MLX5DR_ACTION_TYP_FT,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_QUEUE] = MLX5DR_ACTION_TYP_TIR,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_RSS] = MLX5DR_ACTION_TYP_TIR,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = MLX5DR_ACTION_TYP_L2_TO_TNL_L2,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = MLX5DR_ACTION_TYP_L2_TO_TNL_L2,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = MLX5DR_ACTION_TYP_TNL_L2_TO_L2,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_NVGRE_DECAP] = MLX5DR_ACTION_TYP_TNL_L2_TO_L2,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_MODIFY_FIELD] = MLX5DR_ACTION_TYP_MODIFY_HDR,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = MLX5DR_ACTION_TYP_VPORT,
|
|
|
|
[RTE_FLOW_ACTION_TYPE_COUNT] = MLX5DR_ACTION_TYP_CTR,
|
2022-10-20 15:41:44 +00:00
|
|
|
[RTE_FLOW_ACTION_TYPE_CONNTRACK] = MLX5DR_ACTION_TYP_ASO_CT,
|
2022-10-20 15:41:43 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
|
|
|
|
unsigned int action_src,
|
|
|
|
enum mlx5dr_action_type *action_types,
|
|
|
|
uint16_t *curr_off,
|
|
|
|
struct rte_flow_actions_template *at)
|
|
|
|
{
|
|
|
|
uint32_t type;
|
|
|
|
|
|
|
|
if (!mask) {
|
|
|
|
DRV_LOG(WARNING, "Unable to determine indirect action type "
|
|
|
|
"without a mask specified");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
type = mask->type;
|
|
|
|
switch (type) {
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RSS:
|
|
|
|
at->actions_off[action_src] = *curr_off;
|
|
|
|
action_types[*curr_off] = MLX5DR_ACTION_TYP_TIR;
|
|
|
|
*curr_off = *curr_off + 1;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_COUNT:
|
|
|
|
at->actions_off[action_src] = *curr_off;
|
|
|
|
action_types[*curr_off] = MLX5DR_ACTION_TYP_CTR;
|
|
|
|
*curr_off = *curr_off + 1;
|
|
|
|
break;
|
2022-10-20 15:41:44 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_CONNTRACK:
|
|
|
|
at->actions_off[action_src] = *curr_off;
|
|
|
|
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
|
|
|
|
*curr_off = *curr_off + 1;
|
|
|
|
break;
|
2022-10-20 15:41:43 +00:00
|
|
|
default:
|
|
|
|
DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Create DR action template based on a provided sequence of flow actions.
|
|
|
|
*
|
|
|
|
* @param[in] at
|
|
|
|
* Pointer to flow actions template to be updated.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* DR action template pointer on success and action offsets in @p at are updated.
|
|
|
|
* NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct mlx5dr_action_template *
|
|
|
|
flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
|
|
|
|
{
|
|
|
|
struct mlx5dr_action_template *dr_template;
|
|
|
|
enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST };
|
|
|
|
unsigned int i;
|
|
|
|
uint16_t curr_off;
|
|
|
|
enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2;
|
|
|
|
uint16_t reformat_off = UINT16_MAX;
|
|
|
|
uint16_t mhdr_off = UINT16_MAX;
|
|
|
|
int ret;
|
|
|
|
for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
|
|
|
|
const struct rte_flow_action_raw_encap *raw_encap_data;
|
|
|
|
size_t data_size;
|
|
|
|
enum mlx5dr_action_type type;
|
|
|
|
|
|
|
|
if (curr_off >= MLX5_HW_MAX_ACTS)
|
|
|
|
goto err_actions_num;
|
|
|
|
switch (at->actions[i].type) {
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VOID:
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_INDIRECT:
|
|
|
|
ret = flow_hw_dr_actions_template_handle_shared(&at->masks[i], i,
|
|
|
|
action_types,
|
|
|
|
&curr_off, at);
|
|
|
|
if (ret)
|
|
|
|
return NULL;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
|
|
|
|
case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
|
|
|
|
MLX5_ASSERT(reformat_off == UINT16_MAX);
|
|
|
|
reformat_off = curr_off++;
|
|
|
|
reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type];
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
|
|
|
|
raw_encap_data = at->actions[i].conf;
|
|
|
|
data_size = raw_encap_data->size;
|
|
|
|
if (reformat_off != UINT16_MAX) {
|
|
|
|
reformat_act_type = data_size < MLX5_ENCAPSULATION_DECISION_SIZE ?
|
|
|
|
MLX5DR_ACTION_TYP_TNL_L3_TO_L2 :
|
|
|
|
MLX5DR_ACTION_TYP_L2_TO_TNL_L3;
|
|
|
|
} else {
|
|
|
|
reformat_off = curr_off++;
|
|
|
|
reformat_act_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
|
|
|
|
reformat_off = curr_off++;
|
|
|
|
reformat_act_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2;
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
|
|
|
|
if (mhdr_off == UINT16_MAX) {
|
|
|
|
mhdr_off = curr_off++;
|
|
|
|
type = mlx5_hw_dr_action_types[at->actions[i].type];
|
|
|
|
action_types[mhdr_off] = type;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_METER:
|
|
|
|
at->actions_off[i] = curr_off;
|
|
|
|
action_types[curr_off++] = MLX5DR_ACTION_TYP_ASO_METER;
|
|
|
|
if (curr_off >= MLX5_HW_MAX_ACTS)
|
|
|
|
goto err_actions_num;
|
|
|
|
action_types[curr_off++] = MLX5DR_ACTION_TYP_FT;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
type = mlx5_hw_dr_action_types[at->actions[i].type];
|
|
|
|
at->actions_off[i] = curr_off;
|
|
|
|
action_types[curr_off++] = type;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (curr_off >= MLX5_HW_MAX_ACTS)
|
|
|
|
goto err_actions_num;
|
|
|
|
if (mhdr_off != UINT16_MAX)
|
|
|
|
at->mhdr_off = mhdr_off;
|
|
|
|
if (reformat_off != UINT16_MAX) {
|
|
|
|
at->reformat_off = reformat_off;
|
|
|
|
action_types[reformat_off] = reformat_act_type;
|
|
|
|
}
|
|
|
|
dr_template = mlx5dr_action_template_create(action_types);
|
|
|
|
if (dr_template)
|
|
|
|
at->dr_actions_num = curr_off;
|
|
|
|
else
|
|
|
|
DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno);
|
|
|
|
return dr_template;
|
|
|
|
err_actions_num:
|
|
|
|
DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template",
|
|
|
|
curr_off, MLX5_HW_MAX_ACTS);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:43 +00:00
|
|
|
/**
|
|
|
|
* Create flow action template.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] attr
|
|
|
|
* Pointer to the action template attributes.
|
|
|
|
* @param[in] actions
|
|
|
|
* Associated actions (list terminated by the END action).
|
|
|
|
* @param[in] masks
|
|
|
|
* List of actions that marks which of the action's member is constant.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Action template pointer on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_actions_template *
|
|
|
|
flow_hw_actions_template_create(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_actions_template_attr *attr,
|
|
|
|
const struct rte_flow_action actions[],
|
|
|
|
const struct rte_flow_action masks[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-10-20 15:41:43 +00:00
|
|
|
int len, act_num, act_len, mask_len;
|
|
|
|
unsigned int i;
|
2022-10-20 15:41:40 +00:00
|
|
|
struct rte_flow_actions_template *at = NULL;
|
|
|
|
uint16_t pos = MLX5_HW_MAX_ACTS;
|
|
|
|
struct rte_flow_action tmp_action[MLX5_HW_MAX_ACTS];
|
|
|
|
struct rte_flow_action tmp_mask[MLX5_HW_MAX_ACTS];
|
|
|
|
const struct rte_flow_action *ra;
|
|
|
|
const struct rte_flow_action *rm;
|
|
|
|
const struct rte_flow_action_modify_field rx_mreg = {
|
|
|
|
.operation = RTE_FLOW_MODIFY_SET,
|
|
|
|
.dst = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = REG_B,
|
|
|
|
},
|
|
|
|
.src = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = REG_C_1,
|
|
|
|
},
|
|
|
|
.width = 32,
|
|
|
|
};
|
|
|
|
const struct rte_flow_action_modify_field rx_mreg_mask = {
|
|
|
|
.operation = RTE_FLOW_MODIFY_SET,
|
|
|
|
.dst = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = UINT32_MAX,
|
|
|
|
.offset = UINT32_MAX,
|
|
|
|
},
|
|
|
|
.src = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = UINT32_MAX,
|
|
|
|
.offset = UINT32_MAX,
|
|
|
|
},
|
|
|
|
.width = UINT32_MAX,
|
|
|
|
};
|
|
|
|
const struct rte_flow_action rx_cpy = {
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
|
|
|
|
.conf = &rx_mreg,
|
|
|
|
};
|
|
|
|
const struct rte_flow_action rx_cpy_mask = {
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
|
|
|
|
.conf = &rx_mreg_mask,
|
|
|
|
};
|
2022-02-24 13:40:43 +00:00
|
|
|
|
2022-10-20 15:41:41 +00:00
|
|
|
if (flow_hw_actions_validate(dev, attr, actions, masks, error))
|
2022-10-20 15:41:38 +00:00
|
|
|
return NULL;
|
2022-10-20 15:41:40 +00:00
|
|
|
if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
|
|
|
|
priv->sh->config.dv_esw_en) {
|
|
|
|
if (flow_hw_action_meta_copy_insert(actions, masks, &rx_cpy, &rx_cpy_mask,
|
|
|
|
tmp_action, tmp_mask, &pos)) {
|
|
|
|
rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
|
|
|
|
"Failed to concatenate new action/mask");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/* Application should make sure only one Q/RSS exists in one rule. */
|
|
|
|
if (pos == MLX5_HW_MAX_ACTS) {
|
|
|
|
ra = actions;
|
|
|
|
rm = masks;
|
|
|
|
} else {
|
|
|
|
ra = tmp_action;
|
|
|
|
rm = tmp_mask;
|
|
|
|
}
|
|
|
|
act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
|
2022-02-24 13:40:43 +00:00
|
|
|
if (act_len <= 0)
|
|
|
|
return NULL;
|
|
|
|
len = RTE_ALIGN(act_len, 16);
|
2022-10-20 15:41:40 +00:00
|
|
|
mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, rm, error);
|
2022-02-24 13:40:43 +00:00
|
|
|
if (mask_len <= 0)
|
|
|
|
return NULL;
|
|
|
|
len += RTE_ALIGN(mask_len, 16);
|
2022-10-20 15:41:43 +00:00
|
|
|
/* Count flow actions to allocate required space for storing DR offsets. */
|
|
|
|
act_num = 0;
|
|
|
|
for (i = 0; ra[i].type != RTE_FLOW_ACTION_TYPE_END; ++i)
|
|
|
|
act_num++;
|
|
|
|
len += RTE_ALIGN(act_num * sizeof(*at->actions_off), 16);
|
2022-10-20 15:41:40 +00:00
|
|
|
at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at),
|
|
|
|
RTE_CACHE_LINE_SIZE, rte_socket_id());
|
2022-02-24 13:40:43 +00:00
|
|
|
if (!at) {
|
|
|
|
rte_flow_error_set(error, ENOMEM,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot allocate action template");
|
|
|
|
return NULL;
|
|
|
|
}
|
2022-10-20 15:41:43 +00:00
|
|
|
/* Actions part is in the first part. */
|
2022-02-24 13:40:43 +00:00
|
|
|
at->attr = *attr;
|
|
|
|
at->actions = (struct rte_flow_action *)(at + 1);
|
2022-10-20 15:41:40 +00:00
|
|
|
act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->actions,
|
|
|
|
len, ra, error);
|
2022-02-24 13:40:43 +00:00
|
|
|
if (act_len <= 0)
|
|
|
|
goto error;
|
2022-10-20 15:41:43 +00:00
|
|
|
/* Masks part is in the second part. */
|
2022-10-20 15:41:40 +00:00
|
|
|
at->masks = (struct rte_flow_action *)(((uint8_t *)at->actions) + act_len);
|
2022-02-24 13:40:43 +00:00
|
|
|
mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->masks,
|
2022-10-20 15:41:40 +00:00
|
|
|
len - act_len, rm, error);
|
2022-02-24 13:40:43 +00:00
|
|
|
if (mask_len <= 0)
|
|
|
|
goto error;
|
2022-10-20 15:41:43 +00:00
|
|
|
/* DR actions offsets in the third part. */
|
|
|
|
at->actions_off = (uint16_t *)((uint8_t *)at->masks + mask_len);
|
|
|
|
at->actions_num = act_num;
|
|
|
|
for (i = 0; i < at->actions_num; ++i)
|
|
|
|
at->actions_off[i] = UINT16_MAX;
|
|
|
|
at->reformat_off = UINT16_MAX;
|
|
|
|
at->mhdr_off = UINT16_MAX;
|
2022-10-20 15:41:40 +00:00
|
|
|
at->rx_cpy_pos = pos;
|
2022-02-24 13:40:43 +00:00
|
|
|
/*
|
|
|
|
* mlx5 PMD stores the indirect action index directly in the action conf.
|
|
|
|
* The rte_flow_conv() function copies the content from conf pointer.
|
|
|
|
* Need to restore the indirect action index from action conf here.
|
|
|
|
*/
|
|
|
|
for (i = 0; actions->type != RTE_FLOW_ACTION_TYPE_END;
|
|
|
|
actions++, masks++, i++) {
|
|
|
|
if (actions->type == RTE_FLOW_ACTION_TYPE_INDIRECT) {
|
|
|
|
at->actions[i].conf = actions->conf;
|
|
|
|
at->masks[i].conf = masks->conf;
|
|
|
|
}
|
|
|
|
}
|
2022-10-20 15:41:43 +00:00
|
|
|
at->tmpl = flow_hw_dr_actions_template_create(at);
|
|
|
|
if (!at->tmpl)
|
|
|
|
goto error;
|
2022-02-24 13:40:43 +00:00
|
|
|
__atomic_fetch_add(&at->refcnt, 1, __ATOMIC_RELAXED);
|
|
|
|
LIST_INSERT_HEAD(&priv->flow_hw_at, at, next);
|
|
|
|
return at;
|
|
|
|
error:
|
2022-10-20 15:41:43 +00:00
|
|
|
if (at) {
|
|
|
|
if (at->tmpl)
|
|
|
|
mlx5dr_action_template_destroy(at->tmpl);
|
2022-10-20 15:41:40 +00:00
|
|
|
mlx5_free(at);
|
2022-10-20 15:41:43 +00:00
|
|
|
}
|
2022-02-24 13:40:43 +00:00
|
|
|
return NULL;
|
|
|
|
}
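To illustrate how the masks mark which action members are constant, the arrays below (example values only) fix the MARK id at template creation time, while the zeroed QUEUE mask leaves the queue index to be supplied per flow rule:

#include <stdint.h>
#include <rte_flow.h>

static const struct rte_flow_action_mark mark_v = { .id = 0x1234 };
static const struct rte_flow_action_mark mark_m = { .id = UINT32_MAX };
static const struct rte_flow_action_queue queue_any = { .index = 0 };

/* Actions define the template layout; a fully set mask makes the
 * corresponding member constant, a zeroed mask keeps it per-flow. */
static const struct rte_flow_action tmpl_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_v },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_any },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
static const struct rte_flow_action tmpl_masks[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_m },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_any },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};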
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Destroy flow action template.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] template
|
|
|
|
* Pointer to the action template to be destroyed.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, a negative errno value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_actions_template_destroy(struct rte_eth_dev *dev __rte_unused,
|
|
|
|
struct rte_flow_actions_template *template,
|
|
|
|
struct rte_flow_error *error __rte_unused)
|
|
|
|
{
|
|
|
|
if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
|
|
|
|
DRV_LOG(WARNING, "Action template %p is still in use.",
|
|
|
|
(void *)template);
|
|
|
|
return rte_flow_error_set(error, EBUSY,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"action template in using");
|
|
|
|
}
|
|
|
|
LIST_REMOVE(template, next);
|
2022-10-20 15:41:43 +00:00
|
|
|
if (template->tmpl)
|
|
|
|
mlx5dr_action_template_destroy(template->tmpl);
|
2022-02-24 13:40:43 +00:00
|
|
|
mlx5_free(template);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
static struct rte_flow_item *
|
|
|
|
flow_hw_copy_prepend_port_item(const struct rte_flow_item *items,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
const struct rte_flow_item *curr_item;
|
|
|
|
struct rte_flow_item *copied_items;
|
|
|
|
bool found_end;
|
|
|
|
unsigned int nb_items;
|
|
|
|
unsigned int i;
|
|
|
|
size_t size;
|
|
|
|
|
|
|
|
/* Count number of pattern items. */
|
|
|
|
nb_items = 0;
|
|
|
|
found_end = false;
|
|
|
|
for (curr_item = items; !found_end; ++curr_item) {
|
|
|
|
++nb_items;
|
|
|
|
if (curr_item->type == RTE_FLOW_ITEM_TYPE_END)
|
|
|
|
found_end = true;
|
|
|
|
}
|
|
|
|
/* Allocate new array of items and prepend REPRESENTED_PORT item. */
|
|
|
|
size = sizeof(*copied_items) * (nb_items + 1);
|
|
|
|
copied_items = mlx5_malloc(MLX5_MEM_ZERO, size, 0, rte_socket_id());
|
|
|
|
if (!copied_items) {
|
|
|
|
rte_flow_error_set(error, ENOMEM,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot allocate item template");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
copied_items[0] = (struct rte_flow_item){
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
|
|
|
|
.spec = NULL,
|
|
|
|
.last = NULL,
|
|
|
|
.mask = &rte_flow_item_ethdev_mask,
|
|
|
|
};
|
|
|
|
for (i = 1; i < nb_items + 1; ++i)
|
|
|
|
copied_items[i] = items[i - 1];
|
|
|
|
return copied_items;
|
|
|
|
}
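/*
 * Illustrative note (not part of the driver): the helper above turns an
 * application pattern such as
 *     [ ETH, IPV4, END ]
 * into
 *     [ REPRESENTED_PORT (mask = rte_flow_item_ethdev_mask), ETH, IPV4, END ]
 * so that ingress rules created on E-Switch ports implicitly match on the
 * source port. The spec of the prepended item is left NULL here; the actual
 * port is filled in per rule when the flow is created.
 */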
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
static int
|
|
|
|
flow_hw_pattern_validate(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_pattern_template_attr *attr,
|
|
|
|
const struct rte_flow_item items[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
2022-10-20 15:41:43 +00:00
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-10-20 15:41:40 +00:00
|
|
|
int i;
|
|
|
|
bool items_end = false;
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
if (!attr->ingress && !attr->egress && !attr->transfer)
|
|
|
|
return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL,
|
|
|
|
"at least one of the direction attributes"
|
|
|
|
" must be specified");
|
|
|
|
if (priv->sh->config.dv_esw_en) {
|
|
|
|
MLX5_ASSERT(priv->master || priv->representor);
|
|
|
|
if (priv->master) {
|
|
|
|
/*
|
|
|
|
* It is allowed to specify ingress, egress and transfer attributes
|
|
|
|
* at the same time, in order to construct flows catching all missed
|
|
|
|
* FDB traffic and forwarding it to the master port.
|
|
|
|
*/
|
|
|
|
if (!(attr->ingress ^ attr->egress ^ attr->transfer))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ATTR, NULL,
|
|
|
|
"only one or all direction attributes"
|
|
|
|
" at once can be used on transfer proxy"
|
|
|
|
" port");
|
|
|
|
} else {
|
|
|
|
if (attr->transfer)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, NULL,
|
|
|
|
"transfer attribute cannot be used with"
|
|
|
|
" port representors");
|
|
|
|
if (attr->ingress && attr->egress)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ATTR, NULL,
|
|
|
|
"ingress and egress direction attributes"
|
|
|
|
" cannot be used at the same time on"
|
|
|
|
" port representors");
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (attr->transfer)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, NULL,
|
|
|
|
"transfer attribute cannot be used when"
|
|
|
|
" E-Switch is disabled");
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
for (i = 0; !items_end; i++) {
|
|
|
|
int type = items[i].type;
|
|
|
|
|
|
|
|
switch (type) {
|
|
|
|
case RTE_FLOW_ITEM_TYPE_TAG:
|
|
|
|
{
|
|
|
|
int reg;
|
|
|
|
const struct rte_flow_item_tag *tag =
|
|
|
|
(const struct rte_flow_item_tag *)items[i].spec;
|
|
|
|
|
|
|
|
reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, tag->index);
|
|
|
|
if (reg == REG_NON)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Unsupported tag index");
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
|
|
|
|
{
|
|
|
|
const struct rte_flow_item_tag *tag =
|
|
|
|
(const struct rte_flow_item_tag *)items[i].spec;
|
|
|
|
uint8_t regcs = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c;
|
|
|
|
|
|
|
|
if (!((1 << (tag->index - REG_C_0)) & regcs))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Unsupported internal tag index");
|
2022-10-20 15:41:43 +00:00
|
|
|
break;
|
2022-10-20 15:41:40 +00:00
|
|
|
}
|
2022-10-20 15:41:43 +00:00
|
|
|
case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
|
|
|
|
if (attr->ingress || attr->egress)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ITEM, NULL,
|
|
|
|
"represented port item cannot be used"
|
|
|
|
" when transfer attribute is set");
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ITEM_TYPE_META:
|
|
|
|
if (!priv->sh->config.dv_esw_en ||
|
|
|
|
priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_META32_HWS) {
|
|
|
|
if (attr->ingress)
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ITEM, NULL,
|
|
|
|
"META item is not supported"
|
|
|
|
" on current FW with ingress"
|
|
|
|
" attribute");
|
|
|
|
}
|
|
|
|
break;
|
2022-10-20 15:41:40 +00:00
|
|
|
case RTE_FLOW_ITEM_TYPE_VOID:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_ETH:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_VLAN:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_IPV4:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_IPV6:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_UDP:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_TCP:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_GTP:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_GTP_PSC:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_VXLAN:
|
|
|
|
case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_GRE:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_GRE_KEY:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_GRE_OPTION:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_ICMP:
|
|
|
|
case RTE_FLOW_ITEM_TYPE_ICMP6:
|
2022-10-20 15:41:44 +00:00
|
|
|
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
|
2022-10-20 15:41:40 +00:00
|
|
|
break;
|
|
|
|
case RTE_FLOW_ITEM_TYPE_END:
|
|
|
|
items_end = true;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Unsupported item type");
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
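/*
 * Illustrative examples (not functional code) of how the attribute checks
 * above apply when E-Switch is enabled:
 * - on the transfer proxy (master) port: {transfer=1}, {ingress=1} or
 *   {ingress=1, egress=1, transfer=1} are accepted, while {ingress=1,
 *   egress=1} is rejected (neither exactly one nor all three attributes);
 * - on a representor port: any template with transfer=1, or with both
 *   ingress=1 and egress=1, is rejected;
 * - with E-Switch disabled: transfer=1 is always rejected.
 */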
|
|
|
|
|
2022-02-24 13:40:41 +00:00
|
|
|
/**
|
2022-02-24 13:40:42 +00:00
|
|
|
* Create flow item template.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] attr
|
|
|
|
* Pointer to the item template attributes.
|
|
|
|
* @param[in] items
|
|
|
|
* The template item pattern.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Item template pointer on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_pattern_template *
|
|
|
|
flow_hw_pattern_template_create(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_pattern_template_attr *attr,
|
|
|
|
const struct rte_flow_item items[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct rte_flow_pattern_template *it;
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_item *copied_items = NULL;
|
|
|
|
const struct rte_flow_item *tmpl_items;
|
2022-02-24 13:40:42 +00:00
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
if (flow_hw_pattern_validate(dev, attr, items, error))
|
|
|
|
return NULL;
|
2022-10-20 15:41:43 +00:00
|
|
|
if (priv->sh->config.dv_esw_en && attr->ingress && !attr->egress && !attr->transfer) {
|
2022-10-20 15:41:39 +00:00
|
|
|
copied_items = flow_hw_copy_prepend_port_item(items, error);
|
|
|
|
if (!copied_items)
|
|
|
|
return NULL;
|
|
|
|
tmpl_items = copied_items;
|
|
|
|
} else {
|
|
|
|
tmpl_items = items;
|
|
|
|
}
|
2022-02-24 13:40:42 +00:00
|
|
|
it = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*it), 0, rte_socket_id());
|
|
|
|
if (!it) {
|
2022-10-20 15:41:39 +00:00
|
|
|
if (copied_items)
|
|
|
|
mlx5_free(copied_items);
|
2022-02-24 13:40:42 +00:00
|
|
|
rte_flow_error_set(error, ENOMEM,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot allocate item template");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
it->attr = *attr;
|
2022-10-20 15:41:39 +00:00
|
|
|
it->mt = mlx5dr_match_template_create(tmpl_items, attr->relaxed_matching);
|
2022-02-24 13:40:42 +00:00
|
|
|
if (!it->mt) {
|
2022-10-20 15:41:39 +00:00
|
|
|
if (copied_items)
|
|
|
|
mlx5_free(copied_items);
|
2022-02-24 13:40:42 +00:00
|
|
|
mlx5_free(it);
|
|
|
|
rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot create match template");
|
|
|
|
return NULL;
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
|
|
|
|
it->implicit_port = !!copied_items;
|
2022-02-24 13:40:42 +00:00
|
|
|
__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
|
|
|
|
LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
|
2022-10-20 15:41:39 +00:00
|
|
|
if (copied_items)
|
|
|
|
mlx5_free(copied_items);
|
2022-02-24 13:40:42 +00:00
|
|
|
return it;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Destroy flow item template.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] template
|
|
|
|
* Pointer to the item template to be destroyed.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, a negative errno value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
|
|
|
|
struct rte_flow_pattern_template *template,
|
|
|
|
struct rte_flow_error *error __rte_unused)
|
|
|
|
{
|
|
|
|
if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
|
|
|
|
DRV_LOG(WARNING, "Item template %p is still in use.",
|
|
|
|
(void *)template);
|
|
|
|
return rte_flow_error_set(error, EBUSY,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"item template in using");
|
|
|
|
}
|
|
|
|
LIST_REMOVE(template, next);
|
|
|
|
claim_zero(mlx5dr_match_template_destroy(template->mt));
|
|
|
|
mlx5_free(template);
|
|
|
|
return 0;
|
|
|
|
}
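/*
 * Illustrative sketch (hypothetical helper, not part of the driver): pattern
 * template creation and destruction as seen from an application through the
 * generic rte_flow API, which lands in the handlers implemented above. Item
 * masks (defaulted here) select which header fields are matched, while the
 * values are supplied per rule at enqueue time. Assumes rte_flow_configure()
 * was already called on port_id.
 */
static __rte_unused int
flow_hw_pattern_template_usage_sketch(uint16_t port_id)
{
	const struct rte_flow_pattern_template_attr attr = {
		.relaxed_matching = 0,
		.ingress = 1,
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow_pattern_template *pt;

	pt = rte_flow_pattern_template_create(port_id, &attr, pattern, &error);
	if (!pt)
		return -rte_errno;
	return rte_flow_pattern_template_destroy(port_id, pt, &error);
}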
|
|
|
|
|
|
|
|
/*
|
2022-02-24 13:40:41 +00:00
|
|
|
* Get information about HWS pre-configurable resources.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[out] port_info
|
|
|
|
* Pointer to port information.
|
|
|
|
* @param[out] queue_info
|
|
|
|
* Pointer to queue information.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, a negative errno value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
2022-10-20 15:41:41 +00:00
|
|
|
flow_hw_info_get(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_port_info *port_info,
|
|
|
|
struct rte_flow_queue_info *queue_info,
|
2022-02-24 13:40:41 +00:00
|
|
|
struct rte_flow_error *error __rte_unused)
|
|
|
|
{
|
2022-10-20 15:41:41 +00:00
|
|
|
uint16_t port_id = dev->data->port_id;
|
|
|
|
struct rte_mtr_capabilities mtr_cap;
|
|
|
|
int ret;
|
|
|
|
|
2022-02-24 13:40:41 +00:00
|
|
|
memset(port_info, 0, sizeof(*port_info));
|
|
|
|
/* Queue size is unlimited from low-level. */
|
2022-10-20 15:41:41 +00:00
|
|
|
port_info->max_nb_queues = UINT32_MAX;
|
2022-02-24 13:40:41 +00:00
|
|
|
queue_info->max_size = UINT32_MAX;
|
2022-10-20 15:41:41 +00:00
|
|
|
|
|
|
|
memset(&mtr_cap, 0, sizeof(struct rte_mtr_capabilities));
|
|
|
|
ret = rte_mtr_capabilities_get(port_id, &mtr_cap, NULL);
|
|
|
|
if (!ret)
|
|
|
|
port_info->max_nb_meters = mtr_cap.n_max;
|
2022-02-24 13:40:41 +00:00
|
|
|
return 0;
|
|
|
|
}
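/*
 * Illustrative sketch (hypothetical helper, not part of the driver): querying
 * the values reported by the handler above through the generic API before
 * calling rte_flow_configure().
 */
static __rte_unused void
flow_hw_info_get_usage_sketch(uint16_t port_id)
{
	struct rte_flow_port_info port_info;
	struct rte_flow_queue_info queue_info;
	struct rte_flow_error error;

	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
		return;
	DRV_LOG(DEBUG, "port %u supports up to %u queues of up to %u entries"
		" and %u meters", port_id, port_info.max_nb_queues,
		queue_info.max_size, port_info.max_nb_meters);
}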
|
|
|
|
|
2022-02-24 13:40:44 +00:00
|
|
|
/**
|
|
|
|
* Create group callback.
|
|
|
|
*
|
|
|
|
* @param[in] tool_ctx
|
|
|
|
* Pointer to the hash list related context.
|
|
|
|
* @param[in] cb_ctx
|
|
|
|
* Pointer to the group creation context.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Group entry on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
struct mlx5_list_entry *
|
|
|
|
flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx)
|
|
|
|
{
|
|
|
|
struct mlx5_dev_ctx_shared *sh = tool_ctx;
|
|
|
|
struct mlx5_flow_cb_ctx *ctx = cb_ctx;
|
|
|
|
struct rte_eth_dev *dev = ctx->dev;
|
|
|
|
struct rte_flow_attr *attr = (struct rte_flow_attr *)ctx->data;
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5dr_table_attr dr_tbl_attr = {0};
|
|
|
|
struct rte_flow_error *error = ctx->error;
|
|
|
|
struct mlx5_flow_group *grp_data;
|
|
|
|
struct mlx5dr_table *tbl = NULL;
|
|
|
|
struct mlx5dr_action *jump;
|
|
|
|
uint32_t idx = 0;
|
|
|
|
|
|
|
|
grp_data = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_HW_GRP], &idx);
|
|
|
|
if (!grp_data) {
|
|
|
|
rte_flow_error_set(error, ENOMEM,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot allocate flow table data entry");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
dr_tbl_attr.level = attr->group;
|
|
|
|
if (attr->transfer)
|
|
|
|
dr_tbl_attr.type = MLX5DR_TABLE_TYPE_FDB;
|
|
|
|
else if (attr->egress)
|
|
|
|
dr_tbl_attr.type = MLX5DR_TABLE_TYPE_NIC_TX;
|
|
|
|
else
|
|
|
|
dr_tbl_attr.type = MLX5DR_TABLE_TYPE_NIC_RX;
|
|
|
|
tbl = mlx5dr_table_create(priv->dr_ctx, &dr_tbl_attr);
|
|
|
|
if (!tbl)
|
|
|
|
goto error;
|
|
|
|
grp_data->tbl = tbl;
|
|
|
|
if (attr->group) {
|
|
|
|
/* Jump action to be used by non-root tables. */
|
|
|
|
jump = mlx5dr_action_create_dest_table
|
|
|
|
(priv->dr_ctx, tbl,
|
|
|
|
mlx5_hw_act_flag[!!attr->group][dr_tbl_attr.type]);
|
|
|
|
if (!jump)
|
|
|
|
goto error;
|
|
|
|
grp_data->jump.hws_action = jump;
|
|
|
|
/* Jump action to be used by the root table. */
|
|
|
|
jump = mlx5dr_action_create_dest_table
|
|
|
|
(priv->dr_ctx, tbl,
|
|
|
|
mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_ROOT]
|
|
|
|
[dr_tbl_attr.type]);
|
|
|
|
if (!jump)
|
|
|
|
goto error;
|
|
|
|
grp_data->jump.root_action = jump;
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
grp_data->dev = dev;
|
2022-02-24 13:40:44 +00:00
|
|
|
grp_data->idx = idx;
|
|
|
|
grp_data->group_id = attr->group;
|
|
|
|
grp_data->type = dr_tbl_attr.type;
|
|
|
|
return &grp_data->entry;
|
|
|
|
error:
|
|
|
|
if (grp_data->jump.root_action)
|
|
|
|
mlx5dr_action_destroy(grp_data->jump.root_action);
|
|
|
|
if (grp_data->jump.hws_action)
|
|
|
|
mlx5dr_action_destroy(grp_data->jump.hws_action);
|
|
|
|
if (tbl)
|
|
|
|
mlx5dr_table_destroy(tbl);
|
|
|
|
if (idx)
|
|
|
|
mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], idx);
|
|
|
|
rte_flow_error_set(error, ENOMEM,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot allocate flow dr table");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Remove group callback.
|
|
|
|
*
|
|
|
|
* @param[in] tool_ctx
|
|
|
|
* Pointer to the hash list related context.
|
|
|
|
* @param[in] entry
|
|
|
|
* Pointer to the entry to be removed.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
flow_hw_grp_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry)
|
|
|
|
{
|
|
|
|
struct mlx5_dev_ctx_shared *sh = tool_ctx;
|
|
|
|
struct mlx5_flow_group *grp_data =
|
|
|
|
container_of(entry, struct mlx5_flow_group, entry);
|
|
|
|
|
|
|
|
MLX5_ASSERT(entry && sh);
|
|
|
|
/* To use the wrapper glue functions instead. */
|
|
|
|
if (grp_data->jump.hws_action)
|
|
|
|
mlx5dr_action_destroy(grp_data->jump.hws_action);
|
|
|
|
if (grp_data->jump.root_action)
|
|
|
|
mlx5dr_action_destroy(grp_data->jump.root_action);
|
|
|
|
mlx5dr_table_destroy(grp_data->tbl);
|
|
|
|
mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], grp_data->idx);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Match group callback.
|
|
|
|
*
|
|
|
|
* @param[in] tool_ctx
|
|
|
|
* Pointer to the hash list related context.
|
|
|
|
* @param[in] entry
|
|
|
|
* Pointer to the group to be matched.
|
|
|
|
* @param[in] cb_ctx
|
|
|
|
* Pointer to the group matching context.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
 * 0 if the flow attributes match the group entry, 1 otherwise.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
flow_hw_grp_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry,
|
|
|
|
void *cb_ctx)
|
|
|
|
{
|
|
|
|
struct mlx5_flow_cb_ctx *ctx = cb_ctx;
|
|
|
|
struct mlx5_flow_group *grp_data =
|
|
|
|
container_of(entry, struct mlx5_flow_group, entry);
|
|
|
|
struct rte_flow_attr *attr =
|
|
|
|
(struct rte_flow_attr *)ctx->data;
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
return (grp_data->dev != ctx->dev) ||
|
|
|
|
(grp_data->group_id != attr->group) ||
|
2022-02-24 13:40:44 +00:00
|
|
|
((grp_data->type != MLX5DR_TABLE_TYPE_FDB) &&
|
|
|
|
attr->transfer) ||
|
|
|
|
((grp_data->type != MLX5DR_TABLE_TYPE_NIC_TX) &&
|
|
|
|
attr->egress) ||
|
|
|
|
((grp_data->type != MLX5DR_TABLE_TYPE_NIC_RX) &&
|
|
|
|
attr->ingress);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Clone group entry callback.
|
|
|
|
*
|
|
|
|
* @param[in] tool_ctx
|
|
|
|
* Pointer to the hash list related context.
|
|
|
|
* @param[in] entry
|
|
|
|
 * Pointer to the group entry to be cloned.
|
|
|
|
* @param[in] cb_ctx
|
|
|
|
 * Pointer to the group creation context.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
 * Pointer to the cloned group entry on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
struct mlx5_list_entry *
|
|
|
|
flow_hw_grp_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry,
|
|
|
|
void *cb_ctx)
|
|
|
|
{
|
|
|
|
struct mlx5_dev_ctx_shared *sh = tool_ctx;
|
|
|
|
struct mlx5_flow_cb_ctx *ctx = cb_ctx;
|
|
|
|
struct mlx5_flow_group *grp_data;
|
|
|
|
struct rte_flow_error *error = ctx->error;
|
|
|
|
uint32_t idx = 0;
|
|
|
|
|
|
|
|
grp_data = mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_HW_GRP], &idx);
|
|
|
|
if (!grp_data) {
|
|
|
|
rte_flow_error_set(error, ENOMEM,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"cannot allocate flow table data entry");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
memcpy(grp_data, oentry, sizeof(*grp_data));
|
|
|
|
grp_data->idx = idx;
|
|
|
|
return &grp_data->entry;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Free cloned group entry callback.
|
|
|
|
*
|
|
|
|
* @param[in] tool_ctx
|
|
|
|
* Pointer to the hash list related context.
|
|
|
|
* @param[in] entry
|
|
|
|
* Pointer to the group to be freed.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
flow_hw_grp_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry)
|
|
|
|
{
|
|
|
|
struct mlx5_dev_ctx_shared *sh = tool_ctx;
|
|
|
|
struct mlx5_flow_group *grp_data =
|
|
|
|
container_of(entry, struct mlx5_flow_group, entry);
|
|
|
|
|
|
|
|
mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], grp_data->idx);
|
|
|
|
}
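/*
 * Note on the callbacks above (illustrative summary, not functional code):
 * they back the hash list that caches flow groups. create/remove allocate and
 * release the underlying mlx5dr table together with its jump actions, match
 * decides whether a cached entry fits the requested device, group id and
 * table type, and clone/clone_free manage the per-lcore copies of the list
 * entry that are used for lockless lookups.
 */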
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
/**
|
|
|
|
* Create and cache a vport action for given @p dev port. vport actions
|
|
|
|
* cache is used in HWS with FDB flows.
|
|
|
|
*
|
|
|
|
 * This function does not create any action if the proxy port for @p dev port
|
|
|
|
* was not configured for HW Steering.
|
|
|
|
*
|
|
|
|
* This function assumes that E-Switch is enabled and PMD is running with
|
|
|
|
* HW Steering configured.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device which will be the action destination.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
 * 0 on success, a negative errno value otherwise.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
flow_hw_create_vport_action(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct rte_eth_dev *proxy_dev;
|
|
|
|
struct mlx5_priv *proxy_priv;
|
|
|
|
uint16_t port_id = dev->data->port_id;
|
|
|
|
uint16_t proxy_port_id = port_id;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = mlx5_flow_pick_transfer_proxy(dev, &proxy_port_id, NULL);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
proxy_dev = &rte_eth_devices[proxy_port_id];
|
|
|
|
proxy_priv = proxy_dev->data->dev_private;
|
|
|
|
if (!proxy_priv->hw_vport)
|
|
|
|
return 0;
|
|
|
|
if (proxy_priv->hw_vport[port_id]) {
|
|
|
|
DRV_LOG(ERR, "port %u HWS vport action already created",
|
|
|
|
port_id);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
proxy_priv->hw_vport[port_id] = mlx5dr_action_create_dest_vport
|
|
|
|
(proxy_priv->dr_ctx, priv->dev_port,
|
|
|
|
MLX5DR_ACTION_FLAG_HWS_FDB);
|
|
|
|
if (!proxy_priv->hw_vport[port_id]) {
|
|
|
|
DRV_LOG(ERR, "port %u unable to create HWS vport action",
|
|
|
|
port_id);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Destroys the vport action associated with @p dev device
|
|
|
|
* from actions' cache.
|
|
|
|
*
|
|
|
|
* This function does not destroy any action if there is no action cached
|
|
|
|
* for @p dev or proxy port was not configured for HW Steering.
|
|
|
|
*
|
|
|
|
* This function assumes that E-Switch is enabled and PMD is running with
|
|
|
|
* HW Steering configured.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device which will be the action destination.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
flow_hw_destroy_vport_action(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct rte_eth_dev *proxy_dev;
|
|
|
|
struct mlx5_priv *proxy_priv;
|
|
|
|
uint16_t port_id = dev->data->port_id;
|
|
|
|
uint16_t proxy_port_id = port_id;
|
|
|
|
|
|
|
|
if (mlx5_flow_pick_transfer_proxy(dev, &proxy_port_id, NULL))
|
|
|
|
return;
|
|
|
|
proxy_dev = &rte_eth_devices[proxy_port_id];
|
|
|
|
proxy_priv = proxy_dev->data->dev_private;
|
|
|
|
if (!proxy_priv->hw_vport || !proxy_priv->hw_vport[port_id])
|
|
|
|
return;
|
|
|
|
mlx5dr_action_destroy(proxy_priv->hw_vport[port_id]);
|
|
|
|
proxy_priv->hw_vport[port_id] = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
flow_hw_create_vport_actions(struct mlx5_priv *priv)
|
|
|
|
{
|
|
|
|
uint16_t port_id;
|
|
|
|
|
|
|
|
MLX5_ASSERT(!priv->hw_vport);
|
|
|
|
priv->hw_vport = mlx5_malloc(MLX5_MEM_ZERO,
|
|
|
|
sizeof(*priv->hw_vport) * RTE_MAX_ETHPORTS,
|
|
|
|
0, SOCKET_ID_ANY);
|
|
|
|
if (!priv->hw_vport)
|
|
|
|
return -ENOMEM;
|
|
|
|
DRV_LOG(DEBUG, "port %u :: creating vport actions", priv->dev_data->port_id);
|
|
|
|
DRV_LOG(DEBUG, "port %u :: domain_id=%u", priv->dev_data->port_id, priv->domain_id);
|
|
|
|
MLX5_ETH_FOREACH_DEV(port_id, NULL) {
|
|
|
|
struct mlx5_priv *port_priv = rte_eth_devices[port_id].data->dev_private;
|
|
|
|
|
|
|
|
if (!port_priv ||
|
|
|
|
port_priv->domain_id != priv->domain_id)
|
|
|
|
continue;
|
|
|
|
DRV_LOG(DEBUG, "port %u :: for port_id=%u, calling mlx5dr_action_create_dest_vport() with ibport=%u",
|
|
|
|
priv->dev_data->port_id, port_id, port_priv->dev_port);
|
|
|
|
priv->hw_vport[port_id] = mlx5dr_action_create_dest_vport
|
|
|
|
(priv->dr_ctx, port_priv->dev_port,
|
|
|
|
MLX5DR_ACTION_FLAG_HWS_FDB);
|
|
|
|
DRV_LOG(DEBUG, "port %u :: priv->hw_vport[%u]=%p",
|
|
|
|
priv->dev_data->port_id, port_id, (void *)priv->hw_vport[port_id]);
|
|
|
|
if (!priv->hw_vport[port_id])
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
flow_hw_free_vport_actions(struct mlx5_priv *priv)
|
|
|
|
{
|
|
|
|
uint16_t port_id;
|
|
|
|
|
|
|
|
if (!priv->hw_vport)
|
|
|
|
return;
|
|
|
|
for (port_id = 0; port_id < RTE_MAX_ETHPORTS; ++port_id)
|
|
|
|
if (priv->hw_vport[port_id])
|
|
|
|
mlx5dr_action_destroy(priv->hw_vport[port_id]);
|
|
|
|
mlx5_free(priv->hw_vport);
|
|
|
|
priv->hw_vport = NULL;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
static uint32_t
|
|
|
|
flow_hw_usable_lsb_vport_mask(struct mlx5_priv *priv)
|
|
|
|
{
|
|
|
|
uint32_t usable_mask = ~priv->vport_meta_mask;
|
|
|
|
|
|
|
|
if (usable_mask)
|
|
|
|
return (1 << rte_bsf32(usable_mask));
|
|
|
|
else
|
|
|
|
return 0;
|
|
|
|
}
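/*
 * Worked example (illustrative): if the kernel reserved REG_C_0 bits
 * 0xffff0000 for vport metadata, then vport_meta_mask == 0xffff0000,
 * usable_mask == 0x0000ffff, rte_bsf32() returns 0 and the helper yields
 * 0x1 - the lowest REG_C_0 bit left for the PMD marker. If all 32 bits were
 * reserved, the helper would return 0 and callers fail gracefully.
 */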
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
/**
|
|
|
|
* Creates a flow pattern template used to match on E-Switch Manager.
|
|
|
|
* This template is used to set up a table for SQ miss default flow.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow pattern template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_pattern_template *
|
|
|
|
flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct rte_flow_pattern_template_attr attr = {
|
|
|
|
.relaxed_matching = 0,
|
|
|
|
.transfer = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_item_ethdev port_spec = {
|
|
|
|
.port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
|
|
|
|
};
|
|
|
|
struct rte_flow_item_ethdev port_mask = {
|
|
|
|
.port_id = UINT16_MAX,
|
|
|
|
};
|
|
|
|
struct rte_flow_item items[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
|
|
|
|
.spec = &port_spec,
|
|
|
|
.mask = &port_mask,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_END,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
|
|
|
|
return flow_hw_pattern_template_create(dev, &attr, items, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
2022-10-20 15:41:40 +00:00
|
|
|
* Creates a flow pattern template used to match REG_C_0 and a TX queue.
|
|
|
|
 * Matching on REG_C_0 is set up to match on the least significant bit usable
|
|
|
|
 * by user-space, which is set when a packet originates from the E-Switch Manager.
|
|
|
|
*
|
2022-10-20 15:41:39 +00:00
|
|
|
* This template is used to set up a table for SQ miss default flow.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow pattern template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_pattern_template *
|
2022-10-20 15:41:40 +00:00
|
|
|
flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
|
2022-10-20 15:41:39 +00:00
|
|
|
{
|
2022-10-20 15:41:40 +00:00
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv);
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_pattern_template_attr attr = {
|
|
|
|
.relaxed_matching = 0,
|
|
|
|
.transfer = 1,
|
|
|
|
};
|
2022-10-20 15:41:40 +00:00
|
|
|
struct rte_flow_item_tag reg_c0_spec = {
|
|
|
|
.index = (uint8_t)REG_C_0,
|
|
|
|
};
|
|
|
|
struct rte_flow_item_tag reg_c0_mask = {
|
|
|
|
.index = 0xff,
|
|
|
|
};
|
2022-10-20 15:41:39 +00:00
|
|
|
struct mlx5_rte_flow_item_sq queue_mask = {
|
|
|
|
.queue = UINT32_MAX,
|
|
|
|
};
|
|
|
|
struct rte_flow_item items[] = {
|
2022-10-20 15:41:40 +00:00
|
|
|
{
|
|
|
|
.type = (enum rte_flow_item_type)
|
|
|
|
MLX5_RTE_FLOW_ITEM_TYPE_TAG,
|
|
|
|
.spec = ®_c0_spec,
|
|
|
|
.mask = ®_c0_mask,
|
|
|
|
},
|
2022-10-20 15:41:39 +00:00
|
|
|
{
|
|
|
|
.type = (enum rte_flow_item_type)
|
|
|
|
MLX5_RTE_FLOW_ITEM_TYPE_SQ,
|
|
|
|
.mask = &queue_mask,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_END,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
if (!marker_bit) {
|
|
|
|
DRV_LOG(ERR, "Unable to set up pattern template for SQ miss table");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
reg_c0_spec.data = marker_bit;
|
|
|
|
reg_c0_mask.data = marker_bit;
|
2022-10-20 15:41:39 +00:00
|
|
|
return flow_hw_pattern_template_create(dev, &attr, items, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Creates a flow pattern template with unmasked represented port matching.
|
|
|
|
* This template is used to set up a table for default transfer flows
|
|
|
|
* directing packets to group 1.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow pattern template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_pattern_template *
|
|
|
|
flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct rte_flow_pattern_template_attr attr = {
|
|
|
|
.relaxed_matching = 0,
|
|
|
|
.transfer = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_item_ethdev port_mask = {
|
|
|
|
.port_id = UINT16_MAX,
|
|
|
|
};
|
|
|
|
struct rte_flow_item items[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
|
|
|
|
.mask = &port_mask,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_END,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
|
|
|
|
return flow_hw_pattern_template_create(dev, &attr, items, NULL);
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
/*
|
|
|
|
 * Creates a flow pattern template matching all Ethernet packets.
|
|
|
|
* This template is used to set up a table for default Tx copy (Tx metadata
|
|
|
|
* to REG_C_1) flow rule usage.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow pattern template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_pattern_template *
|
|
|
|
flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct rte_flow_pattern_template_attr tx_pa_attr = {
|
|
|
|
.relaxed_matching = 0,
|
|
|
|
.egress = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_item_eth promisc = {
|
|
|
|
.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
|
|
|
|
.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
|
|
|
|
.type = 0,
|
|
|
|
};
|
|
|
|
struct rte_flow_item eth_all[] = {
|
|
|
|
[0] = {
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_ETH,
|
|
|
|
.spec = &promisc,
|
|
|
|
.mask = &promisc,
|
|
|
|
},
|
|
|
|
[1] = {
|
|
|
|
.type = RTE_FLOW_ITEM_TYPE_END,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
struct rte_flow_error drop_err;
|
|
|
|
|
|
|
|
RTE_SET_USED(drop_err);
|
|
|
|
return flow_hw_pattern_template_create(dev, &tx_pa_attr, eth_all, &drop_err);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Creates a flow actions template with modify field action and masked jump action.
|
|
|
|
* Modify field action sets the least significant bit of REG_C_0 (usable by user-space)
|
|
|
|
 * to 1, meaning that the packet originated from the E-Switch Manager. Jump action
|
|
|
|
* transfers steering to group 1.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow actions template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_actions_template *
|
|
|
|
flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv);
|
|
|
|
uint32_t marker_bit_mask = UINT32_MAX;
|
|
|
|
struct rte_flow_actions_template_attr attr = {
|
|
|
|
.transfer = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_modify_field set_reg_v = {
|
|
|
|
.operation = RTE_FLOW_MODIFY_SET,
|
|
|
|
.dst = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = REG_C_0,
|
|
|
|
},
|
|
|
|
.src = {
|
|
|
|
.field = RTE_FLOW_FIELD_VALUE,
|
|
|
|
},
|
|
|
|
.width = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_modify_field set_reg_m = {
|
|
|
|
.operation = RTE_FLOW_MODIFY_SET,
|
|
|
|
.dst = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = UINT32_MAX,
|
|
|
|
.offset = UINT32_MAX,
|
|
|
|
},
|
|
|
|
.src = {
|
|
|
|
.field = RTE_FLOW_FIELD_VALUE,
|
|
|
|
},
|
|
|
|
.width = UINT32_MAX,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_jump jump_v = {
|
|
|
|
.group = MLX5_HW_LOWEST_USABLE_GROUP,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_jump jump_m = {
|
|
|
|
.group = UINT32_MAX,
|
|
|
|
};
|
|
|
|
struct rte_flow_action actions_v[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
|
|
|
|
.conf = &set_reg_v,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_JUMP,
|
|
|
|
.conf = &jump_v,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
}
|
|
|
|
};
|
|
|
|
struct rte_flow_action actions_m[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
|
|
|
|
.conf = &set_reg_m,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_JUMP,
|
|
|
|
.conf = &jump_m,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
if (!marker_bit) {
|
|
|
|
DRV_LOG(ERR, "Unable to set up actions template for SQ miss table");
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
set_reg_v.dst.offset = rte_bsf32(marker_bit);
|
|
|
|
rte_memcpy(set_reg_v.src.value, &marker_bit, sizeof(marker_bit));
|
|
|
|
rte_memcpy(set_reg_m.src.value, &marker_bit_mask, sizeof(marker_bit_mask));
|
|
|
|
return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m, NULL);
|
|
|
|
}
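/*
 * Worked example (illustrative): with marker_bit == 0x1 the MODIFY_FIELD
 * value above becomes "set 1 bit of REG_C_0 at offset 0 to 1", while the mask
 * template keeps the destination offset/width and the jump group fully
 * masked, so every rule created from this template writes the same marker and
 * jumps to the same group without per-rule arguments.
 */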
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
/**
|
|
|
|
* Creates a flow actions template with an unmasked JUMP action. Flows
|
|
|
|
* based on this template will perform a jump to some group. This template
|
|
|
|
* is used to set up tables for control flows.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param group
|
|
|
|
* Destination group for this action template.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow actions template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_actions_template *
|
|
|
|
flow_hw_create_ctrl_jump_actions_template(struct rte_eth_dev *dev,
|
|
|
|
uint32_t group)
|
|
|
|
{
|
|
|
|
struct rte_flow_actions_template_attr attr = {
|
|
|
|
.transfer = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_jump jump_v = {
|
|
|
|
.group = group,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_jump jump_m = {
|
|
|
|
.group = UINT32_MAX,
|
|
|
|
};
|
|
|
|
struct rte_flow_action actions_v[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_JUMP,
|
|
|
|
.conf = &jump_v,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
}
|
|
|
|
};
|
|
|
|
struct rte_flow_action actions_m[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_JUMP,
|
|
|
|
.conf = &jump_m,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m,
|
|
|
|
NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
 * Creates a flow actions template with an unmasked REPRESENTED_PORT action.
|
|
|
|
* It is used to create control flow tables.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow action template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_actions_template *
|
|
|
|
flow_hw_create_ctrl_port_actions_template(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct rte_flow_actions_template_attr attr = {
|
|
|
|
.transfer = 1,
|
|
|
|
};
|
|
|
|
struct rte_flow_action_ethdev port_v = {
|
|
|
|
.port_id = 0,
|
|
|
|
};
|
|
|
|
struct rte_flow_action actions_v[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
|
|
|
|
.conf = &port_v,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
}
|
|
|
|
};
|
|
|
|
struct rte_flow_action_ethdev port_m = {
|
|
|
|
.port_id = 0,
|
|
|
|
};
|
|
|
|
struct rte_flow_action actions_m[] = {
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
|
|
|
|
.conf = &port_m,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m,
|
|
|
|
NULL);
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
/*
|
|
|
|
 * Creates an actions template that uses the modify header action for register
|
|
|
|
* copying. This template is used to set up a table for copy flow.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow actions template on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_actions_template *
|
|
|
|
flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct rte_flow_actions_template_attr tx_act_attr = {
|
|
|
|
.egress = 1,
|
|
|
|
};
|
|
|
|
const struct rte_flow_action_modify_field mreg_action = {
|
|
|
|
.operation = RTE_FLOW_MODIFY_SET,
|
|
|
|
.dst = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = REG_C_1,
|
|
|
|
},
|
|
|
|
.src = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = REG_A,
|
|
|
|
},
|
|
|
|
.width = 32,
|
|
|
|
};
|
|
|
|
const struct rte_flow_action_modify_field mreg_mask = {
|
|
|
|
.operation = RTE_FLOW_MODIFY_SET,
|
|
|
|
.dst = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = UINT32_MAX,
|
|
|
|
.offset = UINT32_MAX,
|
|
|
|
},
|
|
|
|
.src = {
|
|
|
|
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
|
|
|
|
.level = UINT32_MAX,
|
|
|
|
.offset = UINT32_MAX,
|
|
|
|
},
|
|
|
|
.width = UINT32_MAX,
|
|
|
|
};
|
|
|
|
const struct rte_flow_action copy_reg_action[] = {
|
|
|
|
[0] = {
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
|
|
|
|
.conf = &mreg_action,
|
|
|
|
},
|
|
|
|
[1] = {
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
const struct rte_flow_action copy_reg_mask[] = {
|
|
|
|
[0] = {
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
|
|
|
|
.conf = &mreg_mask,
|
|
|
|
},
|
|
|
|
[1] = {
|
|
|
|
.type = RTE_FLOW_ACTION_TYPE_END,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
struct rte_flow_error drop_err;
|
|
|
|
|
|
|
|
RTE_SET_USED(drop_err);
|
|
|
|
return flow_hw_actions_template_create(dev, &tx_act_attr, copy_reg_action,
|
|
|
|
copy_reg_mask, &drop_err);
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
/**
|
|
|
|
* Creates a control flow table used to transfer traffic from E-Switch Manager
|
|
|
|
* and TX queues from group 0 to group 1.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param it
|
|
|
|
* Pointer to flow pattern template.
|
|
|
|
* @param at
|
|
|
|
* Pointer to flow actions template.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow table on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_template_table*
|
|
|
|
flow_hw_create_ctrl_sq_miss_root_table(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_pattern_template *it,
|
|
|
|
struct rte_flow_actions_template *at)
|
|
|
|
{
|
|
|
|
struct rte_flow_template_table_attr attr = {
|
|
|
|
.flow_attr = {
|
|
|
|
.group = 0,
|
|
|
|
.priority = 0,
|
|
|
|
.ingress = 0,
|
|
|
|
.egress = 0,
|
|
|
|
.transfer = 1,
|
|
|
|
},
|
|
|
|
.nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES,
|
|
|
|
};
|
2022-10-20 15:41:40 +00:00
|
|
|
struct mlx5_flow_template_table_cfg cfg = {
|
|
|
|
.attr = attr,
|
|
|
|
.external = false,
|
|
|
|
};
|
2022-10-20 15:41:39 +00:00
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, NULL);
|
2022-10-20 15:41:39 +00:00
|
|
|
}
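/*
 * Illustrative sketch (hypothetical helper, not part of the driver): the
 * application-facing equivalent of the internal table creation above, done
 * through the generic template API. The pattern and actions templates are
 * assumed to have been created beforehand on port_id.
 */
static __rte_unused struct rte_flow_template_table *
flow_hw_table_create_usage_sketch(uint16_t port_id,
				  struct rte_flow_pattern_template *pt,
				  struct rte_flow_actions_template *at)
{
	const struct rte_flow_template_table_attr attr = {
		.flow_attr = {
			.group = 1,
			.ingress = 1,
		},
		.nb_flows = 1 << 16, /* Pre-allocated rule capacity. */
	};
	struct rte_flow_error error;

	return rte_flow_template_table_create(port_id, &attr, &pt, 1,
					      &at, 1, &error);
}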
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
 * Creates a non-root control flow table in group 1 used to forward traffic
|
|
|
|
 * originating from TX queues to the destination represented port.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param it
|
|
|
|
* Pointer to flow pattern template.
|
|
|
|
* @param at
|
|
|
|
* Pointer to flow actions template.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow table on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_template_table*
|
|
|
|
flow_hw_create_ctrl_sq_miss_table(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_pattern_template *it,
|
|
|
|
struct rte_flow_actions_template *at)
|
|
|
|
{
|
|
|
|
struct rte_flow_template_table_attr attr = {
|
|
|
|
.flow_attr = {
|
2022-10-20 15:41:40 +00:00
|
|
|
.group = 1,
|
|
|
|
.priority = MLX5_HW_LOWEST_PRIO_NON_ROOT,
|
2022-10-20 15:41:39 +00:00
|
|
|
.ingress = 0,
|
|
|
|
.egress = 0,
|
|
|
|
.transfer = 1,
|
|
|
|
},
|
|
|
|
.nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES,
|
|
|
|
};
|
2022-10-20 15:41:40 +00:00
|
|
|
struct mlx5_flow_template_table_cfg cfg = {
|
|
|
|
.attr = attr,
|
|
|
|
.external = false,
|
|
|
|
};
|
|
|
|
|
|
|
|
return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * Creates the default Tx metadata copy table on NIC Tx group 0.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param pt
|
|
|
|
* Pointer to flow pattern template.
|
|
|
|
* @param at
|
|
|
|
* Pointer to flow actions template.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow table on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_template_table*
|
|
|
|
flow_hw_create_tx_default_mreg_copy_table(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_pattern_template *pt,
|
|
|
|
struct rte_flow_actions_template *at)
|
|
|
|
{
|
|
|
|
struct rte_flow_template_table_attr tx_tbl_attr = {
|
|
|
|
.flow_attr = {
|
|
|
|
.group = 0, /* Root */
|
|
|
|
.priority = MLX5_HW_LOWEST_PRIO_ROOT,
|
|
|
|
.egress = 1,
|
|
|
|
},
|
|
|
|
.nb_flows = 1, /* One default flow rule for all. */
|
|
|
|
};
|
|
|
|
struct mlx5_flow_template_table_cfg tx_tbl_cfg = {
|
|
|
|
.attr = tx_tbl_attr,
|
|
|
|
.external = false,
|
|
|
|
};
|
|
|
|
struct rte_flow_error drop_err;
|
2022-10-20 15:41:39 +00:00
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
RTE_SET_USED(drop_err);
|
|
|
|
return flow_hw_table_create(dev, &tx_tbl_cfg, &pt, 1, &at, 1, &drop_err);
|
2022-10-20 15:41:39 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Creates a control flow table used to transfer traffic
|
|
|
|
* from group 0 to group 1.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param it
|
|
|
|
* Pointer to flow pattern template.
|
|
|
|
* @param at
|
|
|
|
* Pointer to flow actions template.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Pointer to flow table on success, NULL otherwise.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_template_table *
|
|
|
|
flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_pattern_template *it,
|
|
|
|
struct rte_flow_actions_template *at)
|
|
|
|
{
|
|
|
|
struct rte_flow_template_table_attr attr = {
|
|
|
|
.flow_attr = {
|
|
|
|
.group = 0,
|
2022-10-20 15:41:40 +00:00
|
|
|
.priority = MLX5_HW_LOWEST_PRIO_ROOT,
|
2022-10-20 15:41:39 +00:00
|
|
|
.ingress = 0,
|
|
|
|
.egress = 0,
|
|
|
|
.transfer = 1,
|
|
|
|
},
|
|
|
|
.nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES,
|
|
|
|
};
|
2022-10-20 15:41:40 +00:00
|
|
|
struct mlx5_flow_template_table_cfg cfg = {
|
|
|
|
.attr = attr,
|
|
|
|
.external = false,
|
|
|
|
};
|
2022-10-20 15:41:39 +00:00
|
|
|
|
2022-10-20 15:41:40 +00:00
|
|
|
return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, NULL);
|
2022-10-20 15:41:39 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Creates a set of flow tables used to create control flows used
|
|
|
|
* when E-Switch is engaged.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
 * 0 on success, a negative errno value (-EINVAL) otherwise.
|
|
|
|
*/
|
|
|
|
static __rte_unused int
|
|
|
|
flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL;
|
2022-10-20 15:41:40 +00:00
|
|
|
struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL;
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_pattern_template *port_items_tmpl = NULL;
|
2022-10-20 15:41:40 +00:00
|
|
|
struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL;
|
|
|
|
struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL;
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_actions_template *port_actions_tmpl = NULL;
|
|
|
|
struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
|
2022-10-20 15:41:40 +00:00
|
|
|
struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
|
|
|
|
uint32_t xmeta = priv->sh->config.dv_xmeta_en;
|
2022-10-20 15:41:39 +00:00
|
|
|
|
|
|
|
/* Item templates */
|
|
|
|
esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev);
|
|
|
|
if (!esw_mgr_items_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
|
|
|
|
" template for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev);
|
|
|
|
if (!regc_sq_items_tmpl) {
|
2022-10-20 15:41:39 +00:00
|
|
|
DRV_LOG(ERR, "port %u failed to create SQ item template for"
|
|
|
|
" control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev);
|
|
|
|
if (!port_items_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create SQ item template for"
|
|
|
|
" control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
if (xmeta == MLX5_XMETA_MODE_META32_HWS) {
|
|
|
|
tx_meta_items_tmpl = flow_hw_create_tx_default_mreg_copy_pattern_template(dev);
|
|
|
|
if (!tx_meta_items_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
|
|
|
|
" template for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
/* Action templates */
|
2022-10-20 15:41:40 +00:00
|
|
|
regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev);
|
|
|
|
if (!regc_jump_actions_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
|
2022-10-20 15:41:39 +00:00
|
|
|
" for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev);
|
|
|
|
if (!port_actions_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create port action template"
|
|
|
|
" for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
|
|
|
|
(dev, MLX5_HW_LOWEST_USABLE_GROUP);
|
2022-10-20 15:41:39 +00:00
|
|
|
if (!jump_one_actions_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create jump action template"
|
|
|
|
" for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
if (xmeta == MLX5_XMETA_MODE_META32_HWS) {
|
|
|
|
tx_meta_actions_tmpl = flow_hw_create_tx_default_mreg_copy_actions_template(dev);
|
|
|
|
if (!tx_meta_actions_tmpl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
|
|
|
|
" template for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
/* Tables */
|
|
|
|
MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
|
|
|
|
priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
|
2022-10-20 15:41:40 +00:00
|
|
|
(dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl);
|
2022-10-20 15:41:39 +00:00
|
|
|
if (!priv->hw_esw_sq_miss_root_tbl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)"
|
|
|
|
" for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
|
2022-10-20 15:41:40 +00:00
|
|
|
priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
|
2022-10-20 15:41:39 +00:00
|
|
|
port_actions_tmpl);
|
|
|
|
if (!priv->hw_esw_sq_miss_tbl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)"
|
|
|
|
" for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
|
|
|
|
priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
|
|
|
|
jump_one_actions_tmpl);
|
|
|
|
if (!priv->hw_esw_zero_tbl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create table for default jump to group 1"
|
|
|
|
" for control flows", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
if (xmeta == MLX5_XMETA_MODE_META32_HWS) {
|
|
|
|
MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
|
|
|
|
priv->hw_tx_meta_cpy_tbl = flow_hw_create_tx_default_mreg_copy_table(dev,
|
|
|
|
tx_meta_items_tmpl, tx_meta_actions_tmpl);
|
|
|
|
if (!priv->hw_tx_meta_cpy_tbl) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to create table for default"
|
|
|
|
" Tx metadata copy flow rule", dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
return 0;
|
|
|
|
error:
|
|
|
|
if (priv->hw_esw_zero_tbl) {
|
|
|
|
flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL);
|
|
|
|
priv->hw_esw_zero_tbl = NULL;
|
|
|
|
}
|
|
|
|
if (priv->hw_esw_sq_miss_tbl) {
|
|
|
|
flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL);
|
|
|
|
priv->hw_esw_sq_miss_tbl = NULL;
|
|
|
|
}
|
|
|
|
if (priv->hw_esw_sq_miss_root_tbl) {
|
|
|
|
flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
|
|
|
|
priv->hw_esw_sq_miss_root_tbl = NULL;
|
|
|
|
}
|
2022-10-20 15:41:40 +00:00
|
|
|
if (xmeta == MLX5_XMETA_MODE_META32_HWS && tx_meta_actions_tmpl)
|
|
|
|
flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
|
2022-10-20 15:41:39 +00:00
|
|
|
if (jump_one_actions_tmpl)
|
|
|
|
flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
|
|
|
|
if (port_actions_tmpl)
|
|
|
|
flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
|
2022-10-20 15:41:40 +00:00
|
|
|
if (regc_jump_actions_tmpl)
|
|
|
|
flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
|
|
|
|
if (xmeta == MLX5_XMETA_MODE_META32_HWS && tx_meta_items_tmpl)
|
|
|
|
flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
|
2022-10-20 15:41:39 +00:00
|
|
|
if (port_items_tmpl)
|
|
|
|
flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
|
2022-10-20 15:41:40 +00:00
|
|
|
if (regc_sq_items_tmpl)
|
|
|
|
flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL);
|
2022-10-20 15:41:39 +00:00
|
|
|
if (esw_mgr_items_tmpl)
|
|
|
|
flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:44 +00:00
|
|
|
static void
|
|
|
|
flow_hw_ct_mng_destroy(struct rte_eth_dev *dev,
|
|
|
|
struct mlx5_aso_ct_pools_mng *ct_mng)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
|
|
|
|
mlx5_aso_ct_queue_uninit(priv->sh, ct_mng);
|
|
|
|
mlx5_free(ct_mng);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
flow_hw_ct_pool_destroy(struct rte_eth_dev *dev __rte_unused,
|
|
|
|
struct mlx5_aso_ct_pool *pool)
|
|
|
|
{
|
|
|
|
if (pool->dr_action)
|
|
|
|
mlx5dr_action_destroy(pool->dr_action);
|
|
|
|
if (pool->devx_obj)
|
|
|
|
claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
|
|
|
|
if (pool->cts)
|
|
|
|
mlx5_ipool_destroy(pool->cts);
|
|
|
|
mlx5_free(pool);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct mlx5_aso_ct_pool *
|
|
|
|
flow_hw_ct_pool_create(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_port_attr *port_attr)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_aso_ct_pool *pool;
|
|
|
|
struct mlx5_devx_obj *obj;
|
|
|
|
uint32_t nb_cts = rte_align32pow2(port_attr->nb_conn_tracks);
|
|
|
|
uint32_t log_obj_size = rte_log2_u32(nb_cts);
|
|
|
|
struct mlx5_indexed_pool_config cfg = {
|
|
|
|
.size = sizeof(struct mlx5_aso_ct_action),
|
|
|
|
.trunk_size = 1 << 12,
|
|
|
|
.per_core_cache = 1 << 13,
|
|
|
|
.need_lock = 1,
|
|
|
|
.release_mem_en = !!priv->sh->config.reclaim_mode,
|
|
|
|
.malloc = mlx5_malloc,
|
|
|
|
.free = mlx5_free,
|
|
|
|
.type = "mlx5_hw_ct_action",
|
|
|
|
};
|
|
|
|
int reg_id;
|
|
|
|
uint32_t flags;
|
|
|
|
|
|
|
|
pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY);
|
|
|
|
if (!pool) {
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
|
|
|
|
priv->sh->cdev->pdn,
|
|
|
|
log_obj_size);
|
|
|
|
if (!obj) {
|
|
|
|
rte_errno = ENODATA;
|
|
|
|
DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
pool->devx_obj = obj;
|
|
|
|
reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, NULL);
|
|
|
|
flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
|
|
|
|
if (priv->sh->config.dv_esw_en && priv->master)
|
|
|
|
flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
|
|
|
|
pool->dr_action = mlx5dr_action_create_aso_ct(priv->dr_ctx,
|
|
|
|
(struct mlx5dr_devx_obj *)obj,
|
|
|
|
reg_id - REG_C_0, flags);
|
|
|
|
if (!pool->dr_action)
|
|
|
|
goto err;
|
|
|
|
/*
|
|
|
|
 * No need for a per-core cache if the CT number is small, since
|
|
|
|
 * the flow insertion rate will be very limited in that case. Here,
|
|
|
|
 * shrink the trunk size below the default of 4K entries.
|
|
|
|
*/
|
|
|
|
if (nb_cts <= cfg.trunk_size) {
|
|
|
|
cfg.per_core_cache = 0;
|
|
|
|
cfg.trunk_size = nb_cts;
|
|
|
|
} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
|
|
|
|
cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
|
|
|
|
}
|
|
|
|
pool->cts = mlx5_ipool_create(&cfg);
|
|
|
|
if (!pool->cts)
|
|
|
|
goto err;
|
|
|
|
pool->sq = priv->ct_mng->aso_sqs;
|
|
|
|
/* Assign the last extra ASO SQ as public SQ. */
|
|
|
|
pool->shared_sq = &priv->ct_mng->aso_sqs[priv->nb_queue - 1];
|
|
|
|
return pool;
|
|
|
|
err:
|
|
|
|
flow_hw_ct_pool_destroy(dev, pool);
|
|
|
|
return NULL;
|
|
|
|
}
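/*
 * Worked example (illustrative): with port_attr->nb_conn_tracks == 100,
 * nb_cts is rounded up to 128 and log_obj_size becomes 7, so a single DevX
 * object covering 128 CT contexts is created. Since 128 is below the 4K
 * trunk size, the per-core cache is disabled and the index pool trunk is
 * shrunk to exactly 128 entries.
 */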
|
|
|
|
|
2022-02-24 13:40:41 +00:00
|
|
|
/**
|
|
|
|
* Configure port HWS resources.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] port_attr
|
|
|
|
* Port configuration attributes.
|
|
|
|
* @param[in] nb_queue
|
|
|
|
* Number of queue.
|
|
|
|
* @param[in] queue_attr
|
|
|
|
* Array that holds attributes for each flow queue.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, a negative errno value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_configure(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_port_attr *port_attr,
|
|
|
|
uint16_t nb_queue,
|
|
|
|
const struct rte_flow_queue_attr *queue_attr[],
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5dr_context *dr_ctx = NULL;
|
|
|
|
struct mlx5dr_context_attr dr_ctx_attr = {0};
|
|
|
|
struct mlx5_hw_q *hw_q;
|
|
|
|
struct mlx5_hw_q_job *job = NULL;
|
|
|
|
uint32_t mem_size, i, j;
|
2022-02-24 13:40:47 +00:00
|
|
|
struct mlx5_indexed_pool_config cfg = {
|
2022-06-02 11:39:16 +00:00
|
|
|
.size = sizeof(struct mlx5_action_construct_data),
|
2022-02-24 13:40:47 +00:00
|
|
|
.trunk_size = 4096,
|
|
|
|
.need_lock = 1,
|
|
|
|
.release_mem_en = !!priv->sh->config.reclaim_mode,
|
|
|
|
.malloc = mlx5_malloc,
|
|
|
|
.free = mlx5_free,
|
|
|
|
.type = "mlx5_hw_action_construct_data",
|
|
|
|
};
|
2022-10-20 15:41:39 +00:00
|
|
|
/* Add one extra queue for PMD internal usage.
|
|
|
|
 * The last queue is reserved for the PMD's own flow operations.
|
|
|
|
*/
|
|
|
|
uint16_t nb_q_updated;
|
|
|
|
struct rte_flow_queue_attr **_queue_attr = NULL;
|
|
|
|
struct rte_flow_queue_attr ctrl_queue_attr = {0};
|
|
|
|
bool is_proxy = !!(priv->sh->config.dv_esw_en && priv->master);
|
2022-10-20 15:41:40 +00:00
|
|
|
int ret = 0;
|
2022-02-24 13:40:41 +00:00
|
|
|
|
|
|
|
if (!port_attr || !nb_queue || !queue_attr) {
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
/* In case re-configuring, release existing context at first. */
|
|
|
|
if (priv->dr_ctx) {
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
for (i = 0; i < priv->nb_queue; i++) {
|
2022-02-24 13:40:41 +00:00
|
|
|
hw_q = &priv->hw_q[i];
|
|
|
|
/* Make sure all queues are empty. */
|
|
|
|
if (hw_q->size != hw_q->job_idx) {
|
|
|
|
rte_errno = EBUSY;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
flow_hw_resource_release(dev);
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
ctrl_queue_attr.size = queue_attr[0]->size;
|
|
|
|
nb_q_updated = nb_queue + 1;
|
|
|
|
_queue_attr = mlx5_malloc(MLX5_MEM_ZERO,
|
|
|
|
nb_q_updated *
|
|
|
|
sizeof(struct rte_flow_queue_attr *),
|
|
|
|
64, SOCKET_ID_ANY);
|
|
|
|
if (!_queue_attr) {
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(_queue_attr, queue_attr,
|
|
|
|
sizeof(void *) * nb_queue);
|
|
|
|
_queue_attr[nb_queue] = &ctrl_queue_attr;
|
2022-02-24 13:40:47 +00:00
|
|
|
priv->acts_ipool = mlx5_ipool_create(&cfg);
|
|
|
|
if (!priv->acts_ipool)
|
|
|
|
goto err;
|
2022-02-24 13:40:41 +00:00
|
|
|
/* Allocate the queue job descriptor LIFO. */
|
2022-10-20 15:41:39 +00:00
|
|
|
mem_size = sizeof(priv->hw_q[0]) * nb_q_updated;
|
|
|
|
for (i = 0; i < nb_q_updated; i++) {
|
2022-02-24 13:40:41 +00:00
|
|
|
/*
|
|
|
|
 * Check that all queue sizes are the same, as required
|
|
|
|
 * by the HWS layer.
|
|
|
|
*/
|
2022-10-20 15:41:39 +00:00
|
|
|
if (_queue_attr[i]->size != _queue_attr[0]->size) {
|
2022-02-24 13:40:41 +00:00
|
|
|
rte_errno = EINVAL;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
mem_size += (sizeof(struct mlx5_hw_q_job *) +
|
2022-10-20 15:41:39 +00:00
|
|
|
sizeof(struct mlx5_hw_q_job) +
|
2022-02-24 13:40:51 +00:00
|
|
|
sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
|
2022-10-20 15:41:38 +00:00
|
|
|
sizeof(struct mlx5_modification_cmd) *
|
|
|
|
MLX5_MHDR_MAX_CMD +
|
2022-10-20 15:41:39 +00:00
|
|
|
sizeof(struct rte_flow_item) *
|
|
|
|
MLX5_HW_MAX_ITEMS) *
|
|
|
|
_queue_attr[i]->size;
|
2022-02-24 13:40:41 +00:00
|
|
|
}
|
|
|
|
priv->hw_q = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
|
|
|
|
64, SOCKET_ID_ANY);
|
|
|
|
if (!priv->hw_q) {
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
goto err;
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
for (i = 0; i < nb_q_updated; i++) {
|
2022-02-24 13:40:51 +00:00
|
|
|
uint8_t *encap = NULL;
|
2022-10-20 15:41:38 +00:00
|
|
|
struct mlx5_modification_cmd *mhdr_cmd = NULL;
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_item *items = NULL;
|
2022-02-24 13:40:51 +00:00
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
priv->hw_q[i].job_idx = _queue_attr[i]->size;
|
|
|
|
priv->hw_q[i].size = _queue_attr[i]->size;
|
2022-02-24 13:40:41 +00:00
|
|
|
if (i == 0)
|
|
|
|
priv->hw_q[i].job = (struct mlx5_hw_q_job **)
|
2022-10-20 15:41:39 +00:00
|
|
|
&priv->hw_q[nb_q_updated];
|
2022-02-24 13:40:41 +00:00
|
|
|
else
|
|
|
|
priv->hw_q[i].job = (struct mlx5_hw_q_job **)
|
2022-10-20 15:41:39 +00:00
|
|
|
&job[_queue_attr[i - 1]->size - 1].items
|
|
|
|
[MLX5_HW_MAX_ITEMS];
|
2022-02-24 13:40:41 +00:00
|
|
|
job = (struct mlx5_hw_q_job *)
|
2022-10-20 15:41:39 +00:00
|
|
|
&priv->hw_q[i].job[_queue_attr[i]->size];
|
|
|
|
mhdr_cmd = (struct mlx5_modification_cmd *)
|
|
|
|
&job[_queue_attr[i]->size];
|
|
|
|
encap = (uint8_t *)
|
|
|
|
&mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
|
|
|
|
items = (struct rte_flow_item *)
|
|
|
|
&encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
|
|
|
|
for (j = 0; j < _queue_attr[i]->size; j++) {
|
2022-10-20 15:41:38 +00:00
|
|
|
job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
|
2022-02-24 13:40:51 +00:00
|
|
|
job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
|
2022-10-20 15:41:39 +00:00
|
|
|
job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
|
2022-02-24 13:40:41 +00:00
|
|
|
priv->hw_q[i].job[j] = &job[j];
|
2022-02-24 13:40:51 +00:00
|
|
|
}
|
2022-02-24 13:40:41 +00:00
|
|
|
}
|
|
|
|
dr_ctx_attr.pd = priv->sh->cdev->pd;
|
2022-10-20 15:41:39 +00:00
|
|
|
dr_ctx_attr.queues = nb_q_updated;
|
2022-02-24 13:40:41 +00:00
|
|
|
/* Queue sizes are all the same, so take the first one. */
|
2022-10-20 15:41:39 +00:00
|
|
|
dr_ctx_attr.queue_size = _queue_attr[0]->size;
|
2022-02-24 13:40:41 +00:00
|
|
|
dr_ctx = mlx5dr_context_open(priv->sh->cdev->ctx, &dr_ctx_attr);
|
|
|
|
/* rte_errno has been updated by HWS layer. */
|
|
|
|
if (!dr_ctx)
|
|
|
|
goto err;
|
|
|
|
priv->dr_ctx = dr_ctx;
|
2022-10-20 15:41:39 +00:00
|
|
|
priv->nb_queue = nb_q_updated;
|
|
|
|
rte_spinlock_init(&priv->hw_ctrl_lock);
|
|
|
|
LIST_INIT(&priv->hw_ctrl_flows);
|
2022-10-20 15:41:41 +00:00
|
|
|
/* Initialize meter library. */
|
|
|
|
if (port_attr->nb_meters)
|
|
|
|
if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1))
|
|
|
|
goto err;
|
2022-02-24 13:40:44 +00:00
|
|
|
/* Add global actions. */
|
|
|
|
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
|
2022-10-20 15:41:39 +00:00
|
|
|
uint32_t act_flags = 0;
|
|
|
|
|
|
|
|
act_flags = mlx5_hw_act_flag[i][0] | mlx5_hw_act_flag[i][1];
|
|
|
|
if (is_proxy)
|
|
|
|
act_flags |= mlx5_hw_act_flag[i][2];
|
|
|
|
priv->hw_drop[i] = mlx5dr_action_create_dest_drop(priv->dr_ctx, act_flags);
|
|
|
|
if (!priv->hw_drop[i])
|
|
|
|
goto err;
|
2022-02-24 13:40:49 +00:00
|
|
|
priv->hw_tag[i] = mlx5dr_action_create_tag
|
|
|
|
(priv->dr_ctx, mlx5_hw_act_flag[i][0]);
|
|
|
|
if (!priv->hw_tag[i])
|
|
|
|
goto err;
|
2022-02-24 13:40:44 +00:00
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
if (is_proxy) {
|
|
|
|
ret = flow_hw_create_vport_actions(priv);
|
|
|
|
if (ret) {
|
|
|
|
rte_errno = -ret;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
ret = flow_hw_create_ctrl_tables(dev);
|
|
|
|
if (ret) {
|
|
|
|
rte_errno = -ret;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (_queue_attr)
|
|
|
|
mlx5_free(_queue_attr);
|
2022-10-20 15:41:44 +00:00
|
|
|
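/*
 * Set up ASO connection tracking: allocate the CT management structure
 * with one ASO SQ per queue, initialize the queues and create the CT pool.
 */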
if (port_attr->nb_conn_tracks) {
|
|
|
|
mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
|
|
|
|
sizeof(*priv->ct_mng);
|
|
|
|
priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
|
|
|
|
RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
|
|
|
|
if (!priv->ct_mng)
|
|
|
|
goto err;
|
|
|
|
if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
|
|
|
|
goto err;
|
|
|
|
priv->hws_ctpool = flow_hw_ct_pool_create(dev, port_attr);
|
|
|
|
if (!priv->hws_ctpool)
|
|
|
|
goto err;
|
|
|
|
priv->sh->ct_aso_en = 1;
|
|
|
|
}
|
net/mlx5: support flow counter action for HWS
This commit adds HW steering counter action support.
The pool mechanism is the basic data structure for the HW steering
counter.
The HW steering's counter pool is based on the rte_ring of zero-copy
variation.
There are two global rte_rings:
1. free_list:
Store the counters indexes, which are ready for use.
2. wait_reset_list:
Store the counters indexes, which are just freed from the user and
need to query the hardware counter to get the reset value before
this counter can be reused again.
The counter pool also supports cache per HW steering's queues, which are
also based on the rte_ring of zero-copy variation.
The cache can be configured in size, preload, threshold, and fetch size,
they are all exposed via device args.
The main operations of the counter pool are as follows:
- Get one counter from the pool:
1. The user call _get_* API.
2. If the cache is enabled, dequeue one counter index from the local
cache:
2. A: if the dequeued one from the local cache is still in reset
status (counter's query_gen_when_free is equal to pool's query
gen):
I. Flush all counters in the local cache back to global
wait_reset_list.
II. Fetch _fetch_sz_ counters into the cache from the global
free list.
III. Fetch one counter from the cache.
3. If the cache is empty, fetch _fetch_sz_ counters from the global
free list into the cache and fetch one counter from the cache.
- Free one counter into the pool:
1. The user calls _put_* API.
2. Put the counter into the local cache.
3. If the local cache is full:
A: Write back all counters above _threshold_ into the global
wait_reset_list.
B: Also, write back this counter into the global wait_reset_list.
When the local cache is disabled, _get_/_put_ cache directly from/into
global list.
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-20 15:41:42 +00:00
|
|
|
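/*
 * Create the HWS counter pool described above: global free and
 * wait-reset rings with an optional per-queue cache.
 */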
if (port_attr->nb_counters) {
|
|
|
|
priv->hws_cpool = mlx5_hws_cnt_pool_create(dev, port_attr,
|
|
|
|
nb_queue);
|
|
|
|
if (priv->hws_cpool == NULL)
|
|
|
|
goto err;
|
|
|
|
}
|
2022-02-24 13:40:41 +00:00
|
|
|
return 0;
|
|
|
|
err:
|
2022-10-20 15:41:44 +00:00
|
|
|
if (priv->hws_ctpool) {
|
|
|
|
flow_hw_ct_pool_destroy(dev, priv->hws_ctpool);
|
|
|
|
priv->hws_ctpool = NULL;
|
|
|
|
}
|
|
|
|
if (priv->ct_mng) {
|
|
|
|
flow_hw_ct_mng_destroy(dev, priv->ct_mng);
|
|
|
|
priv->ct_mng = NULL;
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
flow_hw_free_vport_actions(priv);
|
2022-02-24 13:40:44 +00:00
|
|
|
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
|
2022-10-20 15:41:39 +00:00
|
|
|
if (priv->hw_drop[i])
|
|
|
|
mlx5dr_action_destroy(priv->hw_drop[i]);
|
2022-02-24 13:40:49 +00:00
|
|
|
if (priv->hw_tag[i])
|
|
|
|
mlx5dr_action_destroy(priv->hw_tag[i]);
|
2022-02-24 13:40:44 +00:00
|
|
|
}
|
2022-02-24 13:40:41 +00:00
|
|
|
if (dr_ctx)
|
|
|
|
claim_zero(mlx5dr_context_close(dr_ctx));
|
|
|
|
mlx5_free(priv->hw_q);
|
|
|
|
priv->hw_q = NULL;
|
2022-02-24 13:40:47 +00:00
|
|
|
if (priv->acts_ipool) {
|
|
|
|
mlx5_ipool_destroy(priv->acts_ipool);
|
|
|
|
priv->acts_ipool = NULL;
|
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
if (_queue_attr)
|
|
|
|
mlx5_free(_queue_attr);
|
2022-10-20 15:41:40 +00:00
|
|
|
/* Do not overwrite the internal errno information. */
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
2022-02-24 13:40:41 +00:00
|
|
|
return rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"fail to configure port");
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Release HWS resources.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
flow_hw_resource_release(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-02-24 13:40:44 +00:00
|
|
|
struct rte_flow_template_table *tbl;
|
2022-02-24 13:40:42 +00:00
|
|
|
struct rte_flow_pattern_template *it;
|
2022-02-24 13:40:43 +00:00
|
|
|
struct rte_flow_actions_template *at;
|
2022-10-20 15:41:39 +00:00
|
|
|
int i;
|
2022-02-24 13:40:41 +00:00
|
|
|
|
|
|
|
if (!priv->dr_ctx)
|
|
|
|
return;
|
2022-10-20 15:41:39 +00:00
|
|
|
flow_hw_rxq_flag_set(dev, false);
|
|
|
|
flow_hw_flush_all_ctrl_flows(dev);
|
2022-10-20 15:41:43 +00:00
|
|
|
while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) {
|
|
|
|
tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo);
|
|
|
|
flow_hw_table_destroy(dev, tbl, NULL);
|
|
|
|
}
|
2022-02-24 13:40:44 +00:00
|
|
|
while (!LIST_EMPTY(&priv->flow_hw_tbl)) {
|
|
|
|
tbl = LIST_FIRST(&priv->flow_hw_tbl);
|
|
|
|
flow_hw_table_destroy(dev, tbl, NULL);
|
|
|
|
}
|
2022-02-24 13:40:42 +00:00
|
|
|
while (!LIST_EMPTY(&priv->flow_hw_itt)) {
|
|
|
|
it = LIST_FIRST(&priv->flow_hw_itt);
|
|
|
|
flow_hw_pattern_template_destroy(dev, it, NULL);
|
|
|
|
}
|
2022-02-24 13:40:43 +00:00
|
|
|
while (!LIST_EMPTY(&priv->flow_hw_at)) {
|
|
|
|
at = LIST_FIRST(&priv->flow_hw_at);
|
|
|
|
flow_hw_actions_template_destroy(dev, at, NULL);
|
|
|
|
}
|
2022-02-24 13:40:44 +00:00
|
|
|
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
|
2022-10-20 15:41:39 +00:00
|
|
|
if (priv->hw_drop[i])
|
|
|
|
mlx5dr_action_destroy(priv->hw_drop[i]);
|
2022-02-24 13:40:49 +00:00
|
|
|
if (priv->hw_tag[i])
|
|
|
|
mlx5dr_action_destroy(priv->hw_tag[i]);
|
2022-02-24 13:40:44 +00:00
|
|
|
}
|
2022-10-20 15:41:39 +00:00
|
|
|
flow_hw_free_vport_actions(priv);
|
2022-02-24 13:40:47 +00:00
|
|
|
if (priv->acts_ipool) {
|
|
|
|
mlx5_ipool_destroy(priv->acts_ipool);
|
|
|
|
priv->acts_ipool = NULL;
|
|
|
|
}
|
2022-10-20 15:41:42 +00:00
|
|
|
if (priv->hws_cpool)
|
|
|
|
mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
|
2022-10-20 15:41:44 +00:00
|
|
|
if (priv->hws_ctpool) {
|
|
|
|
flow_hw_ct_pool_destroy(dev, priv->hws_ctpool);
|
|
|
|
priv->hws_ctpool = NULL;
|
|
|
|
}
|
|
|
|
if (priv->ct_mng) {
|
|
|
|
flow_hw_ct_mng_destroy(dev, priv->ct_mng);
|
|
|
|
priv->ct_mng = NULL;
|
|
|
|
}
|
2022-02-24 13:40:41 +00:00
|
|
|
mlx5_free(priv->hw_q);
|
|
|
|
priv->hw_q = NULL;
|
|
|
|
claim_zero(mlx5dr_context_close(priv->dr_ctx));
|
|
|
|
priv->dr_ctx = NULL;
|
|
|
|
priv->nb_queue = 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:57:34 +00:00
|
|
|
/* Sets vport tag and mask, for given port, used in HWS rules. */
|
|
|
|
void
|
|
|
|
flow_hw_set_port_info(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
uint16_t port_id = dev->data->port_id;
|
|
|
|
struct flow_hw_port_info *info;
|
|
|
|
|
|
|
|
MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS);
|
|
|
|
info = &mlx5_flow_hw_port_infos[port_id];
|
|
|
|
info->regc_mask = priv->vport_meta_mask;
|
|
|
|
info->regc_value = priv->vport_meta_tag;
|
|
|
|
info->is_wire = priv->master;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Clears vport tag and mask used for HWS rules. */
|
|
|
|
void
|
|
|
|
flow_hw_clear_port_info(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
uint16_t port_id = dev->data->port_id;
|
|
|
|
struct flow_hw_port_info *info;
|
|
|
|
|
|
|
|
MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS);
|
|
|
|
info = &mlx5_flow_hw_port_infos[port_id];
|
|
|
|
info->regc_mask = 0;
|
|
|
|
info->regc_value = 0;
|
|
|
|
info->is_wire = 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:57:36 +00:00
|
|
|
/*
|
|
|
|
* Initialize the information of available tag registers and an intersection
|
|
|
|
* of all the probed devices' REG_C_Xs.
|
|
|
|
* Note: the steering layer has no port concept, so for now this cannot be done at a per-port level.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
*/
|
|
|
|
void flow_hw_init_tags_set(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
uint32_t meta_mode = priv->sh->config.dv_xmeta_en;
|
|
|
|
uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c;
|
|
|
|
uint32_t i, j;
|
|
|
|
enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON};
|
|
|
|
uint8_t unset = 0;
|
|
|
|
uint8_t copy_masks = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The capability is global for the common device but only used by the net driver.
|
|
|
|
* It is shared across the E-Switch domain.
|
|
|
|
*/
|
|
|
|
if (!!priv->sh->hws_tags)
|
|
|
|
return;
|
|
|
|
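/*
 * Exclude the registers that cannot be used as tags: the meter color
 * register, REG_C_6, REG_C_0 when E-Switch is enabled and REG_C_1 when
 * 32-bit HWS metadata is enabled.
 */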
unset |= 1 << (priv->mtr_color_reg - REG_C_0);
|
|
|
|
unset |= 1 << (REG_C_6 - REG_C_0);
|
2022-10-20 15:41:40 +00:00
|
|
|
if (priv->sh->config.dv_esw_en)
|
2022-10-20 15:57:36 +00:00
|
|
|
unset |= 1 << (REG_C_0 - REG_C_0);
|
2022-10-20 15:41:40 +00:00
|
|
|
if (meta_mode == MLX5_XMETA_MODE_META32_HWS)
|
|
|
|
unset |= 1 << (REG_C_1 - REG_C_0);
|
2022-10-20 15:57:36 +00:00
|
|
|
masks &= ~unset;
|
|
|
|
if (mlx5_flow_hw_avl_tags_init_cnt) {
|
2022-10-20 15:41:44 +00:00
|
|
|
MLX5_ASSERT(mlx5_flow_hw_aso_tag == priv->mtr_color_reg);
|
2022-10-20 15:57:36 +00:00
|
|
|
for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
|
|
|
|
if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) {
|
|
|
|
copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] =
|
|
|
|
mlx5_flow_hw_avl_tags[i];
|
2022-10-20 15:41:40 +00:00
|
|
|
copy_masks |= (1 << (mlx5_flow_hw_avl_tags[i] - REG_C_0));
|
2022-10-20 15:57:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
if (copy_masks != masks) {
|
|
|
|
j = 0;
|
|
|
|
for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++)
|
|
|
|
if (!!((1 << i) & copy_masks))
|
|
|
|
mlx5_flow_hw_avl_tags[j++] = copy[i];
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
j = 0;
|
|
|
|
for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
|
|
|
|
if (!!((1 << i) & masks))
|
|
|
|
mlx5_flow_hw_avl_tags[j++] =
|
|
|
|
(enum modify_reg)(i + (uint32_t)REG_C_0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
priv->sh->hws_tags = 1;
|
2022-10-20 15:41:44 +00:00
|
|
|
mlx5_flow_hw_aso_tag = (enum modify_reg)priv->mtr_color_reg;
|
2022-10-20 15:57:36 +00:00
|
|
|
mlx5_flow_hw_avl_tags_init_cnt++;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Reset the available tag registers information to NONE.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
*/
|
|
|
|
void flow_hw_clear_tags_set(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
|
|
|
|
if (!priv->sh->hws_tags)
|
|
|
|
return;
|
|
|
|
priv->sh->hws_tags = 0;
|
|
|
|
mlx5_flow_hw_avl_tags_init_cnt--;
|
|
|
|
if (!mlx5_flow_hw_avl_tags_init_cnt)
|
|
|
|
memset(mlx5_flow_hw_avl_tags, REG_NON,
|
|
|
|
sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX);
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:43 +00:00
|
|
|
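/*
 * Reference count and the dv_esw_en/dv_xmeta_en values cached from the
 * first configured port, used for META flow item translation when the
 * port context is not available.
 */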
uint32_t mlx5_flow_hw_flow_metadata_config_refcnt;
|
|
|
|
uint8_t mlx5_flow_hw_flow_metadata_esw_en;
|
|
|
|
uint8_t mlx5_flow_hw_flow_metadata_xmeta_en;
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Initializes static configuration of META flow items.
|
|
|
|
*
|
|
|
|
* As a temporary workaround, META flow item is translated to a register,
|
|
|
|
* based on statically saved dv_esw_en and dv_xmeta_en device arguments.
|
|
|
|
* It is a workaround for flow_hw_get_reg_id(), where port-specific information
|
|
|
|
* is not available at runtime.
|
|
|
|
*
|
|
|
|
* Values of dv_esw_en and dv_xmeta_en device arguments are taken from the first opened port.
|
|
|
|
* This means that each mlx5 port will use the same configuration for translation
|
|
|
|
* of META flow items.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
flow_hw_init_flow_metadata_config(struct rte_eth_dev *dev)
|
|
|
|
{
|
|
|
|
uint32_t refcnt;
|
|
|
|
|
|
|
|
refcnt = __atomic_fetch_add(&mlx5_flow_hw_flow_metadata_config_refcnt, 1,
|
|
|
|
__ATOMIC_RELAXED);
|
|
|
|
if (refcnt > 0)
|
|
|
|
return;
|
|
|
|
mlx5_flow_hw_flow_metadata_esw_en = MLX5_SH(dev)->config.dv_esw_en;
|
|
|
|
mlx5_flow_hw_flow_metadata_xmeta_en = MLX5_SH(dev)->config.dv_xmeta_en;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Clears statically stored configuration related to META flow items.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
flow_hw_clear_flow_metadata_config(void)
|
|
|
|
{
|
|
|
|
uint32_t refcnt;
|
|
|
|
|
|
|
|
refcnt = __atomic_sub_fetch(&mlx5_flow_hw_flow_metadata_config_refcnt, 1,
|
|
|
|
__ATOMIC_RELAXED);
|
|
|
|
if (refcnt > 0)
|
|
|
|
return;
|
|
|
|
mlx5_flow_hw_flow_metadata_esw_en = 0;
|
|
|
|
mlx5_flow_hw_flow_metadata_xmeta_en = 0;
|
|
|
|
}
|
|
|
|
|
2022-10-20 15:41:44 +00:00
|
|
|
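/**
 * Destroy a conntrack (CT) object referenced by an indirect action index.
 *
 * The owner port is decoded from @p idx and the CT object is returned to
 * that port's CT pool after its state is marked as free.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure (unused).
 * @param[in] idx
 *   CT indirect action index, encoding the owner port and the object index.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   0 on success, a negative value otherwise and rte_errno is set.
 */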
static int
|
|
|
|
flow_hw_conntrack_destroy(struct rte_eth_dev *dev __rte_unused,
|
|
|
|
uint32_t idx,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
|
|
|
|
uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
|
|
|
|
struct rte_eth_dev *owndev = &rte_eth_devices[owner];
|
|
|
|
struct mlx5_priv *priv = owndev->data->dev_private;
|
|
|
|
struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
|
|
|
|
struct mlx5_aso_ct_action *ct;
|
|
|
|
|
|
|
|
ct = mlx5_ipool_get(pool->cts, ct_idx);
|
|
|
|
if (!ct) {
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Invalid CT destruction index");
|
|
|
|
}
|
|
|
|
__atomic_store_n(&ct->state, ASO_CONNTRACK_FREE,
|
|
|
|
__ATOMIC_RELAXED);
|
|
|
|
mlx5_ipool_free(pool->cts, ct_idx);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
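/**
 * Query a conntrack (CT) object referenced by an indirect action index.
 *
 * Only the port owning the CT object may query it. The current profile is
 * read back from the hardware with an ASO WQE.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] idx
 *   CT indirect action index, encoding the owner port and the object index.
 * @param[out] profile
 *   Pointer to the conntrack profile to be filled.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   0 on success, a negative value otherwise and rte_errno is set.
 */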
static int
|
|
|
|
flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t idx,
|
|
|
|
struct rte_flow_action_conntrack *profile,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
|
|
|
|
struct mlx5_aso_ct_action *ct;
|
|
|
|
uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
|
|
|
|
uint32_t ct_idx;
|
|
|
|
|
|
|
|
if (owner != PORT_ID(priv))
|
|
|
|
return rte_flow_error_set(error, EACCES,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Can't query CT object owned by another port");
|
|
|
|
ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
|
|
|
|
ct = mlx5_ipool_get(pool->cts, ct_idx);
|
|
|
|
if (!ct) {
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Invalid CT query index");
|
|
|
|
}
|
|
|
|
profile->peer_port = ct->peer;
|
|
|
|
profile->is_original_dir = ct->is_original;
|
|
|
|
if (mlx5_aso_ct_query_by_wqe(priv->sh, MLX5_HW_INV_QUEUE, ct, profile))
|
|
|
|
return rte_flow_error_set(error, EIO,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Failed to query CT context");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
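/**
 * Update a conntrack (CT) object referenced by an indirect action index.
 *
 * Only the port owning the CT object may update it. The direction and/or
 * the state profile are updated, the latter through an ASO WQE; in
 * synchronous mode the call waits for the update to complete.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] queue
 *   HWS queue used to post the update, or MLX5_HW_INV_QUEUE for
 *   synchronous operation.
 * @param[in] action_conf
 *   Pointer to the conntrack modification specification.
 * @param[in] idx
 *   CT indirect action index, encoding the owner port and the object index.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   0 on success, a negative value otherwise and rte_errno is set.
 */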
static int
|
|
|
|
flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
|
|
|
|
const struct rte_flow_modify_conntrack *action_conf,
|
|
|
|
uint32_t idx, struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
|
|
|
|
struct mlx5_aso_ct_action *ct;
|
|
|
|
const struct rte_flow_action_conntrack *new_prf;
|
|
|
|
uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
|
|
|
|
uint32_t ct_idx;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (PORT_ID(priv) != owner)
|
|
|
|
return rte_flow_error_set(error, EACCES,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Can't update CT object owned by another port");
|
|
|
|
ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
|
|
|
|
ct = mlx5_ipool_get(pool->cts, ct_idx);
|
|
|
|
if (!ct) {
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Invalid CT update index");
|
|
|
|
}
|
|
|
|
new_prf = &action_conf->new_ct;
|
|
|
|
if (action_conf->direction)
|
|
|
|
ct->is_original = !!new_prf->is_original_dir;
|
|
|
|
if (action_conf->state) {
|
|
|
|
/* Only validate the profile when it needs to be updated. */
|
|
|
|
ret = mlx5_validate_action_ct(dev, new_prf, error);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
ret = mlx5_aso_ct_update_by_wqe(priv->sh, queue, ct, new_prf);
|
|
|
|
if (ret)
|
|
|
|
return rte_flow_error_set(error, EIO,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Failed to send CT context update WQE");
|
|
|
|
if (queue != MLX5_HW_INV_QUEUE)
|
|
|
|
return 0;
|
|
|
|
/* Block in synchronous mode until the context is ready or an error occurs. */
|
|
|
|
ret = mlx5_aso_ct_available(priv->sh, queue, ct);
|
|
|
|
if (ret)
|
|
|
|
rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Timeout to get the CT update");
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
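/**
 * Create a conntrack (CT) object and return it as an indirect action handle.
 *
 * The object is allocated from the port's CT pool and initialized with the
 * given profile through an ASO WQE; in synchronous mode the call waits for
 * the hardware to acknowledge the context before returning.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] queue
 *   HWS queue used to post the creation, or MLX5_HW_INV_QUEUE for
 *   synchronous operation.
 * @param[in] pro
 *   Pointer to the initial conntrack profile.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   A valid action handle on success, NULL otherwise and rte_errno is set.
 */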
static struct rte_flow_action_handle *
|
|
|
|
flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
|
|
|
|
const struct rte_flow_action_conntrack *pro,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
|
|
|
|
struct mlx5_aso_ct_action *ct;
|
|
|
|
uint32_t ct_idx = 0;
|
|
|
|
int ret;
|
|
|
|
bool async = !!(queue != MLX5_HW_INV_QUEUE);
|
|
|
|
|
|
|
|
if (!pool) {
|
|
|
|
rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
|
|
|
|
"CT is not enabled");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
|
|
|
|
if (!ct) {
|
|
|
|
rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
|
|
|
|
"Failed to allocate CT object");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
ct->offset = ct_idx - 1;
|
|
|
|
ct->is_original = !!pro->is_original_dir;
|
|
|
|
ct->peer = pro->peer_port;
|
|
|
|
ct->pool = pool;
|
|
|
|
if (mlx5_aso_ct_update_by_wqe(priv->sh, queue, ct, pro)) {
|
|
|
|
mlx5_ipool_free(pool->cts, ct_idx);
|
|
|
|
rte_flow_error_set(error, EBUSY,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
|
|
|
|
"Failed to update CT");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (!async) {
|
|
|
|
ret = mlx5_aso_ct_available(priv->sh, queue, ct);
|
|
|
|
if (ret) {
|
|
|
|
mlx5_ipool_free(pool->cts, ct_idx);
|
|
|
|
rte_flow_error_set(error, rte_errno,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
|
|
|
|
NULL,
|
|
|
|
"Timeout to get the CT update");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return (struct rte_flow_action_handle *)(uintptr_t)
|
|
|
|
MLX5_ACTION_CTX_CT_GEN_IDX(PORT_ID(priv), ct_idx);
|
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:50 +00:00
|
|
|
/**
|
|
|
|
* Create shared action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The HWS queue to be used for the operation.
|
|
|
|
* @param[in] attr
|
|
|
|
* Operation attribute.
|
|
|
|
* @param[in] conf
|
|
|
|
* Indirect action configuration.
|
|
|
|
* @param[in] action
|
|
|
|
* rte_flow action detail.
|
|
|
|
* @param[in] user_data
|
|
|
|
* Pointer to the user_data.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* Action handle on success, NULL otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_action_handle *
|
|
|
|
flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
|
|
|
|
const struct rte_flow_op_attr *attr,
|
|
|
|
const struct rte_flow_indir_action_conf *conf,
|
|
|
|
const struct rte_flow_action *action,
|
|
|
|
void *user_data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
2022-10-20 15:41:42 +00:00
|
|
|
struct rte_flow_action_handle *handle = NULL;
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
cnt_id_t cnt_id;
|
|
|
|
|
2022-02-24 13:40:50 +00:00
|
|
|
RTE_SET_USED(queue);
|
|
|
|
RTE_SET_USED(attr);
|
|
|
|
RTE_SET_USED(user_data);
|
2022-10-20 15:41:42 +00:00
|
|
|
switch (action->type) {
|
|
|
|
case RTE_FLOW_ACTION_TYPE_COUNT:
|
|
|
|
if (mlx5_hws_cnt_shared_get(priv->hws_cpool, &cnt_id))
|
|
|
|
rte_flow_error_set(error, ENODEV,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
NULL,
|
|
|
|
"counter are not configured!");
|
|
|
|
else
|
|
|
|
handle = (struct rte_flow_action_handle *)
|
|
|
|
(uintptr_t)cnt_id;
|
|
|
|
break;
|
2022-10-20 15:41:44 +00:00
|
|
|
case RTE_FLOW_ACTION_TYPE_CONNTRACK:
|
|
|
|
handle = flow_hw_conntrack_create(dev, queue, action->conf, error);
|
|
|
|
break;
|
2022-10-20 15:41:42 +00:00
|
|
|
default:
|
|
|
|
handle = flow_dv_action_create(dev, conf, action, error);
|
|
|
|
}
|
|
|
|
return handle;
|
2022-02-24 13:40:50 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Update shared action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The HWS queue to be used for the operation.
|
|
|
|
* @param[in] attr
|
|
|
|
* Operation attribute.
|
|
|
|
* @param[in] handle
|
|
|
|
* Action handle to be updated.
|
|
|
|
* @param[in] update
|
|
|
|
* Update value.
|
|
|
|
* @param[in] user_data
|
|
|
|
* Pointer to the user_data.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
|
|
|
|
const struct rte_flow_op_attr *attr,
|
|
|
|
struct rte_flow_action_handle *handle,
|
|
|
|
const void *update,
|
|
|
|
void *user_data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
2022-10-20 15:41:44 +00:00
|
|
|
uint32_t act_idx = (uint32_t)(uintptr_t)handle;
|
|
|
|
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
|
|
|
|
|
2022-02-24 13:40:50 +00:00
|
|
|
RTE_SET_USED(queue);
|
|
|
|
RTE_SET_USED(attr);
|
|
|
|
RTE_SET_USED(user_data);
|
2022-10-20 15:41:44 +00:00
|
|
|
switch (type) {
|
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_CT:
|
|
|
|
return flow_hw_conntrack_update(dev, queue, update, act_idx, error);
|
|
|
|
default:
|
|
|
|
return flow_dv_action_update(dev, handle, update, error);
|
|
|
|
}
|
2022-02-24 13:40:50 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Destroy shared action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the rte_eth_dev structure.
|
|
|
|
* @param[in] queue
|
|
|
|
* The HWS queue to be used for the operation.
|
|
|
|
* @param[in] attr
|
|
|
|
* Operation attribute.
|
|
|
|
* @param[in] handle
|
|
|
|
* Action handle to be destroyed.
|
|
|
|
* @param[in] user_data
|
|
|
|
* Pointer to the user_data.
|
|
|
|
* @param[out] error
|
|
|
|
* Pointer to error structure.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative value otherwise and rte_errno is set.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
|
|
|
|
const struct rte_flow_op_attr *attr,
|
|
|
|
struct rte_flow_action_handle *handle,
|
|
|
|
void *user_data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
2022-10-20 15:41:42 +00:00
|
|
|
uint32_t act_idx = (uint32_t)(uintptr_t)handle;
|
|
|
|
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
|
2022-02-24 13:40:50 +00:00
|
|
|
RTE_SET_USED(queue);
|
|
|
|
RTE_SET_USED(attr);
|
|
|
|
RTE_SET_USED(user_data);
|
2022-10-20 15:41:42 +00:00
|
|
|
switch (type) {
|
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_COUNT:
|
|
|
|
return mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
|
2022-10-20 15:41:44 +00:00
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_CT:
|
|
|
|
return flow_hw_conntrack_destroy(dev, act_idx, error);
|
2022-10-20 15:41:42 +00:00
|
|
|
default:
|
|
|
|
return flow_dv_action_destroy(dev, handle, error);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
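/**
 * Query an HWS counter and fill the flow query structure.
 *
 * Hits and bytes are reported relative to the last reset values and, when
 * requested, the reset baseline is updated.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] counter
 *   HWS counter identifier.
 * @param[out] data
 *   Pointer to the rte_flow_query_count structure to be filled.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   0 on success, a negative value otherwise and rte_errno is set.
 */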
static int
|
|
|
|
flow_hw_query_counter(const struct rte_eth_dev *dev, uint32_t counter,
|
|
|
|
void *data, struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
|
|
|
struct mlx5_hws_cnt *cnt;
|
|
|
|
struct rte_flow_query_count *qc = data;
|
|
|
|
uint32_t iidx = mlx5_hws_cnt_iidx(priv->hws_cpool, counter);
|
|
|
|
uint64_t pkts, bytes;
|
|
|
|
|
|
|
|
if (!mlx5_hws_cnt_id_valid(counter))
|
|
|
|
return rte_flow_error_set(error, EINVAL,
|
|
|
|
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
|
|
|
|
"counter are not available");
|
|
|
|
cnt = &priv->hws_cpool->pool[iidx];
|
|
|
|
__hws_cnt_query_raw(priv->hws_cpool, counter, &pkts, &bytes);
|
|
|
|
qc->hits_set = 1;
|
|
|
|
qc->bytes_set = 1;
|
|
|
|
qc->hits = pkts - cnt->reset.hits;
|
|
|
|
qc->bytes = bytes - cnt->reset.bytes;
|
|
|
|
if (qc->reset) {
|
|
|
|
cnt->reset.bytes = bytes;
|
|
|
|
cnt->reset.hits = pkts;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
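/**
 * Query an HWS flow rule.
 *
 * Only the COUNT action is currently supported; any other action, except
 * VOID, is rejected.
 *
 * @param[in] dev
 *   Pointer to the rte_eth_dev structure.
 * @param[in] flow
 *   Pointer to the flow rule to be queried.
 * @param[in] actions
 *   Actions to be queried, terminated by an END action.
 * @param[out] data
 *   Pointer to the query output structure.
 * @param[out] error
 *   Pointer to error structure.
 *
 * @return
 *   0 on success, a negative value otherwise.
 */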
static int
|
|
|
|
flow_hw_query(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow *flow __rte_unused,
|
|
|
|
const struct rte_flow_action *actions __rte_unused,
|
|
|
|
void *data __rte_unused,
|
|
|
|
struct rte_flow_error *error __rte_unused)
|
|
|
|
{
|
|
|
|
int ret = -EINVAL;
|
|
|
|
struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
|
|
|
|
|
|
|
|
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
|
|
|
|
switch (actions->type) {
|
|
|
|
case RTE_FLOW_ACTION_TYPE_VOID:
|
|
|
|
break;
|
|
|
|
case RTE_FLOW_ACTION_TYPE_COUNT:
|
|
|
|
ret = flow_hw_query_counter(dev, hw_flow->cnt_id, data,
|
|
|
|
error);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return rte_flow_error_set(error, ENOTSUP,
|
|
|
|
RTE_FLOW_ERROR_TYPE_ACTION,
|
|
|
|
actions,
|
|
|
|
"action not supported");
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Create indirect action.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the Ethernet device structure.
|
|
|
|
* @param[in] conf
|
|
|
|
* Shared action configuration.
|
|
|
|
* @param[in] action
|
|
|
|
* Action specification used to create indirect action.
|
|
|
|
* @param[out] error
|
|
|
|
* Perform verbose error reporting if not NULL. Initialized in case of
|
|
|
|
* error only.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* A valid shared action handle in case of success, NULL otherwise and
|
|
|
|
* rte_errno is set.
|
|
|
|
*/
|
|
|
|
static struct rte_flow_action_handle *
|
|
|
|
flow_hw_action_create(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_indir_action_conf *conf,
|
|
|
|
const struct rte_flow_action *action,
|
|
|
|
struct rte_flow_error *err)
|
|
|
|
{
|
|
|
|
return flow_hw_action_handle_create(dev, UINT32_MAX, NULL, conf, action,
|
|
|
|
NULL, err);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Destroy the indirect action.
|
|
|
|
* Release action related resources on the NIC and the memory.
|
|
|
|
* Lock free; the mutex should be acquired by the caller.
|
|
|
|
* Dispatcher for action type specific call.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the Ethernet device structure.
|
|
|
|
* @param[in] handle
|
|
|
|
* The indirect action object handle to be removed.
|
|
|
|
* @param[out] error
|
|
|
|
* Perform verbose error reporting if not NULL. Initialized in case of
|
|
|
|
* error only.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, otherwise negative errno value.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_action_destroy(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_action_handle *handle,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
return flow_hw_action_handle_destroy(dev, UINT32_MAX, NULL, handle,
|
|
|
|
NULL, error);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Updates in place shared action configuration.
|
|
|
|
*
|
|
|
|
* @param[in] dev
|
|
|
|
* Pointer to the Ethernet device structure.
|
|
|
|
* @param[in] handle
|
|
|
|
* The indirect action object handle to be updated.
|
|
|
|
* @param[in] update
|
|
|
|
* Action specification used to modify the action pointed by *handle*.
|
|
|
|
* *update* could be of same type with the action pointed by the *handle*
|
|
|
|
* handle argument, or some other structures like a wrapper, depending on
|
|
|
|
* the indirect action type.
|
|
|
|
* @param[out] error
|
|
|
|
* Perform verbose error reporting if not NULL. Initialized in case of
|
|
|
|
* error only.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, otherwise negative errno value.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_action_update(struct rte_eth_dev *dev,
|
|
|
|
struct rte_flow_action_handle *handle,
|
|
|
|
const void *update,
|
|
|
|
struct rte_flow_error *err)
|
|
|
|
{
|
|
|
|
return flow_hw_action_handle_update(dev, UINT32_MAX, NULL, handle,
|
|
|
|
update, NULL, err);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
flow_hw_action_query(struct rte_eth_dev *dev,
|
|
|
|
const struct rte_flow_action_handle *handle, void *data,
|
|
|
|
struct rte_flow_error *error)
|
|
|
|
{
|
|
|
|
uint32_t act_idx = (uint32_t)(uintptr_t)handle;
|
|
|
|
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
|
|
|
|
|
|
|
|
switch (type) {
|
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_COUNT:
|
|
|
|
return flow_hw_query_counter(dev, act_idx, data, error);
|
2022-10-20 15:41:44 +00:00
|
|
|
case MLX5_INDIRECT_ACTION_TYPE_CT:
|
|
|
|
return flow_hw_conntrack_query(dev, act_idx, data, error);
|
2022-10-20 15:41:42 +00:00
|
|
|
default:
|
|
|
|
return flow_dv_action_query(dev, handle, data, error);
|
|
|
|
}
|
2022-02-24 13:40:50 +00:00
|
|
|
}
|
|
|
|
|
2022-02-24 13:40:41 +00:00
|
|
|
const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
|
|
|
|
.info_get = flow_hw_info_get,
|
|
|
|
.configure = flow_hw_configure,
|
2022-10-20 15:41:41 +00:00
|
|
|
.pattern_validate = flow_hw_pattern_validate,
|
2022-02-24 13:40:42 +00:00
|
|
|
.pattern_template_create = flow_hw_pattern_template_create,
|
|
|
|
.pattern_template_destroy = flow_hw_pattern_template_destroy,
|
2022-10-20 15:41:41 +00:00
|
|
|
.actions_validate = flow_hw_actions_validate,
|
2022-02-24 13:40:43 +00:00
|
|
|
.actions_template_create = flow_hw_actions_template_create,
|
|
|
|
.actions_template_destroy = flow_hw_actions_template_destroy,
|
2022-10-20 15:41:40 +00:00
|
|
|
.template_table_create = flow_hw_template_table_create,
|
2022-02-24 13:40:44 +00:00
|
|
|
.template_table_destroy = flow_hw_table_destroy,
|
2022-02-24 13:40:45 +00:00
|
|
|
.async_flow_create = flow_hw_async_flow_create,
|
|
|
|
.async_flow_destroy = flow_hw_async_flow_destroy,
|
|
|
|
.pull = flow_hw_pull,
|
|
|
|
.push = flow_hw_push,
|
2022-02-24 13:40:50 +00:00
|
|
|
.async_action_create = flow_hw_action_handle_create,
|
|
|
|
.async_action_destroy = flow_hw_action_handle_destroy,
|
|
|
|
.async_action_update = flow_hw_action_handle_update,
|
|
|
|
.action_validate = flow_dv_action_validate,
|
2022-10-20 15:41:42 +00:00
|
|
|
.action_create = flow_hw_action_create,
|
|
|
|
.action_destroy = flow_hw_action_destroy,
|
|
|
|
.action_update = flow_hw_action_update,
|
|
|
|
.action_query = flow_hw_action_query,
|
|
|
|
.query = flow_hw_query,
|
2022-02-24 13:40:41 +00:00
|
|
|
};
|
|
|
|
|
2022-10-20 15:41:39 +00:00
|
|
|
/**
|
|
|
|
* Creates a control flow using flow template API on @p proxy_dev device,
|
|
|
|
* on behalf of @p owner_dev device.
|
|
|
|
*
|
|
|
|
* This function uses locks internally to synchronize access to the
|
|
|
|
* flow queue.
|
|
|
|
*
|
|
|
|
* Created flow is stored in private list associated with @p proxy_dev device.
|
|
|
|
*
|
|
|
|
* @param owner_dev
|
|
|
|
* Pointer to Ethernet device on behalf of which flow is created.
|
|
|
|
* @param proxy_dev
|
|
|
|
* Pointer to Ethernet device on which flow is created.
|
|
|
|
* @param table
|
|
|
|
* Pointer to flow table.
|
|
|
|
* @param items
|
|
|
|
* Pointer to flow rule items.
|
|
|
|
* @param item_template_idx
|
|
|
|
* Index of an item template associated with @p table.
|
|
|
|
* @param actions
|
|
|
|
* Pointer to flow rule actions.
|
|
|
|
* @param action_template_idx
|
|
|
|
* Index of an action template associated with @p table.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, negative errno value otherwise and rte_errno set.
|
|
|
|
*/
|
|
|
|
static __rte_unused int
|
|
|
|
flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
|
|
|
|
struct rte_eth_dev *proxy_dev,
|
|
|
|
struct rte_flow_template_table *table,
|
|
|
|
struct rte_flow_item items[],
|
|
|
|
uint8_t item_template_idx,
|
|
|
|
struct rte_flow_action actions[],
|
|
|
|
uint8_t action_template_idx)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = proxy_dev->data->dev_private;
|
2022-10-20 15:41:41 +00:00
|
|
|
uint32_t queue = CTRL_QUEUE_ID(priv);
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_op_attr op_attr = {
|
|
|
|
.postpone = 0,
|
|
|
|
};
|
|
|
|
struct rte_flow *flow = NULL;
|
|
|
|
struct mlx5_hw_ctrl_flow *entry = NULL;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
rte_spinlock_lock(&priv->hw_ctrl_lock);
|
|
|
|
entry = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_SYS, sizeof(*entry),
|
|
|
|
0, SOCKET_ID_ANY);
|
|
|
|
if (!entry) {
|
|
|
|
DRV_LOG(ERR, "port %u not enough memory to create control flows",
|
|
|
|
proxy_dev->data->port_id);
|
|
|
|
rte_errno = ENOMEM;
|
|
|
|
ret = -rte_errno;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
flow = flow_hw_async_flow_create(proxy_dev, queue, &op_attr, table,
|
|
|
|
items, item_template_idx,
|
|
|
|
actions, action_template_idx,
|
|
|
|
NULL, NULL);
|
|
|
|
if (!flow) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to enqueue create control"
|
|
|
|
" flow operation", proxy_dev->data->port_id);
|
|
|
|
ret = -rte_errno;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
ret = flow_hw_push(proxy_dev, queue, NULL);
|
|
|
|
if (ret) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to drain control flow queue",
|
|
|
|
proxy_dev->data->port_id);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
ret = __flow_hw_pull_comp(proxy_dev, queue, 1, NULL);
|
|
|
|
if (ret) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to insert control flow",
|
|
|
|
proxy_dev->data->port_id);
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
ret = -rte_errno;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
entry->owner_dev = owner_dev;
|
|
|
|
entry->flow = flow;
|
|
|
|
LIST_INSERT_HEAD(&priv->hw_ctrl_flows, entry, next);
|
|
|
|
rte_spinlock_unlock(&priv->hw_ctrl_lock);
|
|
|
|
return 0;
|
|
|
|
error:
|
|
|
|
if (entry)
|
|
|
|
mlx5_free(entry);
|
|
|
|
rte_spinlock_unlock(&priv->hw_ctrl_lock);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Destroys a control flow @p flow using flow template API on @p dev device.
|
|
|
|
*
|
|
|
|
* This function uses locks internally to synchronize access to the
|
|
|
|
* flow queue.
|
|
|
|
*
|
|
|
|
* If the @p flow is stored on any private list/pool, then the caller must free up
|
|
|
|
* the relevant resources.
|
|
|
|
*
|
|
|
|
* @param dev
|
|
|
|
* Pointer to Ethernet device.
|
|
|
|
* @param flow
|
|
|
|
* Pointer to flow rule.
|
|
|
|
*
|
|
|
|
* @return
|
|
|
|
* 0 on success, non-zero value otherwise.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
flow_hw_destroy_ctrl_flow(struct rte_eth_dev *dev, struct rte_flow *flow)
|
|
|
|
{
|
|
|
|
struct mlx5_priv *priv = dev->data->dev_private;
|
2022-10-20 15:41:41 +00:00
|
|
|
uint32_t queue = CTRL_QUEUE_ID(priv);
|
2022-10-20 15:41:39 +00:00
|
|
|
struct rte_flow_op_attr op_attr = {
|
|
|
|
.postpone = 0,
|
|
|
|
};
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
rte_spinlock_lock(&priv->hw_ctrl_lock);
|
|
|
|
ret = flow_hw_async_flow_destroy(dev, queue, &op_attr, flow, NULL, NULL);
|
|
|
|
if (ret) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to enqueue destroy control"
|
|
|
|
" flow operation", dev->data->port_id);
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
ret = flow_hw_push(dev, queue, NULL);
|
|
|
|
if (ret) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to drain control flow queue",
|
|
|
|
dev->data->port_id);
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
ret = __flow_hw_pull_comp(dev, queue, 1, NULL);
|
|
|
|
if (ret) {
|
|
|
|
DRV_LOG(ERR, "port %u failed to destroy control flow",
|
|
|
|
dev->data->port_id);
|
|
|
|
rte_errno = EINVAL;
|
|
|
|
ret = -rte_errno;
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
exit:
|
|
|
|
rte_spinlock_unlock(&priv->hw_ctrl_lock);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
 * Destroys control flows created on behalf of @p owner_dev device.
 *
 * @param owner_dev
 *   Pointer to Ethernet device owning control flows.
 *
 * @return
 *   0 on success, otherwise negative error code is returned and
 *   rte_errno is set.
 */
int
mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *owner_dev)
{
	struct mlx5_priv *owner_priv = owner_dev->data->dev_private;
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	struct mlx5_hw_ctrl_flow *cf;
	struct mlx5_hw_ctrl_flow *cf_next;
	uint16_t owner_port_id = owner_dev->data->port_id;
	uint16_t proxy_port_id = owner_dev->data->port_id;
	int ret;

	if (owner_priv->sh->config.dv_esw_en) {
		if (rte_flow_pick_transfer_proxy(owner_port_id, &proxy_port_id, NULL)) {
			DRV_LOG(ERR, "Unable to find proxy port for port %u",
				owner_port_id);
			rte_errno = EINVAL;
			return -rte_errno;
		}
		proxy_dev = &rte_eth_devices[proxy_port_id];
		proxy_priv = proxy_dev->data->dev_private;
	} else {
		proxy_dev = owner_dev;
		proxy_priv = owner_priv;
	}
	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
	while (cf != NULL) {
		cf_next = LIST_NEXT(cf, next);
		if (cf->owner_dev == owner_dev) {
			ret = flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow);
			if (ret) {
				rte_errno = ret;
				return -ret;
			}
			LIST_REMOVE(cf, next);
			mlx5_free(cf);
		}
		cf = cf_next;
	}
	return 0;
}

/**
 * Destroys all control flows created on @p dev device.
 *
 * @param dev
 *   Pointer to Ethernet device.
 *
 * @return
 *   0 on success, otherwise negative error code is returned and
 *   rte_errno is set.
 */
static int
flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_hw_ctrl_flow *cf;
	struct mlx5_hw_ctrl_flow *cf_next;
	int ret;

	cf = LIST_FIRST(&priv->hw_ctrl_flows);
	while (cf != NULL) {
		cf_next = LIST_NEXT(cf, next);
		ret = flow_hw_destroy_ctrl_flow(dev, cf->flow);
		if (ret) {
			rte_errno = ret;
			return -ret;
		}
		LIST_REMOVE(cf, next);
		mlx5_free(cf);
		cf = cf_next;
	}
	return 0;
}

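/**
 * Creates a control flow rule in the root SQ miss table, matching traffic
 * sent by the E-Switch manager port, marking it with a single bit in the
 * internal metadata register and jumping to group 1.
 *
 * @param dev
 *   Pointer to Ethernet device (expected to be the E-Switch master port).
 *
 * @return
 *   0 on success, negative value otherwise.
 */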
int
mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct rte_flow_item_ethdev port_spec = {
		.port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
	};
	struct rte_flow_item_ethdev port_mask = {
		.port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
	};
	struct rte_flow_item items[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
			.spec = &port_spec,
			.mask = &port_mask,
		},
		{
			.type = RTE_FLOW_ITEM_TYPE_END,
		},
	};
	struct rte_flow_action_modify_field modify_field = {
		.operation = RTE_FLOW_MODIFY_SET,
		.dst = {
			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
		},
		.src = {
			.field = RTE_FLOW_FIELD_VALUE,
		},
		.width = 1,
	};
	struct rte_flow_action_jump jump = {
		.group = 1,
	};
	struct rte_flow_action actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
			.conf = &modify_field,
		},
		{
			.type = RTE_FLOW_ACTION_TYPE_JUMP,
			.conf = &jump,
		},
		{
			.type = RTE_FLOW_ACTION_TYPE_END,
		},
	};

	MLX5_ASSERT(priv->master);
	if (!priv->dr_ctx ||
	    !priv->hw_esw_sq_miss_root_tbl)
		return 0;
	return flow_hw_create_ctrl_flow(dev, dev,
					priv->hw_esw_sq_miss_root_tbl,
					items, 0, actions, 0);
}

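/**
 * Creates a control flow rule in the SQ miss table of the transfer proxy
 * port. The rule matches packets marked in REG_C_0 which originate from
 * the given Tx queue (SQ) and forwards them to the port represented by
 * @p dev.
 *
 * @param dev
 *   Pointer to Ethernet device owning the Tx queue.
 * @param txq
 *   Tx queue (SQ) number to match on.
 *
 * @return
 *   0 on success, negative value otherwise.
 */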
int
mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq)
{
	uint16_t port_id = dev->data->port_id;
	struct rte_flow_item_tag reg_c0_spec = {
		.index = (uint8_t)REG_C_0,
	};
	struct rte_flow_item_tag reg_c0_mask = {
		.index = 0xff,
	};
	struct mlx5_rte_flow_item_sq queue_spec = {
		.queue = txq,
	};
	struct mlx5_rte_flow_item_sq queue_mask = {
		.queue = UINT32_MAX,
	};
	struct rte_flow_item items[] = {
		{
			.type = (enum rte_flow_item_type)
				MLX5_RTE_FLOW_ITEM_TYPE_TAG,
			.spec = &reg_c0_spec,
			.mask = &reg_c0_mask,
		},
		{
			.type = (enum rte_flow_item_type)
				MLX5_RTE_FLOW_ITEM_TYPE_SQ,
			.spec = &queue_spec,
			.mask = &queue_mask,
		},
		{
			.type = RTE_FLOW_ITEM_TYPE_END,
		},
	};
	struct rte_flow_action_ethdev port = {
		.port_id = port_id,
	};
	struct rte_flow_action actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
			.conf = &port,
		},
		{
			.type = RTE_FLOW_ACTION_TYPE_END,
		},
	};
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	uint16_t proxy_port_id = dev->data->port_id;
	uint32_t marker_bit;
	int ret;

	RTE_SET_USED(txq);
	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
	if (ret) {
		DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id);
		return ret;
	}
	proxy_dev = &rte_eth_devices[proxy_port_id];
	proxy_priv = proxy_dev->data->dev_private;
	if (!proxy_priv->dr_ctx)
		return 0;
	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
	    !proxy_priv->hw_esw_sq_miss_tbl) {
		DRV_LOG(ERR, "port %u proxy port %u was configured but default"
			" flow tables are not created",
			port_id, proxy_port_id);
		rte_errno = ENOMEM;
		return -rte_errno;
	}
	marker_bit = flow_hw_usable_lsb_vport_mask(proxy_priv);
	if (!marker_bit) {
		DRV_LOG(ERR, "Unable to set up control flow in SQ miss table");
		rte_errno = EINVAL;
		return -rte_errno;
	}
	reg_c0_spec.data = marker_bit;
	reg_c0_mask.data = marker_bit;
	return flow_hw_create_ctrl_flow(dev, proxy_dev,
					proxy_priv->hw_esw_sq_miss_tbl,
					items, 0, actions, 0);
}

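/**
 * Creates a default control flow rule on the transfer proxy port, matching
 * traffic coming from the port represented by @p dev and jumping to group 1.
 *
 * @param dev
 *   Pointer to Ethernet device.
 *
 * @return
 *   0 on success, negative value otherwise.
 */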
int
mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
{
	uint16_t port_id = dev->data->port_id;
	struct rte_flow_item_ethdev port_spec = {
		.port_id = port_id,
	};
	struct rte_flow_item items[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
			.spec = &port_spec,
		},
		{
			.type = RTE_FLOW_ITEM_TYPE_END,
		},
	};
	struct rte_flow_action_jump jump = {
		.group = 1,
	};
	struct rte_flow_action actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_JUMP,
			.conf = &jump,
		},
		{
			.type = RTE_FLOW_ACTION_TYPE_END,
		}
	};
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	uint16_t proxy_port_id = dev->data->port_id;
	int ret;

	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
	if (ret) {
		DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id);
		return ret;
	}
	proxy_dev = &rte_eth_devices[proxy_port_id];
	proxy_priv = proxy_dev->data->dev_private;
	if (!proxy_priv->dr_ctx)
		return 0;
	if (!proxy_priv->hw_esw_zero_tbl) {
		DRV_LOG(ERR, "port %u proxy port %u was configured but default"
			" flow tables are not created",
			port_id, proxy_port_id);
		rte_errno = EINVAL;
		return -rte_errno;
	}
	return flow_hw_create_ctrl_flow(dev, proxy_dev,
					proxy_priv->hw_esw_zero_tbl,
					items, 0, actions, 0);
}

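/**
 * Creates a control flow rule matching all Tx traffic and copying the
 * 32-bit metadata register REG_A into REG_C_1, using the default Tx
 * metadata copy table.
 *
 * @param dev
 *   Pointer to Ethernet device.
 *
 * @return
 *   0 on success, negative value otherwise.
 */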
int
mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct rte_flow_item_eth promisc = {
		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
		.type = 0,
	};
	struct rte_flow_item eth_all[] = {
		[0] = {
			.type = RTE_FLOW_ITEM_TYPE_ETH,
			.spec = &promisc,
			.mask = &promisc,
		},
		[1] = {
			.type = RTE_FLOW_ITEM_TYPE_END,
		},
	};
	struct rte_flow_action_modify_field mreg_action = {
		.operation = RTE_FLOW_MODIFY_SET,
		.dst = {
			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
			.level = REG_C_1,
		},
		.src = {
			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
			.level = REG_A,
		},
		.width = 32,
	};
	struct rte_flow_action copy_reg_action[] = {
		[0] = {
			.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
			.conf = &mreg_action,
		},
		[1] = {
			.type = RTE_FLOW_ACTION_TYPE_END,
		},
	};

	MLX5_ASSERT(priv->master);
	if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
		return 0;
	return flow_hw_create_ctrl_flow(dev, dev,
					priv->hw_tx_meta_cpy_tbl,
					eth_all, 0, copy_reg_action, 0);
}

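/**
 * Releases the meter resources allocated by mlx5_flow_meter_init():
 * the policy and profile arrays, the bulk ASO meter pool together with
 * its DevX object and DR action, and the meter ASO queue.
 *
 * @param dev
 *   Pointer to Ethernet device.
 */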
void
mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
{
	struct mlx5_priv *priv = dev->data->dev_private;

	if (priv->mtr_policy_arr) {
		mlx5_free(priv->mtr_policy_arr);
		priv->mtr_policy_arr = NULL;
	}
	if (priv->mtr_profile_arr) {
		mlx5_free(priv->mtr_profile_arr);
		priv->mtr_profile_arr = NULL;
	}
	if (priv->mtr_bulk.aso) {
		mlx5_free(priv->mtr_bulk.aso);
		priv->mtr_bulk.aso = NULL;
		priv->mtr_bulk.size = 0;
		mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER);
	}
	if (priv->mtr_bulk.action) {
		mlx5dr_action_destroy(priv->mtr_bulk.action);
		priv->mtr_bulk.action = NULL;
	}
	if (priv->mtr_bulk.devx_obj) {
		claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj));
		priv->mtr_bulk.devx_obj = NULL;
	}
}

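/**
 * Initializes the HWS meter resources: the meter ASO queue, the bulk
 * flow meter ASO DevX object and its DR action, and the meter profile
 * and policy arrays sized according to the requested capacities.
 * On failure, all partially allocated resources are released.
 *
 * @param dev
 *   Pointer to Ethernet device.
 * @param nb_meters
 *   Number of meters to be configured.
 * @param nb_meter_profiles
 *   Number of meter profiles to be configured.
 * @param nb_meter_policies
 *   Number of meter policies to be configured.
 *
 * @return
 *   0 on success, a positive errno value otherwise.
 */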
int
mlx5_flow_meter_init(struct rte_eth_dev *dev,
		     uint32_t nb_meters,
		     uint32_t nb_meter_profiles,
		     uint32_t nb_meter_policies)
{
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_devx_obj *dcs = NULL;
	uint32_t log_obj_size;
	int ret = 0;
	int reg_id;
	struct mlx5_aso_mtr *aso;
	uint32_t i;
	struct rte_flow_error error;

	if (!nb_meters || !nb_meter_profiles || !nb_meter_policies) {
		ret = ENOTSUP;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter configuration is invalid.");
		goto err;
	}
	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
		ret = ENOTSUP;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter ASO is not supported.");
		goto err;
	}
	priv->mtr_config.nb_meters = nb_meters;
	if (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER)) {
		ret = ENOMEM;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter ASO queue allocation failed.");
		goto err;
	}
	log_obj_size = rte_log2_u32(nb_meters >> 1);
	dcs = mlx5_devx_cmd_create_flow_meter_aso_obj
		(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
		 log_obj_size);
	if (!dcs) {
		ret = ENOMEM;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter ASO object allocation failed.");
		goto err;
	}
	priv->mtr_bulk.devx_obj = dcs;
	reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
	if (reg_id < 0) {
		ret = ENOTSUP;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter register is not available.");
		goto err;
	}
	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
		(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
		 reg_id - REG_C_0, MLX5DR_ACTION_FLAG_HWS_RX |
				   MLX5DR_ACTION_FLAG_HWS_TX |
				   MLX5DR_ACTION_FLAG_HWS_FDB);
	if (!priv->mtr_bulk.action) {
		ret = ENOMEM;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter action creation failed.");
		goto err;
	}
	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
					 sizeof(struct mlx5_aso_mtr) * nb_meters,
					 RTE_CACHE_LINE_SIZE,
					 SOCKET_ID_ANY);
	if (!priv->mtr_bulk.aso) {
		ret = ENOMEM;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter bulk ASO allocation failed.");
		goto err;
	}
	priv->mtr_bulk.size = nb_meters;
	aso = priv->mtr_bulk.aso;
	for (i = 0; i < priv->mtr_bulk.size; i++) {
		aso->type = ASO_METER_DIRECT;
		aso->state = ASO_METER_WAIT;
		aso->offset = i;
		aso++;
	}
	priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
	priv->mtr_profile_arr =
		mlx5_malloc(MLX5_MEM_ZERO,
			    sizeof(struct mlx5_flow_meter_profile) *
			    nb_meter_profiles,
			    RTE_CACHE_LINE_SIZE,
			    SOCKET_ID_ANY);
	if (!priv->mtr_profile_arr) {
		ret = ENOMEM;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter profile allocation failed.");
		goto err;
	}
	priv->mtr_config.nb_meter_policies = nb_meter_policies;
	priv->mtr_policy_arr =
		mlx5_malloc(MLX5_MEM_ZERO,
			    sizeof(struct mlx5_flow_meter_policy) *
			    nb_meter_policies,
			    RTE_CACHE_LINE_SIZE,
			    SOCKET_ID_ANY);
	if (!priv->mtr_policy_arr) {
		ret = ENOMEM;
		rte_flow_error_set(&error, ENOMEM,
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				   NULL, "Meter policy allocation failed.");
		goto err;
	}
	return 0;
err:
	mlx5_flow_meter_uninit(dev);
	return ret;
}

#endif