Commit Graph

40 Commits

Author SHA1 Message Date
Dariusz Sosnowski
28ffc4bbab net/mlx5: fix VLAN push action mask iteration
Before this patch, during translation of OF_PUSH_VLAN actions, the
actions iterator was moved forward to the position of OF_SET_VLAN_VID
or OF_SET_VLAN_PCP, but the masks iterator was not updated.
As a result, the following actions were translated incorrectly,
because the two iterators were no longer aligned.

This patch fixes this behavior by adjusting the masks iterator
alongside the actions iterator.
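For context only (not code from this commit), the actions array and the
masks array of an actions template are index-aligned, so a translation
loop has to advance both together. A minimal sketch, with illustrative
values:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* actions[] and masks[] must stay index-aligned. */
static struct rte_flow_action_of_push_vlan push = {
        .ethertype = RTE_BE16(0x8100),
};
static struct rte_flow_action_of_set_vlan_vid vid = {
        .vlan_vid = RTE_BE16(100),
};

static struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push },
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
static struct rte_flow_action masks[] = {
        { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push },
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* When translating OF_PUSH_VLAN consumes the following actions[] entries
 * as well, the masks iterator must be moved by the same amount. */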

Fixes: 773ca0e91b ("net/mlx5: support VLAN push/pop/modify with HWS")

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-11-20 10:04:42 +01:00
Michael Baum
c10643c4ab net/mlx5: fix error log in async flow destruction
The flow_hw_async_flow_destroy() function fills the error structure in
case of failure.

The error log reported by this function is "fail to create rte flow",
while the failure actually occurs during destruction.

This patch changes the error log to report "fail to destroy rte flow".

Fixes: c40c061a02 ("net/mlx5: add basic flow queue operation")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-11-17 15:47:46 +01:00
Stephen Hemminger
742d8aaa81 drivers/net: remove unnecessary null checks
The function rte_free() already handles NULL argument;
therefore the checks in this code are unnecessary.
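For illustration, the removed pattern is equivalent to the following
(hypothetical helper, not a function from the patch):

#include <rte_malloc.h>

static void
example_cleanup(void *ptr)
{
        /* Before this patch the code did:
         *         if (ptr)
         *                 rte_free(ptr);
         * rte_free() already accepts a NULL pointer (like free(3)),
         * so the check can simply be dropped:
         */
        rte_free(ptr);
}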

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2022-11-15 15:04:22 +01:00
Rongwei Liu
f64a79464c net/mlx5: fix marks on Rx packets
When HW Steering is enabled, Rx queues are configured to receive MARKs
when a table with MARK actions is created. After the port is stopped,
the Rx queue configuration is released, but when the port is started
again the mark flag is not updated in the Rx queue configuration.

This patch introduces a reference count on the MARK action, which is
increased/decreased on template table create/destroy.

When the port is stopped, Rx queue configuration is not cleared if
reference count is not zero.

Fixes: 3a2f674b6a ("net/mlx5: add queue and RSS HW steering action")
Cc: stable@dpdk.org

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-11-10 18:15:56 +01:00
Michael Baum
d37435dc3f net/mlx5: assert for enough space in counter rings
There is a by-design assumption in the code that the global counter
rings can contain all the port counters.
So, enqueuing to these global rings should always succeed.

Add assertions to help debug this assumption.

In addition, change the mlx5_hws_cnt_pool_put() function to return
void, since under this assumption it cannot fail.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
2022-11-10 18:15:45 +01:00
Suanming Mou
d114dbee28 net/mlx5: enable queue flow aging action
As the queue-based aging API has been integrated [1], the flow aging
action support in the HWS code can now be enabled.

[1]: https://patchwork.dpdk.org/project/dpdk/cover/
20221026214943.3686635-1-michaelba@nvidia.com/

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-11-10 18:15:43 +01:00
Dariusz Sosnowski
9fa7c1cddb net/mlx5: create control flow rules with HWS
This patch adds the creation of control flow rules required to receive
default traffic (based on port configuration) with HWS.

Control flow rules are created on port start and destroyed on port stop.
Handling of destroying these rules was already implemented before that
patch.

Control flow rules are created if and only if flow isolation mode is
disabled and the creation process goes as follows:

- Port configuration is collected into a set of flags. Each flag
  corresponds to a certain Ethernet pattern type, defined by
  mlx5_flow_ctrl_rx_eth_pattern_type enumeration. There is a separate
  flag for VLAN filtering.

- For each possible Ethernet pattern type:
  - For each possible RSS action configuration:
    - If configuration flags do not match this combination, it is
      omitted.
    - A template table is created using this combination of pattern
      and actions template (templates are fetched from hw_ctrl_rx
      struct stored in the port's private data).
    - Flow rules are created in this table.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:43 +02:00
Dariusz Sosnowski
483181f7b6 net/mlx5: support device control of representor matching
In some E-Switch use cases, applications want to receive all traffic
on a single port. Since the flow API currently does not provide a way
to match traffic forwarded to any port representor, this patch adds
support for controlling representor matching on ingress flow rules.

Representor matching is controlled through a new device argument
repr_matching_en.

- If representor matching is enabled (default setting),
  then each ingress pattern template has an implicit REPRESENTED_PORT
  item added. Flow rules based on this pattern template will match
  the vport associated with the port on which the rule is created.
- If representor matching is disabled, then no implicit item is
  added. As a result, ingress flow rules will match traffic coming
  to any port, not only the port on which the flow rule is created.

Representor matching is enabled by default, to provide an expected
default behavior.

This patch enables egress flow rules on representors when E-Switch is
enabled in the following configurations:

- repr_matching_en=1 and dv_xmeta_en=4
- repr_matching_en=1 and dv_xmeta_en=0
- repr_matching_en=0 and dv_xmeta_en=0

When representor matching is enabled, the following logic is
implemented:

1. Creating an egress template table in group 0 for each port. These
   tables will hold default flow rules defined as follows:

      pattern SQ
      actions MODIFY_FIELD (set available bits in REG_C_0 to
                            vport_meta_tag)
              MODIFY_FIELD (copy REG_A to REG_C_1, only when
                            dv_xmeta_en == 4)
              JUMP (group 1)

2. Egress pattern templates created by an application have an implicit
   MLX5_RTE_FLOW_ITEM_TYPE_TAG item prepended to the pattern, which
   matches available bits of REG_C_0.

3. Egress flow rules created by an application have an implicit
   MLX5_RTE_FLOW_ITEM_TYPE_TAG item prepended to the pattern, which
   matches vport_meta_tag placed in available bits of REG_C_0.

4. Egress template tables created by an application, which are in
   group n, are placed in group n + 1.

5. Items and actions related to META are operating on REG_A when
   dv_xmeta_en == 0 or REG_C_1 when dv_xmeta_en == 4.

When representor matching is disabled and extended metadata is disabled,
no changes to the current logic are required.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:43 +02:00
Dariusz Sosnowski
26e1eaf2da net/mlx5: support device control for E-Switch default rule
This patch adds support for fdb_def_rule_en device argument to HW
Steering, which controls:

- the creation of the default FDB jump flow rule.
- the ability of the user to create transfer flow rules in the root
  table.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:43 +02:00
Gregory Etelson
a3778a4784 net/mlx5: support flow integrity in HWS group 0
- Reformat flow integrity item translation for HWS code.
- Support flow integrity bits in HWS group 0.
- Update integrity item translation to match positive semantics only.
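As an illustration only (not part of this commit), an application can
match on positive integrity bits with the generic rte_flow integrity
item roughly like this:

#include <rte_flow.h>

/* Match packets whose L3 and L4 parts were validated OK. With positive
 * semantics, only the bits set in the mask are checked, and they must
 * be 1 in the packet's integrity result. */
static struct rte_flow_item_integrity integ_spec = { .l3_ok = 1, .l4_ok = 1 };
static struct rte_flow_item_integrity integ_mask = { .l3_ok = 1, .l4_ok = 1 };

static struct rte_flow_item pattern[] = {
        {
                .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
                .spec = &integ_spec,
                .mask = &integ_mask,
        },
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};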

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:42 +02:00
Suanming Mou
478ba4bbe6 net/mlx5: support async flow action push and pull
The queue-based rte_flow_async_action_* functions work in the same way
as the queue-based async flow functions: the operations can be pushed
asynchronously, and so can the pull.

This commit adds the missing push and pull support for async actions.
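A rough usage sketch (illustrative only; the action handle and its new
configuration are assumed to exist already):

#include <rte_flow.h>

static int
queue_action_update(uint16_t port_id, struct rte_flow_action_handle *handle,
                    const void *new_conf)
{
        struct rte_flow_op_attr op_attr = { .postpone = 1 };
        struct rte_flow_op_result res[4];
        struct rte_flow_error error;

        /* Enqueue the indirect action update on flow queue 0, postponed. */
        if (rte_flow_async_action_handle_update(port_id, 0, &op_attr, handle,
                                                new_conf, NULL, &error))
                return -1;
        /* Push the cached operations to the HW ... */
        rte_flow_push(port_id, 0, &error);
        /* ... and later pull the completions. */
        return rte_flow_pull(port_id, 0, res, 4, &error);
}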

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:42 +02:00
Michael Baum
04a4de756e net/mlx5: support flow age action with HWS
Add support for AGE action for HW steering.
This patch includes:

 1. Add new structures to manage aging.
 2. Initialize all of them in configure function.
 3. Implement a per-second aging check using the CNT background thread.
 4. Enable AGE action in flow create/destroy operations.
 5. Implement a queue-based function to report aged flow rules.
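As a rough illustration (not code from this patch), an application can
combine an AGE action with the queue-based reporting API along these
lines; the helper name is hypothetical:

#include <rte_flow.h>

/* The rule carries an AGE action in its actions template slot, e.g.:
 *     { .type = RTE_FLOW_ACTION_TYPE_AGE,
 *       .conf = &(struct rte_flow_action_age){ .timeout = 10 } }
 * Later, the aged-out rules reported on a given flow queue are drained:
 */
static int
drain_aged_rules(uint16_t port_id, uint32_t queue_id)
{
        void *contexts[64];     /* user contexts attached to AGE actions */
        struct rte_flow_error error;

        return rte_flow_get_q_aged_flows(port_id, queue_id, contexts, 64,
                                         &error);
}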

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:41 +02:00
Alexander Kozyrev
48fbb0e93d net/mlx5: support flow meter mark indirect action with HWS
Add the ability to create an indirect action handle for METER_MARK.
It allows sharing one Meter between several different actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:41 +02:00
Gregory Etelson
773ca0e91b net/mlx5: support VLAN push/pop/modify with HWS
Add PMD implementation for HW steering VLAN push, pop, and modify flow
actions.

HWS VLAN push flow action is triggered by a sequence of mandatory
OF_PUSH_VLAN, OF_SET_VLAN_VID, and optional OF_SET_VLAN_PCP
flow action commands.

The commands must be arranged in the exact order:
OF_PUSH_VLAN / OF_SET_VLAN_VID [ / OF_SET_VLAN_PCP ].

In masked HWS VLAN push flow action template *ALL* the above flow
actions must be masked.

In non-masked HWS VLAN push flow action template *ALL* the above flow
actions must not be masked.

Example:

flow actions_template <port id> create \
actions_template_id <action id> \
template \
  of_push_vlan / \
  of_set_vlan_vid \
  [ / of_set_vlan_pcp  ] / end \
mask \
  of_push_vlan ethertype 0 / \
  of_set_vlan_vid vlan_vid 0 \
  [ / of_set_vlan_pcp vlan_pcp 0 ] / end\

flow actions_template <port id> create \
actions_template_id <action id> \
template \
  of_push_vlan ethertype <E>/ \
  of_set_vlan_vid vlan_vid <VID>\
  [ / of_set_vlan_pcp  <PCP>] / end \
mask \
  of_push_vlan ethertype <type != 0> / \
  of_set_vlan_vid vlan_vid <vid_mask != 0>\
  [ / of_set_vlan_pcp vlan_pcp <pcp_mask != 0> ] / end\

HWS VLAN pop flow action is triggered by OF_POP_VLAN
flow action command.
HWS VLAN pop action template is always non-masked.

Example:

flow actions_template <port id> create \
actions_template_id <action id> \
template of_pop_vlan / end mask of_pop_vlan / end

HWS VLAN VID modify flow action is triggered by a standalone
OF_SET_VLAN_VID flow action command.
HWS VLAN VID modify action template can be either masked or non-masked.

Example:

flow actions_template <port id> create \
actions_template_id <action id> \
template of_set_vlan_vid / end mask of_set_vlan_vid vlan_vid 0 / end

flow actions_template <port id> create \
actions_template_id <action id> \
template of_set_vlan_vid vlan_vid 0x101 / end \
mask of_set_vlan_vid vlan_vid 0xffff / end

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:41 +02:00
Suanming Mou
463170a7c9 net/mlx5: support connection tracking with HWS
This commit adds connection tracking support to HW steering, as SW
steering did before.

The difference from the SW steering implementation is that it takes
advantage of the HW steering bulk action allocation support; with HW
steering, only a single CT pool is needed.

An indexed pool is introduced to record the actions allocated from the
bulk, the CT action state, etc. Once a CT action is allocated from the
bulk, an indexed object is also allocated from the indexed pool, and
similarly on deallocation. This way, mlx5_aso_ct_action can also be
managed by that indexed pool and does not need to be reserved from
mlx5_aso_ct_pool. The single CT pool is also saved in the
mlx5_aso_ct_action struct directly.

The ASO operation functions are shared with SW steering implementation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:40 +02:00
Dariusz Sosnowski
f1fecffa88 net/mlx5: support Direct Rules action template API
This patch adapts mlx5 PMD to changes in mlx5dr API regarding the action
templates. It changes the following:

1. Actions template creation:

    - Flow actions types are translated to mlx5dr action types in order
      to create mlx5dr_action_template object.
    - An offset is assigned to each flow action. This offset is used to
      predetermine the action's location in the rule_acts array passed
      on the rule creation.

2. Template table creation:

    - Fixed actions are created and put in the rule_acts cache using
      predetermined offsets.
    - mlx5dr matcher is parametrized by action templates bound to
      template table.
    - mlx5dr matcher is configured to optimize rule creation based on
      passed rule indices.

3. Flow rule creation:

    - mlx5dr rule is parametrized by the action template on which these
      rule's actions are based.
    - Rule index hint is provided to mlx5dr.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:40 +02:00
Xiaoyu Min
4d368e1da3 net/mlx5: support flow counter action for HWS
This commit adds HW steering counter action support.
The pool mechanism is the basic data structure for the HW steering
counter.

The HW steering counter pool is based on the zero-copy variant of
rte_ring.

There are two global rte_rings:
1. free_list:
     Stores the counter indexes which are ready for use.
2. wait_reset_list:
     Stores the counter indexes which have just been freed by the user
     and need a hardware counter query to retrieve the reset value
     before the counter can be reused.

The counter pool also supports a cache per HW steering queue, which is
also based on the zero-copy variant of rte_ring.

The cache size, preload, threshold, and fetch size are all
configurable and exposed via device args.

The main operations of the counter pool are as follows:

 - Get one counter from the pool:
   1. The user calls the _get_* API.
   2. If the cache is enabled, dequeue one counter index from the local
      cache:
      A: If the dequeued counter is still in reset status (the
         counter's query_gen_when_free is equal to the pool's query
         gen):
         I. Flush all counters in the local cache back to the global
            wait_reset_list.
         II. Fetch _fetch_sz_ counters into the cache from the global
             free list.
         III. Fetch one counter from the cache.
   3. If the cache is empty, fetch _fetch_sz_ counters from the global
      free list into the cache and fetch one counter from the cache.
 - Free one counter into the pool:
   1. The user calls the _put_* API.
   2. Put the counter into the local cache.
   3. If the local cache is full:
      A: Write back all counters above _threshold_ into the global
         wait_reset_list.
      B: Also write back this counter into the global wait_reset_list.

When the local cache is disabled, _get_/_put_ operate directly on the
global lists.
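A much-simplified sketch of the get path described above, using plain
rte_ring calls instead of the zero-copy variant and hypothetical names
(cache, free_list, fetch_sz); it is not the driver code itself and
omits the reset-status handling:

#include <rte_common.h>
#include <rte_ring.h>

static int
cnt_get(struct rte_ring *cache, struct rte_ring *free_list,
        unsigned int fetch_sz, void **cnt_idx)
{
        void *burst[64];
        unsigned int n;

        if (rte_ring_dequeue(cache, cnt_idx) == 0)
                return 0;               /* served from the per-queue cache */
        /* Cache empty: refill it from the global free list, then retry. */
        n = rte_ring_dequeue_burst(free_list, burst,
                                   RTE_MIN(fetch_sz, 64u), NULL);
        if (n == 0)
                return -1;              /* pool exhausted */
        rte_ring_enqueue_burst(cache, burst, n, NULL);
        return rte_ring_dequeue(cache, cnt_idx);
}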

Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:39 +02:00
Alexander Kozyrev
24865366e4 net/mlx5: support flow meter action for HWS
This commit adds the meter action for HW steering.

The HW steering meter is based on ASO. The number of meters that will
be used by flows should be specified in advance in the flow configure
API.
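For illustration, reserving meters up front could look roughly like
this (helper name and numbers are illustrative only):

#include <rte_flow.h>

static int
configure_meters(uint16_t port_id)
{
        struct rte_flow_port_attr port_attr = {
                .nb_meters = 1024,      /* ASO meters reserved for flows */
        };
        struct rte_flow_queue_attr queue_attr = { .size = 128 };
        const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };
        struct rte_flow_error error;

        return rte_flow_configure(port_id, &port_attr, 1, queue_attrs,
                                  &error);
}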

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:39 +02:00
Bing Zhao
ddb68e4733 net/mlx5: add extended metadata mode for HWS
The new mode 4 of the devarg "dv_xmeta_en" is added for HWS only. In
this mode, copying 32-bit wide Rx / Tx metadata between the FDB and
NIC domains is supported.

The mark is only supported in the NIC domain, and copying it is not
supported.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:38 +02:00
Dariusz Sosnowski
1939eb6f66 net/mlx5: support flow port action with HWS
This patch implements creating and caching of port actions for use
with HW Steering FDB flows.

Actions are created on flow template API configuration and created
only on the port designated as the master. Attaching and detaching ports
in the same switching domain causes an update to the port actions cache
by, respectively, creating and destroying actions.

A new devarg, fdb_def_rule_en, is added to control whether the PMD
implicitly creates the default dedicated E-Switch rules; the PMD sets
this value to 1 by default.

If set to 0, the default E-Switch rule will not be created and the user
can create the specific E-Switch rules on the root table if needed.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:38 +02:00
Suanming Mou
0f4aa72b99 net/mlx5: support flow modify field with HWS
This patch introduces support for modify_field rte_flow actions in HWS
mode, which includes:
	- Ingress and egress domains,
	- SET and ADD operations,
	- usage of arbitrary bit offsets and widths for packet and metadata
	  fields.

This is implemented in two phases:
1. On flow table creation the hardware commands are generated, based
   on rte_flow action templates, and stored alongside the action template.

2. On flow rule creation/queueing the hardware commands are updated with
   values provided by the user. Any masks over immediate values, provided
   in action templates, are applied to these values before enqueueing rules
   for creation.
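A hedged example of such an action, using the generic rte_flow
modify_field API with illustrative values (helper name is hypothetical):
set 32 bits of metadata to an immediate value.

#include <string.h>
#include <rte_flow.h>

static void
build_modify_meta(struct rte_flow_action *act,
                  struct rte_flow_action_modify_field *mf)
{
        memset(mf, 0, sizeof(*mf));
        mf->operation = RTE_FLOW_MODIFY_SET;    /* SET (ADD also supported) */
        mf->dst.field = RTE_FLOW_FIELD_META;    /* destination: metadata */
        mf->src.field = RTE_FLOW_FIELD_VALUE;   /* source: immediate value */
        mf->src.value[0] = 0x2a;                /* first byte of immediate */
        mf->width = 32;                         /* copy 32 bits */
        act->type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
        act->conf = mf;
}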

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:38 +02:00
Suanming Mou
7f6daa490d net/mlx5: add shared header reformat
The rte_flow_async API defines that an action mask with a non-zero
field value means the action will be used as shared by all the flows
in the table.

A header reformat action whose action mask field is non-zero will
therefore be created as a constant shared action. For the
encapsulation header reformat action, there are two kinds of
encapsulation data: raw_encap_data and rte_flow_item encap_data. Both
kinds of data can be identified from the action mask conf as constant
or not.

Examples:
1. VXLAN encap (encap_data: rte_flow_item)
	action conf (eth/ipv4/udp/vxlan_hdr)

	a. action mask conf (eth/ipv4/udp/vxlan_hdr)
	  - items are constant.
	b. action mask conf (NULL)
	  - items will change.

2. RAW encap (encap_data: raw)
	action conf (raw_data)

	a. action mask conf (not NULL)
	  - encap_data constant.
	b. action mask conf (NULL)
	  - encap_data will change.
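In template API terms, the two RAW encap cases above could be sketched
roughly as follows (data layout omitted, values illustrative):

#include <rte_flow.h>

static uint8_t hdr[64];         /* pre-built encapsulation header */
static struct rte_flow_action_raw_encap encap = {
        .data = hdr,
        .size = sizeof(hdr),
};

static struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* Case a: non-NULL mask conf -> constant (shared) encap_data. */
static struct rte_flow_action masks_const[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* Case b: NULL mask conf -> encap_data provided per flow rule. */
static struct rte_flow_action masks_dynamic[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = NULL },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};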

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:37 +02:00
Suanming Mou
b206c558f7 net/mlx5: fix IPv6 and TCP RSS hash fields
In the flow_dv_hashfields_set() function, when item_flags was 0,
the code went directly into the first if branch and the else case was
never reached. This caused the IPv6 and TCP hash fields in the else
case to never be set.

This commit adds the dedicated HW steering hash field set function
to generate the RSS hash fields.

Fixes: 3a2f674b6a ("net/mlx5: add queue and RSS HW steering action")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-26 13:33:37 +02:00
Alex Vesker
22681deead net/mlx5/hws: enable hardware steering
Replace stub implementation of HWS with mlx5dr code.

Signed-off-by: Alex Vesker <valex@nvidia.com>
2022-10-26 13:33:36 +02:00
Bing Zhao
8a89038f40 net/mlx5: provide available tag registers
This stores the tags available to the application in a global array,
which is used to translate a TAG item ID directly to the corresponding
REG_C_x register, since these assignments cannot be changed after
startup.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
2022-10-26 13:33:31 +02:00
Dariusz Sosnowski
5bd0e3e671 net/mlx5: add port to metadata conversion
This adds conversion functions from both ethdev port_id and IB context
to the corresponding internal tag/mask values.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
2022-10-26 13:33:31 +02:00
Michael Savisko
7f6e276b02 net/mlx5: support flow action send to kernel
Introduce the mlx5_get_send_to_kernel_priority() function, which
returns the priority value that must be used to jump back to table 0
in order to send traffic to the kernel. This function returns the
lowest priority.

Add the flow_dv_translate_action_send_to_kernel() function, which
allocates the rdma-core send_to_kernel action object. It is called
from flow_dv_translate().

Fail translation of RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL action in
HW steering.

Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-26 13:33:29 +02:00
Michael Baum
2192599c75 net/mlx5: fix entry size in construct data ipool
The memory of the mlx5_action_construct_data structure is managed by
an ipool named acts_ipool.

The size of one entry in this ipool is mistakenly defined as the size
of the rte_flow_hw structure. This size is used to reset the allocated
entry, so when the size is incorrect, memory that does not belong to
the entry gets reset.

This patch defines the correct size.

Fixes: f13fab2392 ("net/mlx5: add flow jump action")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-06-05 17:04:46 +02:00
Suanming Mou
fe3620aaba net/mlx5: add header reformat HW steering action
The HW steering header reformat action can work in bulk mode. In this
case, when the table is created, a bulk of header reformat actions is
allocated at the low level. Afterwards, when a flow is created,
specifying the action index within the bulk and the encapsulation data
for the action is enough.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:23 +01:00
Suanming Mou
7ab3962d2d net/mlx5: add indirect HW steering action
HW steering can support indirect actions as well. With indirect
actions, flows can be created with more flexible shared RSS action
selection. This avoids duplicating action templates for different RSS
actions.

This commit adds the flow queue operation callback for:
rte_flow_async_action_handle_create();
rte_flow_async_action_handle_destroy();
rte_flow_async_action_handle_update();
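A hedged sketch of queueing the creation of a shared RSS action handle
(queue ids and RSS configuration are illustrative; error handling
trimmed):

#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow_action_handle *
create_shared_rss(uint16_t port_id, const uint16_t *queues, uint32_t nb_q)
{
        struct rte_flow_action_rss rss = {
                .types = RTE_ETH_RSS_IP,
                .queue_num = nb_q,
                .queue = queues,
        };
        struct rte_flow_action action = {
                .type = RTE_FLOW_ACTION_TYPE_RSS,
                .conf = &rss,
        };
        struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_error error;

        /* Enqueued on flow queue 0; completion is fetched with rte_flow_pull(). */
        return rte_flow_async_action_handle_create(port_id, 0, &op_attr,
                                                   &conf, &action, NULL,
                                                   &error);
}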

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:23 +01:00
Suanming Mou
1deadfd709 net/mlx5: add HW mark action
The MARK action is implemented internally by a TAG action: when it is
added, the HW adds a tag to the packet. The mark value can be set as
fixed or dynamic, as the action mask indicates.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:22 +01:00
Suanming Mou
3a2f674b6a net/mlx5: add queue and RSS HW steering action
This commit adds the queue and RSS actions. Similar to the jump
action, dynamic ones will be added to the action construct list.

Since the queue and RSS actions in a template should not be destroyed
during port restart, the actions are created with a standalone
indirect table, as indirect actions are. When the port stops, the
indirect table is detached from the action; when the port starts, it
is attached back to the action.

One more change is made to accelerate action creation. Currently the
mlx5_hrxq_get() function returns the object index instead of the
object pointer. This requires an extra conversion from index to object
by calling mlx5_ipool_get() in most cases, and that extra conversion
hurts multi-thread performance since mlx5_ipool_get() takes a global
lock internally. As the hash Rx queue object itself also contains the
index, returning the object directly achieves better performance
without the global lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:22 +01:00
Suanming Mou
f13fab2392 net/mlx5: add flow jump action
The jump action connects different levels of flow tables and allows
packet handling in a chain of flows.

A new action construct data struct is also added in this commit to
help handle not only the dynamic jump action but also other generic
dynamic actions. Actions with an empty mask configuration are dynamic
actions, and the dedicated action will be created with the flow action
configuration during flow creation. In that dynamic action case, the
action is appended to the table template's action list during table
creation. When creating the flows, the action list is traversed and
the dynamic action configuration details are picked from the flow
actions, as described by the action construct data struct, and the
dedicated dynamic actions are then created.

This commit adds the jump action and the generic dynamic action
construct mechanism.
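For illustration, the empty-mask convention described above would look
like this for a JUMP action in an actions template (sketch only, not
part of this commit):

#include <rte_flow.h>

static struct rte_flow_action_jump jump = { .group = 1 };

static struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* Non-NULL mask conf: the jump target is fixed for the whole table. */
static struct rte_flow_action masks_const[] = {
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* Empty (NULL) mask conf: dynamic action, the target group is taken
 * from the actions passed at flow creation time. */
static struct rte_flow_action masks_dyn[] = {
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = NULL },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};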

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:21 +01:00
Suanming Mou
08dfff78f2 net/mlx5: add flow flush
When the port is stopped, all created flows should be flushed.
This commit adds the flow flush helper function.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:20 +01:00
Suanming Mou
c40c061a02 net/mlx5: add basic flow queue operation
The HW steering uses async queue-based flow rules management
mechanism. The matcher and part of the actions have been
prepared during flow table creation. Some remaining actions
will be constructed during flow creation if needed.

A flow postpone attribute bit describes whether flow management
should be applied to the HW directly. An extra push function
is provided to force pushing all the cached flows to the HW.

Once the flow has been applied to the HW, the pull function
will be called to get the results of the queued creation/destruction
operations.

The DR rule flow memory is kept in the PMD layer instead of
being allocated from the HW steering layer. When destroying a
flow, the flow rule memory can only be freed after the CQE is
received.

A HW queue job descriptor is introduced to convey the flow
information and operation type between flow insertion/destruction
and the pull function.

This commit adds the basic flow queue operation for:
rte_flow_async_create();
rte_flow_async_destroy();
rte_flow_push();
rte_flow_pull();
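A condensed usage sketch of these four calls (the table, templates,
pattern, and actions are assumed to exist; error handling trimmed):

#include <rte_flow.h>

static struct rte_flow *
queue_one_flow(uint16_t port_id, struct rte_flow_template_table *table,
               const struct rte_flow_item *pattern,
               const struct rte_flow_action *actions)
{
        struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_op_result res[8];
        struct rte_flow_error error;
        struct rte_flow *flow;
        int n, i;

        flow = rte_flow_async_create(port_id, 0, &op_attr, table,
                                     pattern, 0, actions, 0, NULL, &error);
        rte_flow_push(port_id, 0, &error);      /* flush cached ops to HW */
        n = rte_flow_pull(port_id, 0, res, 8, &error);
        for (i = 0; i < n; i++)
                if (res[i].status != RTE_FLOW_OP_SUCCESS)
                        return NULL;            /* creation failed */
        return flow;
}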

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:20 +01:00
Suanming Mou
d1559d66ed net/mlx5: add table management
A flow table is a group of flows with the same matching criteria
and the same actions defined for them. The table defines rules
that have the same matching fields but with different matching
values. For example, matching on the 5-tuple, the table will be
(IPv4 source + IPv4 dest + s_port + d_port + next_proto)
while the values for each rule will be different.

The templates' relevant matching criteria and action instances
will be created at table creation and saved in the table.
As the table attributes indicate the supported number of flows, the
flow memory will also be allocated at the same time.

This commit adds the table management functions.
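For reference, creating such a table through the public template API
looks roughly like this (templates assumed to exist, sizes and helper
name illustrative):

#include <rte_flow.h>

static struct rte_flow_template_table *
create_table(uint16_t port_id, struct rte_flow_pattern_template *pt,
             struct rte_flow_actions_template *at)
{
        struct rte_flow_template_table_attr attr = {
                .flow_attr = { .group = 1, .ingress = 1 },
                .nb_flows = 1 << 16,    /* flow memory reserved up front */
        };
        struct rte_flow_error error;

        return rte_flow_template_table_create(port_id, &attr, &pt, 1,
                                              &at, 1, &error);
}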

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:19 +01:00
Suanming Mou
836b5c9b5e net/mlx5: add action template management
The action template holds a list of action types that will be used
together in the same rule. The template's action instances will be
created only when the template is bound to a dedicated group, and the
created actions will be saved per individual group for best
performance. The actions in a group will not be shared with each other
unless shared actions are specified.

This commit adds the action template management which stores the
flow action template.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:18 +01:00
Suanming Mou
42431df924 net/mlx5: add pattern template management
The pattern template defines flows that have the same matching fields
but with different matching values.
For example, matching on a 5-tuple TCP flow, the template will be
(eth(null) + IPv4(source + dest) + TCP(s_port + d_port)) while the
values for each rule will be different.

Since a pattern template can be used in different domains, the items
are only cached at the pattern template creation stage; once the
template is bound to a dedicated table, the HW criteria will be
created and saved in the table. Pattern templates can be used by
multiple tables, but different tables create the same criteria and
will not share the matcher with each other, in order to have better
performance.

This commit adds pattern template management.
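For reference, a 5-tuple pattern template could be sketched through the
public API as follows (default masks used for brevity; helper name is
illustrative):

#include <rte_flow.h>

static struct rte_flow_pattern_template *
create_5tuple_template(uint16_t port_id)
{
        struct rte_flow_pattern_template_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                  .mask = &rte_flow_item_ipv4_mask },
                { .type = RTE_FLOW_ITEM_TYPE_TCP,
                  .mask = &rte_flow_item_tcp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_error error;

        return rte_flow_pattern_template_create(port_id, &attr, pattern,
                                                &error);
}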

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:18 +01:00
Suanming Mou
b401400db2 net/mlx5: add port flow configuration
Hardware steering is the backend that supports the rte_flow_async API
in the mlx5 PMD. The port configuration function creates the queues
and the needed flow management resources.

The PMD layer configuration function allocates the queues' context
and a per-queue job descriptor pool. The job descriptor pool size
is equal to the queue size, and the job descriptors will be popped
from the pool with a LIFO strategy to convey the flow information
during flow insertion/destruction. Then, while polling the queued
operation result, the flow information will be extracted from the job
descriptor and the descriptor will be pushed back to the LIFO pool.

The commit creates the flow port queues and the job descriptor pools.
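In terms of the public API, this corresponds to the application-side
call roughly sketched below (queue count and size are illustrative; the
per-queue job descriptor pools are then sized from queue_attr.size
inside the PMD):

#include <rte_flow.h>

static int
configure_flow_queues(uint16_t port_id)
{
        struct rte_flow_port_attr port_attr = { 0 };
        struct rte_flow_queue_attr queue_attr = { .size = 64 };
        const struct rte_flow_queue_attr *queue_attrs[] = {
                &queue_attr, &queue_attr, &queue_attr, &queue_attr,
        };
        struct rte_flow_error error;

        /* 4 flow queues, 64 entries each; called before dev_start(). */
        return rte_flow_configure(port_id, &port_attr, 4, queue_attrs,
                                  &error);
}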

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:17 +01:00
Suanming Mou
2b6791506e net/mlx5: introduce hardware steering operation
ConnectX steering is a hardware lookup mechanism that accesses flow
tables, matches packets to the rules, and performs specified actions.
Historically, the mlx5 PMD implements several software engines to
manage the steering hardware facility:

   - FW Steering - Verbs/Direct Verbs, uses FW calls to manage flows
   - SW Steering - DevX/mlx5dv, uses WQEs to access table memory directly

However, there are still some disadvantages:

   - performance is limited, since we have to invoke firmware either to
     manage the entire flow, or to handle some internal steering objects

   - organizing and preparing the flow infrastructure (actions, matchers,
     groups, etc.) at flow insertion time inevitably causes slow flow
     insertion

   - security: exposing the low-level steering entries directly to
     userspace may cause security risks

A new hardware WQE based steering operation with codename "HW Steering"
is introduced to get rid of the security risks. It takes advantage of
the recently introduced async queue-based rte_flow APIs to prepare
everything in advance and achieve a high insertion rate.

In this new HW steering engine, the original SW steering rte_flow API
will not be supported in the first implementation; only the new async
queue-based flow operations are going to be supported. A new steering
mode parameter for dv_flow_en will be introduced so that users will be
able to engage the new steering engine.

This commit adds the basic driver operation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:15 +01:00