For flows with multiple tunnel layers that contain tunnel decap and
modify actions, for example:
... / vxlan / eth / ipv4 proto is 4 / end
actions raw_decap / modify_field / ...
(note: proto 4 means there is an IP-over-IP tunnel in the VXLAN payload)
a validation of multiple tunnel layers was added that rejects
flows like the one above.
The hardware supports the above match combination up to the inner
IP-over-IP header (not including the last one), both for IP-over-IPv4
and IP-over-IPv6, so such flows should not be blindly rejected. Also,
for the modify actions following the decap, the layer attributes should
be set correctly.
This patch reverts the code changes below to support the match, and
adjusts the layer update in case of decap with an outer tunnel header.
Fixes: fa06906a48 ("net/mlx5: fix IPIP multi-tunnel validation")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for the port_representor item, which matches traffic
originating from the representor port specified in the pattern. This
item is supported in the FDB steering domain only (i.e. in flows with
the transfer attribute).
For example, the flow below redirects traffic originating from
ethdev 1 to ethdev 2.
testpmd> ... pattern eth / port_representor port_id is 1 / end actions
represented_port ethdev_port_id 2 / ...
To handle the above item, Tx queue matching is added in the driver,
and the flow is expanded into one flow per Tx queue. If the
port_representor spec is NULL, the flow is not expanded and matches
traffic from any representor port.
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds support for fdb_def_rule_en device argument to HW
Steering, which controls:
- the creation of the default FDB jump flow rule.
- the ability of the user to create transfer flow rules in the root
table.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The queue-based rte_flow_async_action_* functions work the same way as
the queue-based async flow functions: the operations can be pushed
asynchronously, and so can the pull.
This commit adds the missing push and pull support for async actions.
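As a rough illustration (a minimal sketch, not code from this patch),
an indirect action can be enqueued on a flow queue and the results
polled with the generic rte_flow API:

    #include <rte_common.h>
    #include <rte_flow.h>

    /* Enqueue creation of an indirect COUNT action on flow queue 0,
     * push the postponed operations and poll the completions.
     * Assumes the port was already configured for the async flow API.
     */
    static void
    async_action_example(uint16_t port_id)
    {
        struct rte_flow_error error;
        const struct rte_flow_op_attr op_attr = { .postpone = 1 };
        const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        const struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_COUNT,
        };
        struct rte_flow_op_result res[8];
        struct rte_flow_action_handle *handle;

        handle = rte_flow_async_action_handle_create(port_id, 0, &op_attr,
                                                     &conf, &action,
                                                     NULL, &error);
        rte_flow_push(port_id, 0, &error);           /* push queued ops */
        rte_flow_pull(port_id, 0, res, RTE_DIM(res), &error); /* results */
        (void)handle;
    }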
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for AGE action for HW steering.
This patch includes:
1. Add new structures to manage aging.
2. Initialize all of them in configure function.
3. Implement a per-second aging check using the CNT background thread.
4. Enable AGE action in flow create/destroy operations.
5. Implement a queue-based function to report aged flow rules.
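A minimal sketch (assumed usage, not code from this patch) of querying
aged-out rules per flow queue with the generic API:

    #include <rte_common.h>
    #include <rte_flow.h>

    /* Retrieve the user contexts of rules aged out on flow queue 0. */
    static int
    report_aged_rules(uint16_t port_id)
    {
        void *contexts[64];
        struct rte_flow_error error;

        return rte_flow_get_q_aged_flows(port_id, 0, contexts,
                                         RTE_DIM(contexts), &error);
    }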
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add the ability to create an indirect action handle for METER_MARK.
This allows sharing one meter between several different flows.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit adds connection tracking support to HW steering, as
previously done for SW steering.
The difference from the SW steering implementation is that it takes
advantage of the HW steering bulk action allocation support, so only a
single CT pool is needed in HW steering.
An indexed pool is introduced to record the actions allocated from the
bulk, the CT action state, etc. Once a CT action is allocated from the
bulk, an indexed object is also allocated from the indexed pool, and it
is likewise released on deallocation. This way mlx5_aso_ct_action can
also be managed by the indexed pool and no longer needs to be reserved
from mlx5_aso_ct_pool. The single CT pool is also saved directly in the
mlx5_aso_ct_action struct.
The ASO operation functions are shared with SW steering implementation.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The new mode 4 of the devarg "dv_xmeta_en" is added for HWS only. In
this mode, copying the 32-bit wide Rx / Tx metadata between the FDB and
NIC domains is supported.
The mark is supported in the NIC domain only and is not copied.
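For example (illustrative command line, other devargs omitted; assumes
HW steering is enabled with dv_flow_en=2):
  dpdk-testpmd -a <PCI_BDF>,dv_flow_en=2,dv_xmeta_en=4 -- -i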
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch implements creating and caching of the port action for use
with HW Steering FDB flows.
Actions are created during flow template API configuration, and only on
the port designated as the master. Attaching and detaching ports in the
same switching domain causes an update to the port actions cache by,
respectively, creating and destroying actions.
A new devarg, fdb_def_rule_en, is added to control whether the PMD
implicitly creates the default dedicated E-Switch rules; the PMD sets it
to 1 by default.
If set to 0, the default E-Switch rule is not created and the user can
create the specific E-Switch rules on the root table if needed.
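For example, to disable the default rule (illustrative command line,
other devargs omitted):
  dpdk-testpmd -a <PCI_BDF>,fdb_def_rule_en=0 -- -i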
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch introduces support for modify_field rte_flow actions in HWS
mode, which includes:
- Ingress and egress domains,
- SET and ADD operations,
- usage of arbitrary bit offsets and widths for packet and metadata
fields.
This is implemented in two phases:
1. On flow table creation, the hardware commands are generated based
on the rte_flow action templates and stored alongside the action
template.
2. On flow rule creation/queueing the hardware commands are updated with
values provided by the user. Any masks over immediate values, provided
in action templates, are applied to these values before enqueueing rules
for creation.
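For illustration (a minimal sketch, not code from this patch), a
modify_field action setting the outer IPv4 TTL from an immediate value,
as it could appear in an action template:

    #include <rte_flow.h>

    /* SET the outermost IPv4 TTL field to the immediate value 64. */
    static const struct rte_flow_action_modify_field set_ttl = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = {
            .field = RTE_FLOW_FIELD_IPV4_TTL,
            .level = 0,              /* outermost header */
        },
        .src = {
            .field = RTE_FLOW_FIELD_VALUE,
            .value = { 64 },         /* immediate value, field byte order */
        },
        .width = 8,                  /* number of bits to modify */
    };
    static const struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
        .conf = &set_ttl,
    };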
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In the flow_dv_hashfields_set() function, when item_flags was 0,
the code went directly into the first if branch and the else case never
had a chance to be checked. This caused the IPv6 and TCP hash
fields in the else case to never be set.
This commit adds a dedicated HW steering hash field set function
to generate the RSS hash fields.
Fixes: 3a2f674b6a ("net/mlx5: add queue and RSS HW steering action")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This provides shared item translation code for hardware steering
root table flows, as they still work under the FW steering mode.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
This splits the item matcher and value translation to
make the code reusable for the new steering mode.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
This splits the flow item translation code into a dedicated function
to share the item translation code with the hardware steering mode.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Introduce the mlx5_get_send_to_kernel_priority() function, which
returns the priority that must be used to jump back to table 0 in order
to send traffic to the kernel. This function returns the lowest
priority.
Add the flow_dv_translate_action_send_to_kernel() function, which
allocates the rdma-core send_to_kernel action object. It is called from
flow_dv_translate().
Fail translation of the RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL action in
HW steering.
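For reference, a minimal sketch (not code from this patch) of the
action as used through the generic API; the action takes no
configuration:

    #include <rte_flow.h>

    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    /* Used with rte_flow_create() under SW steering; translation of
     * this action fails under HW steering. */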
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add a new structure, mlx5_send_to_kernel_action, which holds together
the allocated action resource and a reference to the table used.
A new member of this type is added to struct mlx5_dev_ctx_shared.
The member is initialized upon the first created send_to_kernel action
and is reused for all future actions of this type.
These resources are released together with all the other shared DR
resources in mlx5_os_free_shared_dr().
Change the function flow_dv_tbl_resource_release() from
static to external.
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add a new mlx5 action flag, MLX5_FLOW_ACTION_SEND_TO_KERNEL.
Add the element MLX5_FLOW_FATE_SEND_TO_KERNEL to enum
mlx5_flow_fate_type. For that purpose, the 'fate_action' field in the
mlx5_flow_handle structure must be expanded from 3 bits to 4 bits.
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If there is no parameter in the represented_port item, it is treated as
matching all ports by default. But there is a limitation when using it
with a meter hierarchy.
This patch adds the limitation that, when matching all ports, the meter
hierarchy must not contain any meter with a drop count.
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
There is a new item type, represented_port, and currently a flow that
uses a meter hierarchy together with a represented_port match fails.
This patch fixes the failure by adding support for the represented_port
item in the meter hierarchy flow split.
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The driver splits a flow with a sample action into two sub-flows:
a prefix sub-flow and a suffix sub-flow.
In the case of a tunnel flow including a decap action, the driver should
translate the inner layers as outer for actions coming after the decap
action.
When splitting the flow, the packet layers used to detect the
attributes are inherited from the prefix flow by the suffix flow, but
the driver wrongly did not handle the decap adjustment, so the inner
layers were not shifted to the outer ones.
This patch adjusts the inherited layers in case of decap.
Fixes: 6e77151286 ("net/mlx5: fix match information in meter")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
ESP is one of the IPsec protocols over both IPv4 and IPv6 and is
considered a tunnel layer that cannot be followed by any other layer.
Taking that into consideration, ESP is treated as a layer 4 item.
Not defining ESP's priority makes it match with the same priority as
its preceding IP layer, which has a layer 3 priority. This leads to
issues in matching: a packet is matched by the first matching rule even
if it does not have an ESP layer in its pattern, disregarding any
following rules that could have an ESP item and would actually be a
more accurate match, since they have a longer matching criterion.
This is fixed by defining the priority of the ESP item as a layer 4
priority, making the match go to the rule with the more accurate and
longer matching criteria.
Fixes: 18ca4a4ec7 ("net/mlx5: support ESP SPI match and RSS hash")
Cc: stable@dpdk.org
Signed-off-by: Bassam Zaid AlKilani <bzalkilani@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The pci bus interface is for drivers only.
Mark as internal and move the header in the driver headers list.
While at it, clean up the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macros manipulating the bus singleton
object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
A negative integrity item refers to the condition when the item value
mask is set, but the value spec is cleared:
... integrity value mask l4_ok value spec 0 ...
ethdev library defines integrity bits `l3_ok` and `l4_ok` as accumulators
for all hardware L3 and L4 integrity verifications respectively.
Hardware `l3_ok` and `l4_ok` integrity bits refer to L3 and L4
network headers only.
Integrity bits `l3_ok` and `l4_ok` are not compatible between
ethdev library and hardware.
PMD translations for ethdev `l3_ok` are:
IPv4: `l3_ok` and `l3_csum_ok`
IPv6: `l3_ok`
ethdev `l4_ok` is translated into PMD `l4_ok` and `l4_csum_ok` bits.
A positive IPv4 `l3_ok` flow item configuration is translated into
a single matcher that ANDs the corresponding hardware bits.
Negative IPv4 `l3_ok` is translated into 2 hardware conditions where
each condition probes a single integrity bit:
ethdev::l3_ok is 0 => MLX5::l3_ok is 0 OR MLX5::l3_csum_ok is 0
MLX5 hardware does not support an OR condition within a flow rule item.
A negative IPv4 `l3_ok` must therefore be translated into 2 flow rules.
Similarly, a negative ethdev `l4_ok` condition is also translated into 2
hardware rules.
The current PMD roadmap does not allow an implicit flow rule split.
Bugzilla ID: 948
Cc: stable@dpdk.org
Suggested-by: Raja Zidane <rzidane@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When a meter is used by the E-Switch Manager port, there is an error
where the correct port ID cannot be obtained.
This patch fixes this by using a specific parsing process to get the
port ID for the E-Switch Manager.
Fixes: 3c481324ba ("net/mlx5: fix meter flow direction check")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
For BlueField with old firmware which does not expose the E-Switch
Manager vport ID, E-Switch Manager port matching works correctly only
when the BlueField is in embedded CPU mode.
This patch adds a description of this limitation.
Fixes: a564038699 ("net/mlx5: support E-Switch manager egress traffic match")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch introduces MODIFY_FIELD action support in meter policies.
The user can create a meter policy with the MODIFY_FIELD action in the
green/yellow actions.
For example:
testpmd> add port meter policy 0 21 g_actions modify_field op set
dst_type ipv4_ecn src_type value src_value 3 width 2 / ...
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds support for modifying the ECN field in the IPv4/IPv6
header.
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for the represented_port item in patterns. If the spec and
mask are both NULL, the translate function does not add the source vport
to the matcher.
For example, if testpmd starts with PF, VF-rep0 and VF-rep1, the command
below redirects packets from VF0 and VF1 to the wire:
testpmd> flow create 0 ingress transfer group 0 pattern eth /
represented_port / end actions represented_port ethdev_id is 0 / end
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The ESP item is not supported on Windows, yet it is expanded from the
expansion graph when trying to create the default flow to RSS all
packets.
Support ESP item matching (without the ability to match on the SPI field
on Windows).
Split ESP validation per OS.
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Yellow meter action support is added to meter hierarchy validation.
If one color uses a meter action, the other can only use the NULL action
or the same meter action, and only a shared meter is supported.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When a hierarchy meter is shared by other ports, all meter policies in
the hierarchy need to be iterated to create tag rules that set the next
meter ID in the packet, which is used by the related meter drop count.
This patch adds yellow support to the tag rule in the hierarchy, so both
green and yellow policy flows can set the correct meter ID.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds support for the meter action in yellow meter policy
flows, so the meter action can be used for both green and yellow policy
flows in a meter hierarchy.
Currently, the same meter must be used within one meter policy. Packets
passing the green/yellow policy flow carry the previous meter color of
green/yellow into the subsequent meter.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In packets with an ESP header, the inner IP is encrypted and its fields
cannot be used for RSS hashing, so ESP packets can be hashed only by the
outer IP layer.
Therefore, when using RSS on ESP packets, hashing may not be efficient,
because the only fields used by the hash functions are the outer IPs,
causing all traffic belonging to all tunnels between a given pair of
gateways to land on one core.
Adding the SPI hash field extends the spreading of IPsec packets.
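For example (illustrative testpmd syntax), a rule spreading ESP traffic
over several queues by hashing on the ESP fields:
testpmd> flow create 0 ingress pattern eth / ipv4 / esp / end actions
rss types esp end queues 0 1 2 3 end / end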
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When a meter with an RSS action is used, there may be several
sub-flows using different sub-policies in the flow splitting stage.
If there is no green action, there is an error where the same sub-policy
is always used for all sub-flows; some resources are overwritten and
cannot be released, leading to an assert during port close.
This patch fixes this issue by checking both the green and yellow queue
indexes when getting a blank sub-policy, to avoid the incorrect
resource overwrite.
Fixes: b38a12272b ("net/mlx5: split meter color policy handling")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an indirect action was created with an RSS action configured to
hash on both source and destination L3 addresses (or L4 ports), it
caused the shared hrxq to be configured to hash only on the destination
address (or port).
This patch fixes this behavior by refining the RSS types specified in
the configuration before calculating the hash types used for the hrxq.
Refining the RSS types removes the *_SRC_ONLY and *_DST_ONLY flags if
both are set.
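A minimal sketch of the refinement idea (not the PMD's actual code),
using the generic RSS type flags:

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* If both SRC_ONLY and DST_ONLY are requested for a layer, drop both
     * flags so the hash covers source and destination fields.
     */
    static uint64_t
    rss_types_refine(uint64_t types)
    {
        if ((types & RTE_ETH_RSS_L3_SRC_ONLY) &&
            (types & RTE_ETH_RSS_L3_DST_ONLY))
            types &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
        if ((types & RTE_ETH_RSS_L4_SRC_ONLY) &&
            (types & RTE_ETH_RSS_L4_DST_ONLY))
            types &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
        return types;
    }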
Fixes: 212d17b6a6 ("net/mlx5: fix missing shared RSS hash types")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GTP items were ignored during the conversion of modify header actions.
This caused the modify TTL action to generate a wrong modify header
command when the tunnel and inner headers used different IP versions.
This patch adds GTP item handling to the modify header action
conversion.
Fixes: 04233f36c7 ("net/mlx5: fix layer type in header modify action")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
For now, only one ASO action is supported in a single flow rule.
A flow rule with more than one ASO action should be rejected in the
validation stage.
A flow rule with a non-shared AGE action together with a COUNT action
should be treated as non-ASO, because AGE falls back to using the HW
counter, not the ASO hit object.
Group 0 uses the HW counter for the AGE action even if there is no
COUNT action.
This commit rejects rules like the following (in any group, if
transfer):
1. group 1 pattern... / end actions age / meter / end
2. group 1 pattern... / end actions conntrack / meter / end
3. group 1 pattern... / end actions age / conntrack... / end
If AGE comes together with COUNT in the above rules, it is allowed.
Fixes: daed4b6e ("net/mlx5: use aging by counter when counter exists")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A flow rule with a sample action is split into two sub-flows,
and a tag action is added implicitly to the sample prefix sub-flow;
the reserved metadata regC index is used for this tag action.
The reserved metadata regC is shared with the metering action.
For ConnectX-5 trusted devices (VF/SF), the reserved metadata regC was
invalid, since the PF only supported legacy metering.
This patch adds a check for the tag index and falls back to using the
application tag if the check fails.
Fixes: a9b6ea45be ("net/mlx5: fix tag ID conflict with sample action")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Flow domain and direction were validated when the OF_PUSH_VLAN action
appeared in flow actions. A flow was rejected whenever this action:
- was used in the NIC domain, in the ingress direction;
- was used in the FDB domain, in the ingress direction, on ConnectX-5.
This validation logic rejected a valid case where the OF_PUSH_VLAN
action was used when directing traffic to a hairpin queue
configured in Tx implicit mode.
This patch moves the code responsible for OF_PUSH_VLAN validation of
domain and direction from flow_dv_validate_push_vlan() to
flow_dv_validate(). Domain and direction are now validated only when
either a non-hairpin queue is used or the hairpin queue is configured
in Tx explicit mode.
Fixes: 96f85ec489 ("net/mlx5: check VLAN push/pop support")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Meter policy creation does not belong to the flow rule creation
process, so the thread workspace was not initialized and using it
triggered an assert error.
This patch removes the incorrect use of the thread workspace in meter
policy creation and adds a flag in the policy instead. When creating a
flow rule, this flag is used to set the mark flag in the thread
workspace.
Fixes: 082becbf1f ("net/mlx5: fix mark enabling for Rx")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When E-Switch mode was enabled, NIC egress flows were implicitly
appended with a source vport match. If the metadata register C0 was
used to maintain the source vport, it was initialized to zero on
steering engine entry, so a flow could be hit only if the source vport
was zero; the register C0 of the packet was thus not correct for
matching on the Tx side, which caused egress flow misses.
This patch:
- removes the implicit source vport match for NIC egress flows;
- rejects NIC egress flows on the representor ports at validation;
- allows internal NIC egress flows containing the TX_QUEUE item in
order to not impact hairpins.
Fixes: ce777b147b ("net/mlx5: fix E-Switch flow without port item")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
When both shared and non-shared RSS actions are present in a single
flow rule, the shared RSS index is unset by mistake.
For example:
1. flow indirect_action 0 create action_id 3 ingress action RSS ...
2. set sample_actions 0 mark id 43690 / queue index 0 / end
3. flow create 0 ingress group 107 pattern eth / sample ratio 2
index 0 / indirect 3 / end
The PMD translates the indirect action to a shared RSS description
first. In the split prefix flow, RSS->shared_RSS is unset when
translating the sample queue action, so the suffix flow treats the RSS
as non-shared.
Fixes: 8e61555657 ("net/mlx5: fix shared RSS and mark actions combination")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
On a TCP/IP-based layered network, ICMP is considered and implemented
as part of the layer 3 IP protocol. In fact, it is a user of the IP
protocol and must be encapsulated within IP packets, and there is no
layer 4 protocol over ICMP.
A rule with a layer 4 pattern should be matched prior to a rule with
only a layer 3 pattern when:
1. Both rules are created in the same table
2. Both rules could be hit
3. The rules have the same priority
The steering result of a packet is non-deterministic if there are
rules with IP and IP+ICMP patterns in the same table with the same
priority. Like TCP / UDP, a packet should hit the rule with the longer
matching criterion.
By treating the priority of ICMP/ICMPv6 as a layer 4 priority
internally in the PMD, IP+ICMP is hit prior to IP only.
Fixes: d53aa89aea ("net/mlx5: support matching on ICMP/ICMP6")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reduce the flex item flow handle size from 32 bits to 8 bits for each
flow.
The patch saves memory in setups with millions of flows.
Fixes: a23e9b6e3e ("net/mlx5: handle flex item in flows")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The AGE action can be implemented by either counters or the ASO
mechanism. ASO is more efficient than generating counters just for the
purpose of aging, so when ASO is supported its use is preferable. On
the other hand, when there is a COUNT action in the list of actions,
the counter is already generated, and it is best to use it for aging
even if ASO is supported. However, when the count action is "indirect",
it cannot be used for aging, since it may be updated from other flow
rules in which it participates.
Checking whether ASO is supported depends on both the capability of the
device and the flow rule group number; ASO is not supported for group 0.
However, the flow_dv_validate() function only checks the capability and
ignores the group, allowing inadmissible flow rules.
For example, when the device supports ASO and a flow rule combining an
indirect counter with aging is set for group 0, the rule should be
rejected, but it is created and does not function properly.
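Illustrative testpmd commands (assumed syntax) for such an inadmissible
rule:
testpmd> flow indirect_action 0 create action_id 1 ingress action count / end
testpmd> flow create 0 ingress group 0 pattern eth / end actions
indirect 1 / age timeout 10 / queue index 0 / end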
This patch updates the counter validation to also consider the group
number when deciding whether there is ASO support.
Fixes: daed4b6e3d ("net/mlx5: use aging by counter when counter exists")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The AGE action can be implemented by either counters or the ASO
mechanism. When the user asks for a count action in the flow rule, the
AGE action is implemented by the same counter. However, if the user
asks for an indirect count action, it cannot be used for AGE.
The flow_dv_validate() function has a flag named "shared_count" which
indicates whether AGE action validation depends on ASO support or not.
This flag is initialized to false and is updated if there is an
indirect count action in the action list.
This flag is mistakenly set within the loop that reads the action list,
so in each iteration it is reinitialized to false, regardless of the
existence of an indirect count action in the list.
This patch moves the flag initialization out of the loop.
Fixes: f3191849f2 ("net/mlx5: support flow count action handle")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The table remove callback function tries to destroy the
matchers list associated with table entries without checking
whether the list is valid, which causes a null pointer dereference.
Fixed by validating the matchers list before destroying it.
The issue can be reproduced with testpmd on Windows by running:
port close all
Fixes: 1872635570 ("net/mlx5: make matcher list thread safe")
Cc: stable@dpdk.org
Signed-off-by: Adham Masarwah <adham@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>