The AltiVec header file defines the keyword "vector", except in C++
builds.
The keyword "vector" conflicts easily with other uses of the name.
As a rule, it is better to use the alternative keyword "__vector",
so that "vector" can be #undef'd after including the AltiVec header.
Later it may become possible to #undef vector in rte_altivec.h,
at the cost of a compatibility breakage.
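A minimal sketch of the resulting pattern in C (the helper name and the
-maltivec flag are illustrative, not part of this patch):

    /* Built with AltiVec enabled, e.g. -maltivec. */
    #include <altivec.h>   /* defines "vector" as a macro in C builds */
    #undef vector          /* safe once only "__vector" is used below */

    static inline __vector unsigned char
    add_bytes(__vector unsigned char a, __vector unsigned char b)
    {
            return vec_add(a, b);   /* classic AltiVec intrinsic */
    }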
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
In packets with an ESP header, the inner IP is encrypted, and its
fields cannot be used for RSS hashing, so ESP packets can be hashed only
by the outer IP layer.
As a result, RSS on ESP packets may spread traffic poorly: the hash is
computed only over the outer IPs, so all traffic of all tunnels between
a given pair of gateways lands on one core.
Adding the SPI hash field extends the spreading of IPsec packets.
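For illustration, ESP-aware RSS could be requested in testpmd with a
rule such as the following (port and queue numbers are arbitrary, and it
assumes the "esp" RSS type is recognized by the build):

    flow create 0 ingress pattern eth / ipv4 / esp / end
         actions rss types esp end queues 0 1 end / end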
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When a meter with an RSS action is used, there may be several sub-flows
using different sub-policies in the flow splitting stage.
If there is no green action, the same sub-policy is erroneously reused
for all sub-flows, so some resources are overwritten and cannot be
released, leading to an assert during port close.
This patch fixes the issue by checking both the green and yellow queue
indexes when fetching a blank sub-policy, to avoid the incorrect
resource overwrite.
Fixes: b38a12272b ("net/mlx5: split meter color policy handling")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The driver wrongly applied the LRO configuration to the TIR of the DevX
drop queue even when LRO is not supported.
Actually, the LRO configuration is not relevant to the drop queue at
all.
This caused an initialization failure on devices that do not support LRO
when the drop queue was created.
The DevX drop queue creation probably missed the fact that LRO is set by
default in the TIR creation function and did not unset it, unlike the
other cases that unset LRO.
Change the default LRO configuration to unset, and set it only when all
the TIR queues are configured with LRO.
Fixes: bc5bee028e ("net/mlx5: create drop queue using DevX")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The mlx5_rx_queue_setup() function gets the LRO offload from the user.
When LRO is configured, the LRO flag in rxq_data is set to 1.
This patch adds validation to make sure that LRO is supported.
Fixes: 17ed314 ("net/mlx5: allow LRO per Rx queue")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an indirect action was created with an RSS action configured to
hash on both source and destination L3 addresses (or L4 ports), the
shared hrxq was configured to hash only on the destination address
(or port).
This patch fixes this behavior by refining the RSS types specified in
the configuration before calculating the hash types used for the hrxq.
Refining the RSS types removes the *_SRC_ONLY and *_DST_ONLY flags when
both are set.
Fixes: 212d17b6a6 ("net/mlx5: fix missing shared RSS hash types")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Queue statistics are continuously updated in the Rx/Tx burst routines
while handling traffic. In addition, the statistics can be reset
(written with zeroes) by other threads on a statistics reset, causing a
race condition which could result in wrong stats.
The patch introduces reference values: the actual counters are written
only by the Rx/Tx burst threads, and the reference values are updated on
stats reset.
Fixes: 87011737b7 ("mlx5: add software counters")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GTP items were ignored during conversion of modify header actions. This
caused the modify TTL action to generate a wrong modify header command
when the tunnel and inner headers used different IP versions.
This patch adds GTP item handling to the modify header action
conversion.
Fixes: 04233f36c7 ("net/mlx5: fix layer type in header modify action")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The Mlx5Devx library has new APIs for setting and getting the MTU.
Add new glue functions that wrap the new Mlx5Devx library APIs.
Implement the OS ethdev callbacks to use the new glue functions on
Windows.
Signed-off-by: Adham Masarwah <adham@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Support setting the promiscuous modes by calling the new API in the
Mlx5Devx library.
Add a new glue API for Windows, used to communicate with the Windows
driver to enable/disable PROMISC or ALLMC.
Signed-off-by: Adham Masarwah <adham@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The mlx5_rxq_is_hairpin() function checks whether an RxQ is of the
hairpin type.
It does so by reading a flag in the Rx control structure returned by the
mlx5_rxq_ctrl_get() function.
The function also verifies that the queue index is valid, even though it
has already been checked within mlx5_rxq_ctrl_get().
This patch removes the redundant check.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The mlx5_rxq_get() function takes an RxQ index and returns the
corresponding RxQ private structure.
When given an invalid index, it accesses memory out of the array bounds,
which may cause undefined behavior.
This patch adds a check for invalid indexes before accessing the array.
Fixes: 0cedf34da7 ("net/mlx5: move Rx queue reference count")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The flow_drv_rxq_flags_set() function sets the Rx queue flags (Mark/Flag
and tunnel ptypes) according to the device flow.
It tries to get the RxQ control structure to update its ptype. However,
external RxQs have no control structure to update, which may cause a
crash.
This patch adds a check for whether the queue is external.
Fixes: 311b17e669 ("net/mlx5: support queue/RSS actions for external Rx queue")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In rte_flow, if a counter action precedes a meter with a non-termination
policy, the counter value includes only packets that are not dropped.
This patch fixes the issue by differentiating on the order of the
counter and the non-termination meter (illustrated in testpmd below):
1. counter + meter: counts all packets hitting this flow.
2. meter + counter: counts only packets not being dropped.
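For illustration in testpmd (meter 1 is assumed to be already created
with a non-termination policy; port, group and ids are arbitrary):
counter before meter, counting all packets hitting the flow:

    flow create 0 ingress group 1 pattern eth / end
         actions count / meter mtr_id 1 / end

meter before counter, counting only packets not dropped by the meter:

    flow create 0 ingress group 1 pattern eth / end
         actions meter mtr_id 1 / count / end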
Fixes: 51ec04dc7b ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Users can probe the primary or the secondary PCIe ID when bonding is
configured:
1. -a 0a:00.0,representor=pf[0-1]vf[0-1]: the PMD probes 5 ports in
total, the bonding device plus 4 representor ports.
2. -a 0a:00.1,representor=pf[0-1]vf[0-1]: the PMD probes only 2
representor ports.
In the 2nd case, the bonding IB device does not have the same PCIe ID,
so the PMD needs to check the bonding relationship, otherwise probing
fails.
Fixes: 6856efa54e ("net/mlx5: fix PF leak on PCI probing failure")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When txq_inline_max is too large and an mbuf is multi-segment,
it may be impossible to inline data and build a valid WQE,
because the WQE length would be larger than HW can represent.
This misconfiguration cannot be detected at startup,
because the condition depends on the mbuf composition.
The data-path check meant to prevent the error
treated the length limit as expressed in 64B units,
while the calculated length and the limit are in 16B units.
Fix the condition to avoid subsequent TxQ failure and recovery.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Multi-Packet Rx queue uses PMD-managed buffers to store packets.
These buffers are externally attached to user mbufs.
This conflicts with the feature that allows using user-managed
externally attached buffers in an application.
Fall back to SPRQ in case an external-buffer mempool is configured.
The limitation is already documented in the mlx5 guide.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The default CPU socket ID was used while creating the Rx queue, which
caused creation failure when the hardware did not reside on the default
socket.
The patch sets the correct CPU socket ID for the mlx5_rxq_ctrl before
calling mlx5_rxq_create_devx_rq_resources(), which eventually calls
mlx5_devx_rq_create() with the correct CPU socket ID.
Fixes: bc5bee028e ("net/mlx5: create drop queue using DevX")
Cc: stable@dpdk.org
Signed-off-by: Thinh Tran <thinhtr@linux.vnet.ibm.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If a flow contains both an explicit port match and a sample action, the
mlx5 PMD pushes the explicit port match into the prefix subflow and uses
a tag item match in the suffix subflow.
The explicit port match was translated into a source vport match, so the
sample suffix subflow lost this match after the flow split.
This patch copies the explicit port match to the sample suffix subflow,
so that the latter gets the correct source vport value in the flow
matcher.
Fixes: b4c0ddbfcc ("net/mlx5: split sample flow into two sub-flows")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A flow rule with a sample action is split into two sub-flows: an
implicit tag action with a unique ID is added to the prefix sub-flow,
the suffix sub-flow uses a tag item to match on that unique ID, and the
implicit set tag action is inserted next to the sample action.
When either a PUSH VLAN action or an ENCAP action preceded the sample
action, the implicit set tag action was added after the PUSH VLAN or
ENCAP action, causing flow creation failure because rdma-core does not
support this action order.
This patch ensures the implicit set tag action is inserted before any
PUSH VLAN or ENCAP action in the prefix sub-flow.
Fixes: 6a951567c1 ("net/mlx5: support E-Switch mirroring and jump in one flow")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
For now, only one ASO action is supported in a single flow rule, so a
flow rule with more than one ASO action should be rejected in the
validation stage.
A flow rule with non-shared AGE and COUNT actions together should be
treated as non-ASO, because AGE falls back to using a HW counter rather
than an ASO hit object.
Group 0 uses a HW counter for the AGE action even without a COUNT
action.
This commit rejects rules (no matter which group, if transfer) like:
1. group 1 pattern... / end actions age / meter / end
2. group 1 pattern... / end actions conntrack / meter / end
3. group 1 pattern... / end actions age / conntrack... / end
If AGE comes together with COUNT in the rules above, it is allowed.
Fixes: daed4b6e ("net/mlx5: use aging by counter when counter exists")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A flow rule with a sample action is split into two sub-flows, and a tag
action is implicitly added to the sample prefix sub-flow; the reserved
metadata regC index was used for this tag action.
The reserved metadata regC is shared with the metering action, and for a
ConnectX-5 trusted device (VF/SF) the reserved metadata regC was invalid
since the PF supports only legacy metering.
This patch adds a check of the tag index and falls back to using an
application tag if the check fails.
Fixes: a9b6ea45be ("net/mlx5: fix tag ID conflict with sample action")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Flow domain and direction were validated when the OF_PUSH_VLAN action
appeared in flow actions. A flow was rejected whenever this action:
- was used in the NIC domain, in ingress direction;
- was used in the FDB domain, in ingress direction, on ConnectX-5.
This validation logic rejected a valid case in which the OF_PUSH_VLAN
action was used to direct traffic to a hairpin queue configured in
Tx implicit mode.
This patch moves the code responsible for OF_PUSH_VLAN validation of
domain and direction from flow_dv_validate_push_vlan() to
flow_dv_validate(). Domain and direction are now validated when either a
non-hairpin queue is used or the hairpin queue is configured in Tx
explicit mode.
Fixes: 96f85ec489 ("net/mlx5: check VLAN push/pop support")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Disabling means there is no packet drop in the meter: the meter is
always active but programmed with another CIR/CBS value.
If the user wants the meter to be disabled at creation, the PMD calls
the disable() API manually after the meter is initialized.
Fixes: 4443201863 ("net/mlx5: support meter creation with policy")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A DPDK application configured with no Rx queue should be supported.
In this mode, the NIC can be used to generate packets without receiving
any ingress traffic.
In the current implementation, when no Rx queue is specified, the array
storing the queue pointers is NULL after allocation, and the check of
the array allocation prevents the application from starting up.
By adding a condition on the Rx queue number, an application with no Rx
queue can start up successfully.
Fixes: 4cda06c3c3 ("net/mlx5: split Rx queue into shareable and private")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
E-Switch DV flow is supported only when DV flow is supported and
enabled.
The mlx5_shared_dev_ctx_args_config() function ensures that when the
environment does not support DV, the "dv_esw_en" flag is turned off.
However, when the environment supports DV but the user has requested to
disable it, the "dv_esw_en" flag remains on and causes the PMD to try to
create E-Switch flows through the Verbs engine.
This patch adds a check to ensure that the "dv_esw_en" flag is turned
off when DV flow is disabled.
Fixes: a13ec19c19 ("net/mlx5: add shared device context config structure")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When using the Verbs flow engine to create flows, the GRE Verbs spec was
put at the end of the specs list. This created problems for flows
matching MPLSoGRE packets: in the generated specs list, the MPLS spec
was put before the GRE spec, but the Verbs API requires that the MPLS
spec be placed in its exact location in the protocol stack.
This patch fixes this behavior. Space for the GRE Verbs spec is reserved
at its exact location, the MPLS Verbs spec is inserted at its exact
location as well, and the GRE spec is filled after all flow items are
parsed.
Fixes: 985b479267 ("net/mlx5: fix GRE protocol type translation for Verbs")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Flex item availability is restricted to BlueField-2 and BlueField-3
PF ports.
The patch validates port type compliance before proceeding to
flex item creation.
Fixes: db25cadc08 ("net/mlx5: add flex item operations")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Meter policy creation is not part of the flow rule creation process, so
the thread workspace is not initialized there and using it triggers an
assert.
This patch removes the incorrect use of the thread workspace in meter
policy creation and adds a flag in the policy instead. When creating a
flow rule, this flag is used to set the mark flag in the thread
workspace.
Fixes: 082becbf1f ("net/mlx5: fix mark enabling for Rx")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In the previous implementation, a count was used to record the number of
references to a table resource, including the creation of the table,
jumps to the table, and the matchers created on the table. Before
releasing the table resource via the driver, it was necessary to ensure
that there was no remaining reference to this table.
After the optimization of resource management, the reference count now
lives in the hash list entry as a unified solution for all resource
management.
There is no need to keep the "refcnt" in the table resource structure;
it is removed to avoid unnecessary memory overhead.
Fixes: afd7a62514 ("net/mlx5: make flow table cache thread safe")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Certain flow rules containing a modify header action for an L4 port
could be erroneously rejected as invalid, because this action
was counted as consuming two HW actions, while it only requires one.
Fixes: 72a944dba1 ("net/mlx5: fix header modify action validation")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an indirection table object is modified, it updates the reference
counter for each Rx queue related to it.
The reference counters for regular queues are indeed updated; however,
the reference counters for external RxQs are not.
This patch adds the update for external RxQs too.
Fixes: 311b17e669 ("net/mlx5: support queue/RSS actions for external Rx queue")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an indirection table is destroyed, each Rx queue related to it
should be dereferenced.
The regular queues are indeed dereferenced; however, the external RxQs
are not.
This patch adds dereferencing for external RxQs too.
Fixes: 311b17e669 ("net/mlx5: support queue/RSS actions for external Rx queue")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When E-Switch mode was enabled, NIC egress flows were implicitly
appended with a source vport match. If metadata register C0 was used to
maintain the source vport, it was initialized to zero on packet steering
engine entry, so the flow could be hit only if the source vport was
zero. Register C0 of the packet was therefore not correct to match on
the Tx side, and this caused egress flow misses.
This patch:
- removes the implicit source vport match for NIC egress flows.
- rejects NIC egress flows on the representor ports at validation.
- allows internal NIC egress flows containing the TX_QUEUE item in
order not to impact hairpins.
Fixes: ce777b147b ("net/mlx5: fix E-Switch flow without port item")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
When both shared and non-shared RSS actions are present in a single
flow rule, the shared RSS index is unset by mistake.
For example:
1. flow indirect_action 0 create action_id 3 ingress action rss ...
2. set sample_actions 0 mark id 43690 / queue index 0 / end
3. flow create 0 ingress group 107 pattern eth / end actions sample
   ratio 2 index 0 / indirect 3 / end
The PMD first translates the indirect action to a shared RSS
description.
In the split prefix flow, RSS->shared_RSS was unset when translating the
sample queue action, so the suffix flow treated the RSS as non-shared.
Fixes: 8e61555657 ("net/mlx5: fix shared RSS and mark actions combination")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The RSS expansion scheme has 2 operational modes: default and specific.
The default mode expands into all valid options for a given network
layer. For example, Ethernet expands by default into VLAN, IPv4 and
IPv6, L3 expands into TCP and UDP, etc.
The specific mode expands according to the flow item next-protocol
configuration provided by the item spec and mask parameters.
There are 3 outcomes for the specific expansion:
1. Back to default: the result of (spec & mask) allows all
possibilities.
For example: eth type mask 0 type spec 0
2. No result: the item configuration has no valid expansion.
For example: eth type mask 0xffff type spec 101
3. Direct: the flow item mask and spec configuration yield a valid
expansion option.
For example: eth type mask 0x0fff type spec 0x0800
The PMD expanded flow items with explicit spec and mask configuration
into Direct (3) or No result (2); the default expansions (1) were
wrongly handled as No result.
Fixes: f3f1f576f4 ("net/mlx5: fix RSS expansion with explicit next protocol")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The flex item API supports network headers with fixed and variable
lengths.
When the PMD compiles a new flex item object configuration, it converts
RTE parameters into matching PMD PARSE_GRAPH parameters and checks the
parameter values against port capabilities.
The current implementation mismatched the PARSE_GRAPH configuration
fields for the fixed-size header.
Fixes: b293e8e49d ("net/mlx5: translate flex item configuration")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In a TCP/IP layered network, ICMP is considered and implemented as part
of the layer 3 IP protocol. In practice, it is a user of the IP protocol
and must be encapsulated within IP packets, and there is no layer 4
protocol on top of ICMP.
A rule with a layer 4 match should be hit before a rule with only a
layer 3 pattern when:
1. Both rules are created in the same table
2. Both rules could be hit
3. The rules have the same priority
The steering result of a packet is non-deterministic if there are rules
with IP and IP+ICMP patterns in the same table with the same priority.
As with TCP/UDP, a packet should hit the rule with the longer matching
criterion.
By treating the priority of ICMP/ICMPv6 as a layer 4 priority internally
in the PMD, IP+ICMP is hit before IP only.
Fixes: d53aa89aea ("net/mlx5: support matching on ICMP/ICMP6")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reduce flex item flow handle size from 32 bits to 8 bits for each
flow.
The patch will save memory in setups with millions of flows.
Fixes: a23e9b6e3e ("net/mlx5: handle flex item in flows")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GRE item translation must set the inner protocol value.
For that reason, the item is not translated in place when PMD
translation iterates over the flow items, but after the loop, when all
inner types are discovered.
Since the PMD does not translate the GRE flow item inside the
translation loop, it must save the GRE item for access outside the loop.
Fixes: 985b479267 ("net/mlx5: fix GRE protocol type translation for Verbs")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The AGE action can be implemented by either counters or the ASO
mechanism. ASO is more efficient than generating counters just for the
purpose of aging, so when ASO is supported its use is preferable. On the
other hand, when there is a count in the list of actions, the counter is
already generated, and it is best to use it for aging even if ASO is
supported.
However, when the count action is "indirect", it cannot be used for
aging since it may be updated from other flow rules in which it
participates.
Whether ASO is supported depends on both the capability of the device
and the flow rule group number; ASO is not supported for group 0.
However, the flow_dv_validate() function only checks the capability and
ignores the group, allowing inadmissible flow rules.
For example, when the device supports ASO and a flow rule combines an
indirect counter with aging for group 0, the rule should be rejected,
but it is created and does not function properly.
This patch updates the counter validation so that it also considers the
group number when deciding whether ASO is supported.
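For illustration, a testpmd sequence of the kind that is now rejected
(ids and timeout are arbitrary):

    flow indirect_action 0 create action_id 1 ingress action count / end
    flow create 0 ingress group 0 pattern eth / end
         actions indirect 1 / age timeout 10 / end

Group 0 cannot use ASO for aging, and the indirect counter cannot be
reused for it, so validation fails.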
Fixes: daed4b6e3d ("net/mlx5: use aging by counter when counter exists")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The AGE action can be implemented by either counters or the ASO
mechanism.
When the user requests a count action in the flow rule, the AGE action
is implemented by the same counter. However, if the user requests an
indirect count action, it cannot be used for AGE.
The flow_dv_validate() function has a flag named "shared_count" which
indicates whether AGE action validation depends on ASO support or not.
This flag should be initialized to false and updated if there is an
indirect count action in the action list.
However, the flag was mistakenly set within the loop that reads the
action list, so in each iteration it was reinitialized to false,
regardless of the existence of an indirect count action in the list.
This patch moves the flag initialization out of the loop.
Fixes: f3191849f2 ("net/mlx5: support flow count action handle")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The table remove callback function tried to destroy the matchers list
associated with table entries without checking that the list is valid,
causing a NULL pointer dereference.
Fixed by validating the matchers list before destroying it.
The issue can be reproduced with testpmd on Windows by running:
port close all
Fixes: 1872635570 ("net/mlx5: make matcher list thread safe")
Cc: stable@dpdk.org
Signed-off-by: Adham Masarwah <adham@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
For an indexed pool with a local cache, when a new trunk is allocated,
half of the trunk's indexes were fetched into the local cache. If the
local cache size was less than half of the trunk size, a memory overlap
happened.
This commit adds a check of the fetch size: if the local cache size is
less than the fetch size, the fetch size is adjusted down to the local
cache size.
Fixes: d15c0946be ("net/mlx5: add indexed pool local cache")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A link status change takes time that depends on the HW and the kernel.
The status was checked immediately after the change was issued at
probing.
If the port had been down before probing, a "down" state might be read
even though the port was about to come "up" imminently.
After that, DPDK mistakenly reported the port as "down",
and "ifconfig $DEV up" did not trigger an LSC event,
because from the system's perspective the port was already "up".
Install the Netlink event handler at port probe, before requesting the
port to come up, in order to receive the LSC event even if the port
comes up between probe and start.
Fixes: a85a606ca5 ("net/mlx5: fix link status initialization")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Sometimes net/mlx5 devices did not detect link status change to "up".
Each shared device was monitoring IBV_EVENT_PORT_{ACTIVE,ERR}
and queried the link status upon receiving the event.
IBV_EVENT_PORT_ACTIVE is delivered when the logical link status
(UP flag) is set, but the physical link status (RUNNING flag)
may be down at that time, in which case the new link status
would be erroneously considered down.
The IBV interface is insufficient for this task,
so monitor interface events using Netlink instead.
Fixes: 198a3c339a ("mlx5: handle link status interrupts")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Introduce mlx5_nl_read_events() to read Netlink events
(technically, messages) from a socket that was configured
to listen for them via a new mlx5_nl_init() parameter.
Add mlx5_nl_parse_link_status_update() helper
to extract information from link-related events.
This patch is a shared base for later fixes.
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for the queue/RSS action on external Rx queues.
During indirection table creation, the queue index is taken from the
mapping array.
This feature supports neither LRO nor hairpin.
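For illustration, once external queues have been mapped to DPDK-level
indexes through the mlx5 external RxQ mapping API, they could be used in
a testpmd rule such as the following (indexes 65000 and 65001 are
assumed to be mapped beforehand):

    flow create 0 ingress pattern eth / end
         actions rss queues 65000 65001 end / end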
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>