tainted_data_downcast: Downcasting match_item->meta from void * to
struct virtchnl_proto_hdrs implies that the data that this pointer points
to is tainted.
var_assign_var: Assigning: proto_hdrs = match_item->meta.
Both are now tainted.
var_assign_var: Assigning: rss_meta->proto_hdrs = *proto_hdrs. Both are
now tainted.
Passing tainted expression "rss_meta->proto_hdrs.count" to
"iavf_refine_proto_hdrs", which uses it as a loop boundary.
Removed the temporary variable 'proto_hdrs' and copied the whole memory of
match_item->meta with the exact structure size to avoid the data downcast.
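For illustration only, here is a minimal self-contained sketch of the
pattern described above (generic names, not the actual iavf code):
  #include <string.h>

  struct proto_hdrs {
          int count; /* plus the remaining protocol header fields */
  };

  struct match_item {
          void *meta;
  };

  static void
  copy_meta(struct proto_hdrs *dst, const struct match_item *item)
  {
          /* Instead of downcasting item->meta to a structure pointer and
           * dereferencing it, copy the exact structure size in one step.
           */
          memcpy(dst, item->meta, sizeof(*dst));
  }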
Coverity issue: 381131
Fixes: 91f27b2e39 ("net/iavf: refactor RSS")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
For the I40E_10G-10G_BASE_T_X722 NIC, when the port is configured with a
link speed, it cannot receive jumbo frame packets.
This is because setting the maximum frame size fails when the port is
started while its link status is still down.
This patch fixes the error by forcing the maximum frame size to be set
when the port is started.
Fixes: 2184f7cdee ("net/i40e: fix max frame size config at port level")
Cc: stable@dpdk.org
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Tested-by: Dukai Yuan <dukaix.yuan@intel.com>
Passing tainted expression "msg.data_len" to
"rte_memcpy", which uses it as a loop boundary.
Replace the tainted expression with a temporary variable
to avoid the tainted scalar Coverity warning.
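For illustration only, a hedged sketch of the pattern (generic names and
buffer bound, not the actual idpf code):
  #include <stdint.h>
  #include <string.h>

  #define BUF_SZ 4096u

  static void
  copy_msg(uint8_t *dst, const uint8_t *payload, uint16_t data_len)
  {
          /* Read the tainted length once into a local variable and bound
           * it before it is used as a copy/loop boundary.
           */
          uint16_t len = data_len;

          if (len > BUF_SZ)
                  len = BUF_SZ;
          memcpy(dst, payload, len);
  }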
Coverity issue: 381688
Fixes: fb4ac04e9b ("common/idpf: introduce common library")
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
To make the X722's PCTYPE compatible with the X710, the PCTYPE in the
FD programming descriptor is translated into different types using the
GLQF_FD_PCTYPE table. But the 'UNICAST_IPV4_UDP' and 'MULTICAST_IPV4_UDP'
types are only supported on the X722, so the corresponding registers
cannot be configured after the translation.
This patch removes the translation before the FD filter is programmed.
Fixes: ef4c16fd91 ("net/i40e: refactor RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
Tested-by: Lingli Chen <linglix.chen@intel.com>
HW VLAN offload cannot be enabled because the HW capability flags
are not set correctly.
Fixes: eff56a7b9f ("net/iavf: add offload path for Rx AVX512")
Cc: stable@dpdk.org
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When a vsi that already exists in the created vsi_list subscribes to the
same filter again, the return value ICE_SUCCESS results in duplicate flow
rules being stored, which causes 'flush' and 'destroy' errors.
Fixes: fed0c5ca5f ("net/ice/base: support programming a new switch recipe")
Cc: stable@dpdk.org
Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
The protocol header count should be changed when the tunnel level is
larger than 1.
Fixes: 0b241667cc ("net/iavf: fix tainted scalar")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch fixes the building of context and data descriptor
on the scalar path for IPSec.
Fixes: f7c8c36fde ("net/iavf: enable inner and outer Tx checksum offload")
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
There's a build error with clang 3.4.2 on CentOS 7:
drivers/net/idpf/idpf_vchnl.c:141:13: error: comparison of constant
522 with expression of type 'enum virtchnl_ops' is always false
[-Werror,-Wtautological-constant-out-of-range-compare]
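For illustration only, the class of code that triggers this warning
(generic enum, not the idpf code); one common workaround is to compare
through an integer-typed value, though the actual fix may differ:
  #include <stdint.h>

  enum ops { OP_A = 1, OP_B = 2 };

  static int is_unknown(enum ops op)
  {
          /* Old clang derives the enum value range from its enumerators
           * and flags a comparison against 522 as always false.
           */
          return op == 522;
  }

  static int is_unknown_on_int(uint32_t op)
  {
          return op == 522; /* no warning: plain integer comparison */
  }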
Fixes: 549343c25d ("net/idpf: support device initialization")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Ali Alnubani <alialnu@nvidia.com>
The PMD attempt to read HW UTC counter properties can fail because the
feature is not supported by the port FW or the mlx5 kernel module.
In that case the PMD can still produce correct timestamps if it runs on a
core with nanosecond time resolution.
Fixes: b006786095 ("common/mlx5: update log for DevX general command failure")
Cc: stable@dpdk.org
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Currently there is a limitation that the Drop action can only co-exist
with the Count action.
The Sample and Age actions can also exist with Drop within the same flow,
so this patch includes them in the Drop action validation.
Fixes: 70faf9ae0a ("net/mlx5: unify validation of drop action")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If HW Steering is enabled, Rx queues are configured to receive MARKs
when a table with MARK actions is created. After stopping the port, the
Rx queue configuration is released, but when starting the port again the
mark flag was not updated in the Rx queue configuration.
This patch introduces a reference count on the MARK action, which is
increased/decreased per template_table create/destroy.
When the port is stopped, the Rx queue configuration is not cleared if
the reference count is not zero.
Fixes: 3a2f674b6a ("net/mlx5: add queue and RSS HW steering action")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The shared IB device (sh) has per-port data with a field for the
interrupt handler port_id. It is used by the shared interrupt handler to
find the corresponding rte_eth device by IB port index.
If the value is equal to or greater than RTE_MAX_ETHPORTS, it means there
is no subhandler installed for the specified IB port index.
When a few ports are created under the same sh, the sh is created with
the first port and the interrupt handler port_id is initialized to
RTE_MAX_ETHPORTS for each port.
During port creation, the interrupt handler port_id is updated with the
correct value. From then on, the mlx5_dev_interrupt_nl_cb function uses
this port and its priv structure.
However, when the ports are closed, this field isn't updated and the
interrupt handler continues working until it is uninstalled in SH
destruction.
If mlx5_dev_interrupt_nl_cb is called between port closing and SH
destruction, it uses an invalid port, causing a crash.
This patch adds the interrupt handler port_id update to the close
function and adds a memory barrier to make sure it is done before the
priv reset.
Fixes: 655c3c26c1 ("net/mlx5: fix initial link status detection")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If the application-provided maximal LRO size was less than the expected
PMD minimum, the PMD either crashed with an assert, if asserts were
enabled, or proceeded with port initialization and set the port-private
maximal LRO size below the supported minimum.
The patch terminates port start if the LRO size does not match the PMD
requirements and TCP LRO offload was requested for at least one Rx queue.
Fixes: 50c00baff7 ("net/mlx5: limit LRO size to maximum Rx packet")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Local cache for an indexed pool is not initialized in the situation when
all the indices are allocated on one CPU core and freed on another one.
That leads to a crash once we try to check its reference counter.
Check that the local cache is initialized before accessing this counter.
Fixes: d15c0946be ("net/mlx5: add indexed pool local cache")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch fixes the matcher disconnection handling, by removing the RTC
references from flow table if the currently removed matcher was the last
one for the given table. As a result RTC in this matcher can be
correctly freed, since there are no dangling references to the RTC.
Fixes: c467608215 ("net/mlx5/hws: add matcher object")
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch fixes the order of dereferencing the default FDB miss table
and destroying the flow table object. The flow table should be destroyed
before the dereference.
Fixes: 394cc7ba40 ("net/mlx5/hws: add table object")
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Since the 6DX, the NIC supports multiple timestamp formats in CQEs,
configured via firmware. If the real-time timestamp format has been
configured, the correct attributes should be specified on queue creation
via DevX. Setting these attributes was missed on steering queue creation,
and hardware steering initialization failed.
Fixes: 3eb748869d ("net/mlx5/hws: add send layer")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When creating meter policy rules, it's possible to use flow item
translation to add src port match criteria. Currently the item
translation process needs to get the thread workspace to store the vport
metadata tag, but in policy creation the thread workspace was not
initialized, which causes an assert failure.
This patch adds initialization of thread-local workspace when creating
meter policy rules to avoid that assert.
Fixes: e9de8f33ca ("net/mlx5: fix source port checking in sample flow rule")
CC: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In order to share the flow item translation code, the translation of
spec and mask was split.
In that case, the assert for the GENEVE option length with the mask
becomes invalid, since the length in the mask is a bitmask. And as the
memcpy around the assert already checks the GENEVE option length, the
assert looks redundant.
This commit removes the unneeded GENEVE option length assert.
Fixes: cd4ab74206 ("net/mlx5: split flow item matcher and value translation")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Packets can be split into several mbufs with various data sizes.
There is no limitation on how small these segments can be.
But there is a limitation on the Tx side for the inline configuration:
WQEs sent with inline headers smaller than required are dropped.
The very first segment must be larger than the minimal inline Eth
segment.
Enforce this requirement by merging a few segments in this case.
Fixes: ec837ad0fc ("net/mlx5: fix multi-segment inline for the first segments")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When a meter policy contains a fate action of port_id, the policy flow
must match the src port to which the policy belongs. However, this meter
cannot be used by a flow that matches another src port.
This patch fixes this by adding a new policy flow matching the new src
port from the user flow dynamically, but then the meter cannot be used
by a flow that matches all the ports.
Fixes: 48fbc1be82 ("net/mlx5: fix meter policy flow match item")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If any meter in the hierarchy has a policy flow containing set_tag or
modify_field action, the policy flow must match the src port to which
the policy belongs, to determine the order of modify_hdr and
meter action. But then the meter hierarchy cannot be used by a user flow
that matches another src port.
To use this type of meter hierarchy for other src ports, we need to add
a new policy flow matching the new src port from the user flow
dynamically. But then it cannot be used by a flow matching all ports.
Fixes: ca7e6051e7 ("net/mlx5: limit meter flow when matching all ports")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When creating a flow matching the port representor item with a meter
action, it fails due to incorrect parsing of the item.
This patch fixes the issue by adding the correct item parsing for the
port representor in validation.
Fixes: 707d5e7d79 ("net/mlx5: support flow matching on representor ID")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Before this patch any flow rule which works on hairpin queues
and which has OF_SET_VLAN_VID action was split into 2 flow rules:
- one subflow for Rx,
- one subflow for Tx.
OF_SET_VLAN_VID action was always placed in the Tx subflow.
Assuming a flow rule which matches VLAN traffic and has both
OF_SET_VLAN_VID action, and MODIFY_FIELD action on VLAN VID,
but no OF_PUSH_VLAN action, the following happened:
- MODIFY_FIELD action was placed in Rx subflow,
- OF_SET_VLAN_VID action was placed in Tx subflow,
- OF_SET_VLAN_VID action is internally compiled to a header modify
command.
This caused the following issues:
1. Since OF_SET_VLAN_VID was placed in Tx subflow, 2 header modify
actions were allocated. One for Rx and one for Tx.
2. If OF_SET_VLAN_VID action was placed before MODIFY_FIELD on VLAN VID,
the flow rule executed header modifications in reverse order.
MODIFY_FIELD actions were executed first in the Rx subflow and
OF_SET_VLAN_VID was executed second in Tx subflow.
This patch fixes this behavior by not splitting hairpin flow rules
if OF_SET_VLAN_VID action is used without OF_PUSH_VLAN.
On top of that, if flow rule is split, the OF_SET_VLAN_VID action
is not moved to Tx subflow (for flow rules mentioned above).
Fixes: 210008309b ("net/mlx5: fix VLAN push action on hairpin queue")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
On context initialization the reparse capability support for NIC and FDB
tables was required for allowing HWS. This caused a problem for devices
that only want to run NIC steering and are not the esw-manager, for which
FDB reparse is disabled. Modify the check to require FDB reparse only for
the esw-manager.
Fixes: b0290e56dd ("net/mlx5/hws: add context object")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Fix a segmentation fault when a user requests to allocate an HWS action
while the current device doesn't support HWS.
Fixes: f8c8a6d844 ("net/mlx5/hws: add action object")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When the maximum action combination in RX is used, we can get a segfault
due to an incorrect maximum array size define.
This bug can happen on RX/TX or FDB in the most complex cases.
The current maximum was set to 7, but the actual maximums are:
Max TX: 8, Max RX: 10, Max FDB: 9
Fixes: f8c8a6d844 ("net/mlx5/hws: add action object")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The sysconf call can return a negative value (-1) on failure, which will
lead posix_memalign to fail. This is not a realistic case; it was found
by the static checkers.
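For illustration only, a minimal sketch of the defensive pattern
(hypothetical helper, not the actual patch):
  #include <stdlib.h>
  #include <unistd.h>

  static void *
  alloc_page_aligned(size_t size)
  {
          long page_size = sysconf(_SC_PAGESIZE);
          void *buf;

          /* sysconf() may return -1 on failure; do not pass it on as the
           * alignment argument of posix_memalign().
           */
          if (page_size < 0)
                  return NULL;
          if (posix_memalign(&buf, (size_t)page_size, size) != 0)
                  return NULL;
          return buf;
  }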
Coverity issue: 381674
Fixes: 3eb748869d ("net/mlx5/hws: add send layer")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
There is a check for the configuration match between
all the Rx queues shared among multiple ports in DPDK.
This check ensures that the configuration is the same.
The issue is this check takes place before the queue
is released and configured again in case of reconfiguration.
That leads to checking against the old configuration and
preventing the shared Rx queue to start properly.
Release the old configuration and prepare a new Rx queue
before checking that its parameters match the config.
Fixes: 09c2555303 ("net/mlx5: support shared Rx queue")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When testpmd quits with mlx5 avail_thresh enabled, an rte timer handler
is scheduled with a delay to reconfigure the Rx queue and re-arm this
event. However, at the same time, testpmd is destroying the Rx queues.
This is never a valid use case for mlx5 avail_thresh. Before testpmd
quits, the user should disable the avail_thresh configuration so the
events are not handled. This is documented in the mlx5 driver guide.
To avoid the crash in such a use case, check the port status: if it is
not RTE_PORT_STARTED, don't process the avail_thresh event.
Fixes: f41a5092e6 ("app/testpmd: add host shaper command")
Cc: stable@dpdk.org
Signed-off-by: Spike Du <spiked@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The macro HAVE_MLX5_HWS_SUPPORT was introduced for HWS only, and HWS was
not supported on Windows. So the macro HAVE_MLX5_HWS_SUPPORT should only
surround the code which HWS uses, and should not enclose the code shared
by Linux and Windows.
Fixes: 22681deead ("net/mlx5/hws: enable hardware steering")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
There's an issue introduced by the change splitting item matcher and
value translation: the matcher mask value for color is not set correctly
in meter policy flow creation.
This patch fixes this by providing the correct color mask.
Fixes: cd4ab74206 ("net/mlx5: split flow item matcher and value translation")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MLX5 PMD counted each mempool subscribe invocation. The PMD expected
that the mempool subscription would be deleted after the mempool counter
dropped to 0. However, the current PMD design unsubscribes mempool
callbacks only once.
As a result, the PMD destroyed mlx5_common_device but kept the shared RX
subscription callback. EAL tried to activate that callback and crashed.
The patch removes the mempool subscriptions counter.
The PMD registers the mempool subscription only once. An attempt to
register an existing subscription returns EEXIST.
Also, the PMD now expects to remove the subscription when the mempool
unsubscribe is activated.
Fixes: 8ad97e4b32 ("common/mlx5: fix multi-process mempool registration")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Since [1], the flow API forbids usage of direction attributes in transfer
flow rules. This patch adapts the mlx5 PMD to this requirement.
From this patch on, flow rule validation in the mlx5 PMD will reject
transfer flow rules with any of the direction attributes set
(i.e. 'ingress' or 'egress').
As a result flow rule can only have one of 'ingress', 'egress' or
'transfer' attributes set.
This patch also changes the following:
- Control flow rules used in FDB are 'transfer' only.
- Checks which assumed that 'transfer' can be used
with 'ingress' and 'egress' are reduced to just checking
for direction attributes, since all attributes are exclusive.
- Flow rules for updating flow_tag are created for both ingress
and transfer flow rules which have MARK action.
- Moves mlx5_flow_validate_attributes() function from generic flow
implementation to legacy Verbs flow engine implementation,
since it was used only there. Function is renamed accordingly.
Also removes checking if E-Switch uses DV in that
function, since if legacy Verbs flow engine is used,
then that is always not the case.
[1] commit bd2a4d4b2e ("ethdev: forbid direction attribute in
transfer flow rules")
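For illustration only, a hedged sketch using the public rte_flow
attributes (not a code change included in this patch):
  #include <rte_flow.h>

  /* A transfer rule must leave the direction bits clear; exactly one of
   * 'ingress', 'egress' or 'transfer' may be set.
   */
  static const struct rte_flow_attr transfer_attr = {
          .group = 1,
          .transfer = 1,
          /* adding .ingress = 1 or .egress = 1 here is now rejected by
           * mlx5 flow rule validation
           */
  };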
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an ASO action (AGE, CT) is used with the sample action in one
E-switch mirror flow, the ASO action after the sample action is not
supported due to a hardware limitation.
This patch adds the checking for this validation and rejects flows with
an ASO action after sample.
Fixes: f935ed4b64 ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The metadata register C value was lost in FDB egress while doing the
flow sampler on ConnectX-5. The FDB direction checking was decided by the
source port at flow creation. If an additional port item was added in the
flow match, then the actual source port was changed.
This patch adds the checking for the port id item:
RTE_FLOW_ITEM_TYPE_PORT_ID, RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
and RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR,
then updates FDB egress checking and the source vport metadata
from the port item, also updates the PUSH VLAN, POP VLAN and
flow sampler action validation.
Fixes: 04c0d3f20f ("net/mlx5: fix port matching in sample flow rule")
Fixes: 255b8f86eb ("net/mlx5: fix E-Switch egress mirror flow validation")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
MLX5_FLOW_ACTION flags are used as uint64_t now, but some old flags are
not defined as 64-bit. So if they are type cast to uint64_t after bitwise
operations, the high 32-bit data might be incorrect.
E.g. currently MLX5_FLOW_ACTION_DROP is defined as 0x1u; when it is used
like:
(action_flags & ~MLX5_FLOW_ACTION_DROP)
action_flags is uint64_t, so (~MLX5_FLOW_ACTION_DROP) will be cast to
uint64_t as well, but its high 32 bits will be all 0s. This makes the
result not as expected.
This patch fixes this by defining all action flags as a 64-bit data type.
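For illustration only, a self-contained example of the integer promotion
issue (generic macros, not the mlx5 definitions):
  #include <stdint.h>
  #include <stdio.h>

  #define ACTION_DROP_32 0x1u          /* old style: 32-bit constant */
  #define ACTION_DROP_64 UINT64_C(0x1) /* fixed: 64-bit constant */

  int main(void)
  {
          uint64_t action_flags = (UINT64_C(1) << 40) | UINT64_C(0x1);

          /* ~0x1u is computed in 32 bits and zero-extended, so the high
           * 32 bits of the mask are 0 and bit 40 is lost.
           */
          printf("%d\n", (action_flags & ~ACTION_DROP_32) != 0); /* 0 */
          /* With a 64-bit flag the complement keeps the high bits. */
          printf("%d\n", (action_flags & ~ACTION_DROP_64) != 0); /* 1 */
          return 0;
  }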
Fixes: 4b7bf3ffb4 ("net/mlx5: support yellow in meter policy validation")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
There is a by-design assumption in the code that the global counter
rings can contain all the port counters.
So, enqueuing to these global rings should always succeed.
Add assertions to help debug this assumption.
In addition, change mlx5_hws_cnt_pool_put() function to return void due
to those assumptions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
Add assertions to help debug in case of counter double alloc/free.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
The __hws_cnt_r2rcpy() function copies elements from one zero-copy ring
to another zero-copy ring in place.
This routine needs to consider the situation where both the source and
the destination addresses may be wrapped.
It uses 4 different "n" local variables to manage it:
- n: Number of elements to copy in total.
- n1: Number of elements to copy from ptr1, it is the minimal value
from source/dest n1 field.
- n2: Number of elements to copy from src->ptr1 to dst->ptr2 or from
src->ptr2 to dst->ptr1, this variable is 0 when both source and
dest n1 field are equal.
- n3: Number of elements to copy from src->ptr2 to dst->ptr2.
The function copies the first n1 elements. If n2 isn't zero, it copies
more elements and then checks whether n3 is zero.
This logic is wrong since n3 may be bigger than zero even when n2 is
zero. This scenario commonly happens in counters when the internal mlx5
service thread copies elements from the reset ring into the reuse ring.
This patch changes the function to copy n3 regardless of n2 value.
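For illustration only, a simplified, self-contained sketch of the
corrected copy order (hypothetical structure and field names, assuming
both views cover exactly 'n' elements; not the exact mlx5 code):
  #include <stdint.h>
  #include <string.h>

  struct zcp_view {
          void *ptr1;   /* first segment */
          uint32_t n1;  /* elements available in ptr1 */
          void *ptr2;   /* wrap-around segment */
  };

  static void
  r2r_copy(struct zcp_view *dst, const struct zcp_view *src,
           uint32_t n, size_t esz)
  {
          uint32_t n1 = dst->n1 < src->n1 ? dst->n1 : src->n1;
          uint32_t n2 = (dst->n1 < src->n1 ? src->n1 : dst->n1) - n1;
          uint32_t n3 = n - n1 - n2;

          memcpy(dst->ptr1, src->ptr1, (size_t)n1 * esz);
          if (n2 != 0) {
                  /* One side already wrapped to ptr2 while the other one
                   * is still inside ptr1.
                   */
                  if (src->n1 > dst->n1)
                          memcpy(dst->ptr2,
                                 (uint8_t *)src->ptr1 + (size_t)n1 * esz,
                                 (size_t)n2 * esz);
                  else
                          memcpy((uint8_t *)dst->ptr1 + (size_t)n1 * esz,
                                 src->ptr2, (size_t)n2 * esz);
          }
          /* The fix: copy the ptr2-to-ptr2 tail even when n2 == 0. */
          if (n3 != 0)
                  memcpy((uint8_t *)dst->ptr2 +
                         (size_t)(n1 + n2 - dst->n1) * esz,
                         (uint8_t *)src->ptr2 +
                         (size_t)(n1 + n2 - src->n1) * esz,
                         (size_t)n3 * esz);
  }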
Fixes: 4d368e1da3 ("net/mlx5: support flow counter action for HWS")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
The HWS counter has 2 different identifiers:
1. Type "cnt_id_t" which represents the counter inside caches and in
the flow structure. This index cannot be zero and is mostly called
"cnt_id".
2. Internal index, the index in counters array with type "uint32_t".
mostly it is called "iidx".
The second ID is calculated from the first using "mlx5_hws_cnt_iidx()"
function.
When a direct counter is allocated, if the queue cache is not empty, the
counter represented by cnt_id is popped from the cache. This counter may
be invalid according to the query_gen field. Thus, the "iidx" is parsed
from cnt_id and if it is valid, it is used to update the fields of the
counter structure.
When this counter is invalid, the whole cache is flushed and new counters
are fetched into the cache. After fetching, another counter represented
by cnt_id is taken from the cache.
Unfortunately, for updating fields like "in_used" or "age_idx", the
function may wrongly use the old "iidx" coming from the invalid cnt_id.
Update the "iidx" in case an invalid counter is popped from the cache.
Fixes: 4d368e1da3 ("net/mlx5: support flow counter action for HWS")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
The counter management structure has an array of counter pools. This
array is invalid at management structure initialization and grows on
demand.
The resizing includes:
1. Allocate memory for the new size.
2. Copy the existing data to the new memory.
3. Move the pointer to the new memory.
4. Free the old memory.
The third step can be performed before the second one, and the compiler
may do that, but another thread might read the pointer before the copy
and read invalid data or even crash.
This patch allocates memory for this array once in management structure
initialization and limits the number of counters to 16M.
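For illustration only, a hedged sketch of the one-time allocation
(generic names and pool size, not the mlx5 layout):
  #include <stdint.h>
  #include <stdlib.h>

  #define MAX_COUNTERS      (UINT32_C(16) << 20) /* 16M counter ceiling */
  #define COUNTERS_PER_POOL 512u                 /* assumed for the sketch */

  struct cnt_pool; /* opaque here */

  struct cnt_mng {
          struct cnt_pool **pools; /* allocated once, pointer never moves */
          uint32_t n_valid;        /* pools in use, grows on demand */
  };

  static int
  cnt_mng_init(struct cnt_mng *mng)
  {
          mng->pools = calloc(MAX_COUNTERS / COUNTERS_PER_POOL,
                              sizeof(*mng->pools));
          mng->n_valid = 0;
          return mng->pools != NULL ? 0 : -1;
  }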
Fixes: 3aa279157f ("net/mlx5: synchronize flow counter pool creation")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
As the queue-based aging API has been integrated[1], the flow aging
action support in HWS steering code can be enabled now.
[1]: https://patchwork.dpdk.org/project/dpdk/cover/
20221026214943.3686635-1-michaelba@nvidia.com/
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The use of rte_atomic functions is deprecated and is not
required in HWS code. HWS refcounts are used only during
control and always under lock.
Fixes: f8c8a6d844 ("net/mlx5/hws: add action object")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In this patch, we remove the necessity of the version files; you don't
need to update these files for each release, you can just remove them.
Suggested-by: Ferruh Yigit <ferruh.yigit@amd.com>
Signed-off-by: Abdullah Ömer Yamaç <omer.yamac@ceng.metu.edu.tr>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>