When the Verbs flow engine was used to create flows, the GRE Verbs spec was put at
the end of the specs list. This created problems for flows matching MPLSoGRE
packets: in the generated specs list the MPLS spec was placed before the GRE
spec, but the Verbs API requires that the MPLS spec be placed at its exact
location in the protocol stack.
This patch fixes that behavior. Space for the GRE Verbs spec is now reserved at
its exact location, the MPLS Verbs spec is likewise inserted at its exact
location, and the GRE spec is filled in after all flow items are parsed.
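A minimal sketch of the placeholder idea, using hypothetical stand-in types
rather than the actual Verbs structures:

#include <stddef.h>

struct spec { int type; int proto; };          /* stand-in for Verbs specs */
enum { ITEM_MPLS = 1, ITEM_GRE, ITEM_END };

static size_t
translate(const int *items, struct spec *out)
{
    size_t n = 0, gre_slot = (size_t)-1;

    for (; *items != ITEM_END; items++) {
        if (*items == ITEM_GRE)
            gre_slot = n++;           /* reserve the slot, fill it later */
        else
            out[n++].type = *items;   /* e.g. MPLS lands after the GRE slot */
    }
    if (gre_slot != (size_t)-1) {
        /* All inner items are parsed; the GRE spec can be completed. */
        out[gre_slot].type = ITEM_GRE;
        out[gre_slot].proto = 0x8847; /* e.g. MPLS unicast ethertype */
    }
    return n;
}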
Fixes: 985b479267 ("net/mlx5: fix GRE protocol type translation for Verbs")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GRE item translation must set the inner protocol value. For that reason the
item is not translated in place while the PMD translation iterates over flow
items; its translation is deferred until after the loop, when all inner types
have been discovered.
Since the PMD does not translate the GRE flow item inside the translation
loop, it must save the GRE item for access outside the loop.
Fixes: 985b479267 ("net/mlx5: fix GRE protocol type translation for Verbs")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit adds the queue and RSS actions. Similar to the jump action,
dynamic ones will be added to the action construct list.
Because the queue and RSS actions in a template should not be destroyed
during port restart, the actions are created with a standalone indirect
table, as indirect actions are. When the port stops, the indirect table is
detached from the action; when the port starts, the indirect table is
attached back to the action.
One more change is made to accelerate action creation. Currently the
mlx5_hrxq_get() function returns the object index instead of the object
pointer. This requires an extra conversion from index to object by calling
mlx5_ipool_get() in most cases, and that extra conversion hurts multi-thread
performance since mlx5_ipool_get() takes the global lock internally. As the
hash Rx queue object itself also contains the index, returning the object
directly achieves better performance without the global lock.
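A schematic before/after of the mlx5_hrxq_get() change, with hypothetical
stand-in types and a toy pool in place of the driver structures:

#include <stdint.h>
#include <stddef.h>

struct hrxq { uint32_t idx; };   /* the object still carries its index */

static struct hrxq pool[16];

/* Index-to-object lookup; in the PMD this takes a global ipool lock. */
static struct hrxq *
ipool_get(uint32_t idx)
{
    return idx < 16 ? &pool[idx] : NULL;
}

/* Before: return the index, forcing callers through the locked lookup. */
static struct hrxq *
get_via_index(void)
{
    uint32_t idx = pool[0].idx;
    return ipool_get(idx);
}

/* After: return the pointer directly; the index stays reachable via
 * obj->idx, so the locked conversion disappears from the hot path. */
static struct hrxq *
get_via_pointer(void)
{
    return &pool[0];
}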
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To optimize the datapath, the mlx5 PMD checked for the mark action on flow
creation, flagged the possible destination Rx queues (through queue/RSS
actions), and then enabled the mark action logic only for the flagged Rx
queues.
The mark action did not work if no queue/RSS action was in the same flow,
even when the user used multi-group logic to manage the flows. So, if the
mark action was performed in group X and the packet moved to group Y > X,
where it was forwarded to Rx queues, the software did not get the mark ID
into the mbuf.
Flag the Rx datapath to report the mark action for any queue when the driver
detects the first mark action after the dev_start operation.
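A minimal sketch of the idea, with hypothetical names standing in for the
PMD's per-port state and the mbuf layout:

#include <stdbool.h>
#include <stdint.h>

struct mbuf { uint32_t mark; uint64_t ol_flags; };
#define MBUF_F_RX_FDIR_ID (1ULL << 0)   /* hypothetical flag bit */

static bool port_mark_enabled; /* set once, on the first mark action */

/* Flow-creation path: any mark action seen after dev_start enables
 * mark reporting for every queue, not only flagged destination queues. */
static void
on_mark_action(void)
{
    port_mark_enabled = true;
}

/* Rx path: copy the mark into the mbuf for any queue once enabled. */
static void
rx_fill_mark(struct mbuf *m, uint32_t mark_id)
{
    if (port_mark_enabled) {
        m->mark = mark_id;
        m->ol_flags |= MBUF_F_RX_FDIR_ID;
    }
}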
Fixes: 8e61555657 ("net/mlx5: fix shared RSS and mark actions combination")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an application creates several flows to match on a GRE tunnel without
explicitly specifying the GRE protocol type value in the flow rules, the PMD
translates that to a zero mask.
rdma-core cannot distinguish between different inner flow types and
produces identical matchers for each zero mask.
The patch extracts the inner header type from the flow rule and forces it
into the GRE protocol type if the application did not specify any.
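A sketch of the derivation, assuming the standard GRE protocol ethertypes and
hypothetical inner-type bookkeeping:

#include <stdint.h>

enum inner { INNER_NONE, INNER_IPV4, INNER_IPV6, INNER_MPLS };

static uint16_t
gre_protocol(uint16_t spec, uint16_t mask, enum inner inner)
{
    if (mask != 0)
        return spec;          /* the application set it explicitly */
    switch (inner) {          /* force the type from the inner header */
    case INNER_IPV4: return 0x0800;
    case INNER_IPV6: return 0x86DD;
    case INNER_MPLS: return 0x8847;
    default:         return 0;
    }
}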
Fixes: 84c406e745 ("net/mlx5: add flow translate function")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The maximum available flow priority was discovered using the Verbs API
regardless of the selected flow engine. This required some Verbs
objects to be initialized in order to use the DevX engine. Make priority
discovery an engine method and implement it for DevX using its API.
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
During the device spawn process, the mlx5 PMD queried the available flow
priorities by calling mlx5_flow_discover_priorities, queried whether the DR
drop action was supported on the root table by calling the
mlx5_flow_discover_dr_action_support routine, and queried the availability
of metadata register C by calling mlx5_flow_discover_mreg_c.
These functions created test flows to get the supported fields and destroyed
the test flows at the end. The test flows in the first two functions were
created on the root table.
If the device was spawned with multiple representors, these test flows
were created and destroyed on each representor as well. The above
operations took a significant amount of init time during the device
spawn.
This patch optimizes the device discovery functions: if a device with
multiple representors (VF/SF) is being spawned, the priority, drop action,
and metadata register support checks are done only once and the check
results are shared among all representors.
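A sketch of the sharing, assuming a hypothetical shared-context struct that
caches the probe results:

#include <stdbool.h>

struct shared_ctx {
    bool discovered;          /* probes already performed once */
    int flow_priority;
    bool dr_drop_supported;
};

static int probe_priority(void) { return 16; }   /* placeholder probe */
static bool probe_dr_drop(void) { return true; } /* placeholder probe */

static void
spawn_representor(struct shared_ctx *sh)
{
    if (!sh->discovered) {
        /* Test flows are created and destroyed only for the first
         * representor; later spawns reuse the cached results. */
        sh->flow_priority = probe_priority();
        sh->dr_drop_supported = probe_dr_drop();
        sh->discovered = true;
    }
    /* Use sh->flow_priority / sh->dr_drop_supported directly. */
}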
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also update some struct names to have the 'rte_eth' prefix.
All internal components are switched to using the new names.
Syntax is fixed on the lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
To detect the number of Verbs flow priorities, the PMD tries to create Verbs
flows at different priorities, while Verbs is not designed to support
ports larger than 255.
When DevX is supported by the kernel driver, 16 Verbs priorities must be
supported, so there is no need to create Verbs flows.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Create the shared device context in the common area and add it as a field of
the common device.
Use this device context in all drivers and remove the ctx field from
their private structures.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Indirect actions should be used to implement shared counters.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A recent update introduced the misc5 matcher to match VXLAN header extra
fields. However, ConnectX-5 doesn't support misc5 for UDP ports different
from VXLAN's standard one (4789).
We need to fall back to the previous approach and use the legacy misc
matcher if a non-standard UDP port is recognized in a VXLAN flow.
Fixes: 630a587bfb ("net/mlx5: support matching on VXLAN reserved field")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The RSS hash types defined in the API do not support setting the L4 proto
type (TCP or UDP) without setting the L3 proto. For example, ETH_RSS_TCP
is defined as
(ETH_RSS_NONFRAG_IPV4_TCP | \
 ETH_RSS_NONFRAG_IPV6_TCP | \
 ETH_RSS_IPV6_TCP_EX).
The L3 proto of the RSS hash type may be different from the one defined
in the pattern, for example:
testpmd> flow create ... / ipv4 / tcp / end actions rss types ipv6-tcp-ex end / end
If the RSS hash type also includes an L4 proto type, as in the above example,
the selection flags for the Rx hash are currently set with SPORT/DPORT
without setting SRC/DST IP. As this combination is not supported, it does
not match any of the pre-created TIRs of the indirect RSS action
and the flow creation fails.
The fix is to prevent setting the selection flags for the Rx hash with
SPORT/DPORT without setting SRC/DST IP. It applies non-RSS processing to
the received packets. In the case of an indirect RSS action, it will match
the MLX5_RSS_HASH_NONE pre-created TIR.
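A minimal sketch of the guard, with hypothetical bits standing in for the
Verbs Rx hash selection flags:

#include <stdint.h>

#define HASH_IP    (1u << 0)  /* SRC/DST IP fields */
#define HASH_PORTS (1u << 1)  /* SPORT/DPORT fields */

/* An L4-only hash is not a supported combination: drop the selection
 * flags entirely so the flow maps to the non-RSS (HASH_NONE) TIR. */
static uint32_t
adjust_hash_fields(uint32_t fields)
{
    if ((fields & HASH_PORTS) && !(fields & HASH_IP))
        return 0;
    return fields;
}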
Fixes: b1d63d8293 ("net/mlx5: support RSS on src or dst fields only")
Fixes: 4a78c88e3b ("net/mlx5: fix Verbs flow tunnel")
Cc: stable@dpdk.org
Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability in rdma-core
is detected by creating a dummy matcher using misc5
when the device is probed.
For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group
zero it relies on the HCA_CAP from FW.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The existing API supports the counter action to count the traffic of a
single flow. The user can share the count action among different flows using
the shared flag and the same counter ID in the count action configuration.
A recent patch [1] introduced the indirect action API.
Using this API, an action can be created as indirect, unattached to any
flow rule. Multiple flows can then be created using the same indirect
action. The new API also supports the query operation on an indirect action.
The new API is more efficient because the driver gets its own handle
for the count action instead of managing a mapping between the user ID
and the driver handle.
Support the create, query and destroy indirect action operations for the
flow count action. The application will use the indirect action query
operation to query this count action.
In the meantime, the old sharing mechanism (with the sharing flag)
continues to be supported, and the user can choose how to share the
counter.
The new indirect action API is only supported with DevX, so sharing a
counter action in Verbs can only be done through the old mechanism.
[1] https://mails.dpdk.org/archives/dev/2020-July/174110.html
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch separates the Tx function declarations into a dedicated header
file, in preparation for removing their implementations from the source file
and as an optional preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5_rxtx.c file contains a lot of Tx burst functions, each of them
performance-optimized for a specific set of requested offloads.
These are generated on the basis of a template function and take
significant time to compile, simply due to the large number of giant
functions generated in the same file, and this compilation is not done
in parallel using multithreading.
Therefore we can split the mlx5_rxtx.c file into several separate files
to allow different functions to be compiled simultaneously.
In this patch, we separate the Rx function declarations into a dedicated
header file, in preparation for removing them from the source file and as
an optional preparation step for further consolidation of the Rx burst
functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In Verbs, an empty VLAN spec is equivalent to a packet without a VLAN layer;
hence, the VLAN item should not be empty and this case is rejected.
However, the case of an ether type of VLAN without a following VLAN item
was not validated, allowing the creation of a flow with an empty
VLAN item.
To fix this issue, a validation was added requiring that an ether type of
VLAN be followed by a VLAN item.
Fixes: 0b1edd21cd ("net/mlx5: refuse empty VLAN flow specification")
Cc: stable@dpdk.org
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the maximal flow priority in a non-root table exposed to the user
is 4. That is not enough for the user to do flow matching by priority,
such as LPM, where for one IPv4 address we need 32 priorities, one for each
bit of the 32-bit mask length.
The PMD will manage 3 sub-priorities per user priority according to L2,
L3 and L4. The internal priority is 16 bits, so the user can use priorities
from 0 to 21843.
Those enlarged flow priorities are only used for ingress or egress
flow groups greater than 0 and for any transfer flow group.
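A sketch of the implied mapping, under the assumption that each user
priority expands to three consecutive internal slots:

#include <stdint.h>

/* Hypothetical fold of user priority p and sub-priority s (0..2 for
 * L2/L3/L4) into one 16-bit internal priority. */
static uint16_t
internal_prio(uint16_t user_prio, unsigned int sub)
{
    /* 21843 * 3 + 2 = 65531, which still fits in 16 bits. */
    return (uint16_t)(user_prio * 3 + sub);
}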
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files are
for drivers only and should be private to DPDK, and not installed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
Use the macro HAVE_INFINIBAND_VERBS_H to successfully compile files both
under Linux and Windows (or any non-Linux OS in general). Under Windows
this macro:
1. Hides Verbs references.
2. Exposes required DV structs that are under ifdefs related to rdma-core.
Linux code under definitions such as #ifdef HAVE_IBV_FLOW_DV_SUPPORT is
required unconditionally under Windows; however, those definitions are
never effective without rdma-core presence. Therefore, update the #ifdef
condition to also consider HAVE_INFINIBAND_VERBS_H (a macro that is
undefined when building without the rdma-core library).
For example:
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If xmeta mode 1 is enabled and a flow is created with RSS and mark actions,
rdma-core failed to create the RQT due to a wrong queue definition. This was
caused by mixed flow creation in the thread-specific flow workspace.
This patch introduces nested flow workspaces (context data): each flow
uses a dedicated flow workspace, the workspace is popped and restored when
the nested flow creation is done, and the original flow continues with its
original flow workspace. The total number of thread-specific flow workspaces
should be 2, since there is only one nested flow creation scenario so far.
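A minimal sketch of the per-thread workspace stack, with hypothetical
contents:

#include <stddef.h>

#define WS_DEPTH 2   /* only one nesting scenario so far */

struct flow_ws { int rss_queues[16]; };   /* intermediate flow data */

static __thread struct flow_ws ws_pool[WS_DEPTH];
static __thread int ws_top = -1;

/* Each (possibly nested) flow creation gets a dedicated workspace. */
static struct flow_ws *
ws_push(void)
{
    return ws_top + 1 < WS_DEPTH ? &ws_pool[++ws_top] : NULL;
}

/* Popping restores the outer flow's workspace untouched. */
static void
ws_pop(void)
{
    if (ws_top >= 0)
        ws_top--;
}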
Fixes: 8bb81f2649 ("net/mlx5: use thread specific flow workspace")
Fixes: 3ac3d8234b ("net/mlx5: fix index when creating flow")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
As the shared RSS action is shared by multiple flows, the action
is created as a global standalone action and managed only by the
relevant shared action management functions.
Currently, hrxqs are created either by the shared RSS action or by a
general queue action. Hrxqs created by the shared RSS action should
also be released only with the shared RSS action; it is not correct to
release the shared RSS action hrxqs the way general queue actions do
in flow destroy.
This commit adds a new fate action type for the shared RSS action to
handle the shared RSS action hrxq release correctly.
Fixes: e1592b6c4d ("net/mlx5: make Rx queue thread safe")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A recent patch used a local string array as input for the function
rte_flow_error_set().
This stack memory may later be used by other code sections,
overwriting the desired error string.
This patch implements an error string for the specific case
requested: an ICMP item not supported in the Verbs flow engine.
Fixes: d51475d1bf ("net/mlx5: support item type error message in flow Verbs")
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit applies the cache linked list to the Rx queue to make it thread
safe.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit creates the global drop action for flows instead of
maintaining it at flow insertion time. The unique global drop action
makes it thread safe.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
As part of multi-thread flow support, this patch moves the flow intermediate
data to be thread-specific, making it a flow workspace. The workspace is
allocated per thread and destroyed along with the thread life-cycle.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When creating a flow, the rule itself might not take effect
immediately once the function call returns with success. It takes
some time for the steering to synchronize with the hardware.
If the application wants a subsequently sent packet to hit the flow,
this flow sync API can be used to clear the steering
HW cache and enforce that the next packet hits the latest rules.
For Tx, usually the NIC Tx domain and/or the FDB domain should be
synchronized, depending on the domain in which the flow was created.
The application could also synchronize the NIC Rx and/or the
FDB domain for ingress packets.
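A usage sketch, assuming the API introduced here is the
rte_pmd_mlx5_sync_flow() exported in rte_pmd_mlx5.h with per-domain bit
masks:

#include <stdint.h>
#include <rte_pmd_mlx5.h>

/* After inserting an egress rule on the port, make sure the next
 * transmitted packet observes it by syncing the NIC Tx and FDB domains. */
static int
sync_after_flow_create(uint16_t port_id)
{
    return rte_pmd_mlx5_sync_flow(port_id,
                                  MLX5_DOMAIN_BIT_NIC_TX |
                                  MLX5_DOMAIN_BIT_FDB);
}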
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Update the flow Verbs error message to "item type X not supported"
when an item is not supported,
instead of the generic error message "item not supported".
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The fields ``has_vlan`` and ``has_more_vlan`` were added to rte_flow by
patch [1].
Using these fields, the application can match all the VLAN options with a
single flow: any, VLAN only, and non-VLAN only.
Add support for these fields. Along the way, add support for matching
QinQ packets.
The VLAN/QinQ limitations are listed in the driver documentation.
[1] https://patches.dpdk.org/patch/80965/
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Implement modification of the hashed table of the Rx queue object (see
mlx5_hrxq_modify()). This implementation relies on the capability to
modify the TIR object via the DevX API; i.e. the current implementation
doesn't support Verbs HW object operations. The functionality to modify the
hashed table of the Rx queue object is a prerequisite for implementing
rte_flow_shared_action_update() for the shared RSS action in the mlx5 PMD.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Counter extend memory was allocated for non-batch counters to store the
extra DevX object. Currently, for a non-batch counter which does not
support aging, the list entry in the generic counter struct is used only
while the counter is free in the free list, and the bytes field in the
struct is used only while the counter is allocated and in use.
In this case, the DevX object can be stored in the generic counter struct,
in a union with the entry memory while the counter is allocated and in a
union with the bytes field while the counter is free.
The pool type is also not needed: as non-fallback mode only has generic
counters and aging counters, a single bit indicating whether the pool is
aged is enough.
This eliminates the counter extend info struct and saves memory.
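A sketch of the state-dependent overlay, with hypothetical field names; the
point is that each union member is dead in exactly the state where the DevX
object needs a home:

#include <stdint.h>
#include <sys/queue.h>

struct devx_obj;   /* opaque stand-in for the DevX counter object */

struct flow_counter {
    union {
        TAILQ_ENTRY(flow_counter) next;   /* used only while free */
        struct devx_obj *dcs_when_active; /* DevX obj while in use */
    };
    union {
        uint64_t bytes;                   /* valid only while in use */
        struct devx_obj *dcs_when_free;   /* DevX obj while free */
    };
    uint64_t hits;
};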
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The shared counters save the counter index in a three-level table. As the
three-level table now supports multi-thread operations, the shared
counters can take advantage of the table to support multi-thread as well.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, counter operations are not thread safe, as the counter
pools' array resize is not protected.
This commit protects the container pools' array resize with a spinlock.
The original counter pool statistics memory allocation is moved to the
host thread in order to minimize the critical section, since the pool
statistics memory is required only at query time.
The container pools' array should be resized by the user threads, yet the
new pool may be used by other rte_flow APIs before the host-thread resize
is done; if the pool is not yet saved to the pools' array, the specified
counter memory will not be found, as the pool is not saved to the counter
management pool array. The pool's raw statistics memory will be filled in
the host thread.
The shared counters will be protected in another commit.
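A minimal sketch of the protected resize, assuming DPDK's spinlock and
allocator APIs over a hypothetical container layout:

#include <rte_spinlock.h>
#include <rte_malloc.h>

struct cnt_container {
    rte_spinlock_t resize_sl;
    unsigned int n_pools;
    void **pools;
};

/* Grow the pools array under the spinlock so concurrent counter
 * allocations always observe a consistent array. */
static int
container_add_pool(struct cnt_container *c, void *pool)
{
    void **p;

    rte_spinlock_lock(&c->resize_sl);
    p = rte_realloc(c->pools, (c->n_pools + 1) * sizeof(*p), 0);
    if (p == NULL) {
        rte_spinlock_unlock(&c->resize_sl);
        return -1;
    }
    p[c->n_pools++] = pool;
    c->pools = p;
    rte_spinlock_unlock(&c->resize_sl);
    return 0;
}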
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A flow counter which was allocated by a batch API couldn't be assigned
to a flow in the root table (group 0) with old rdma-core versions.
Hence, a root table flow counter required a PMD mechanism to manage
counters which were allocated singly.
Currently, batch counters are supported in the root table with a new
rdma-core version that includes the MLX5_FLOW_ACTION_COUNTER_OFFSET
enum and a kernel driver that includes the
MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum.
When the PMD uses the rdma-core API to assign a batch counter to a root
table flow using an invalid counter offset, it gets an error only
if batch counter assignment for the root table is supported.
Performing this trial at initialization time helps to detect the
support.
Using the above trial, if the support is present, remove the management of
the single counter container from the fast counter mechanism. Otherwise,
move the counter mechanism to fallback mode.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Instead of using special memory to indicate a shared counter, this patch
optimizes the code to use the counter handler's reserved memory to
indicate it. A counter index with MLX5_CNT_SHARED_OFFSET set means a
shared counter.
This patch is also a preparation for a new adjustment to use batch
counters as shared counters.
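A sketch of the encoding, assuming a reserved high bit in the 32-bit counter
index (the constant name follows the commit; its exact value is
illustrative):

#include <stdint.h>
#include <stdbool.h>

#define CNT_SHARED_OFFSET 0x80000000u  /* reserved shared-counter bit */

static inline uint32_t
cnt_make_shared(uint32_t idx)
{
    return idx | CNT_SHARED_OFFSET;
}

static inline bool
cnt_is_shared(uint32_t idx)
{
    return (idx & CNT_SHARED_OFFSET) != 0;
}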
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Commit [1] introduced a different container for the aging counter
pools. In order to save container memory, the aging counter pools
can be located in the general pool container.
This patch locates the aging counter pools in the general pool
container and removes the aging container management.
[1] commit fd143711a6 ("net/mlx5: separate aging counter pool range")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds to the MLX5 PMD support for matching on IPv4
fragmented and non-fragmented packets, using the IPv4 header
fragment_offset field.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Move the allocations of similar Rx queue drop action resources from the
Verbs module to a shared location.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Rx queue drop creation into both Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Move the allocations of similar Rx hash queue object resources from the
DevX and Verbs modules to a shared location.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Rx hash queue creation into both Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Rx indirection table object creation into both Verbs and DevX
modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
PMD flow priority is different from application flow priority: flow
rules with higher match granularity are assigned higher PMD priority. The
PMD also internally splits RSS flows according to the flow RSS layer.
The final PMD flow rule priority is derived from the network level of the
last match item, after the PMD adjusts the flow rule, where an L4 match
gets the highest priority and L2 the lowest.
The patch adjusts the tunnel flow rule priority calculation for PMDs
running on the Verbs API, and introduces the MLX5_TUNNEL_PRIO_GET macro.
Fixes: 4a78c88e3b ("net/mlx5: fix Verbs flow tunnel")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
Several source files include Verbs header files as in (1). These source
files will not compile under non-Linux operating systems. This commit
removes this inclusion in two cases:
Case 1: There is no usage of ibv_* or mlx5dv_* symbols in the source
file, so the inclusion in (1) can be safely removed.
Case 2: Verbs symbols are used. Note that the inclusion in (1) already
appears in file linux/mlx5_glue.h (which represents the interface
to the rdma-core library). Therefore, replace (1) in the source file
with (2). Under non-Linux operating systems, file mlx5_glue.h will not
include (1).
(1)
#include <infiniband/verbs.h>
#include <infiniband/mlx5dv.h>
(2)
#include <mlx5_glue.h>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The functions mlx5_flow_adjust_priority() and
mlx5_flow_discover_priorities() are Verbs based. Move them from file
mlx5_flow.c to file mlx5_flow_verbs.c.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
As part of the effort to support DPDK on Windows and other OSes,
rename IB-related names to generic names.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, when allocating a new counter, the whole container pool list is
looped over to get a free counter.
In a case with millions of counters allocated and all the pools empty,
allocating a new counter will still loop over the whole container pool
list first, and then allocate a new pool to get a free counter. This
wastes cycles during the pool list traversal.
Adding a global free counter list to the container helps to get free
counters more efficiently.
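A minimal sketch of the fast path, with a hypothetical container layout:

#include <stddef.h>

struct counter { struct counter *next_free; };

struct container {
    struct counter *free_list;   /* global free counters */
    /* ... pool array, scanned only as a fallback ... */
};

/* Consult the container-level free list first: an O(1) hit avoids
 * traversing pools that may all be empty. */
static struct counter *
counter_alloc(struct container *c)
{
    struct counter *cnt = c->free_list;

    if (cnt != NULL) {
        c->free_list = cnt->next_free;
        return cnt;
    }
    return NULL;   /* fall back: allocate a new pool, then retry */
}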
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>