This patch separates the Tx function declarations into a different header
file, in preparation for removing their implementation from the source
file and as an optional preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5_rxtx.c file contains a lot of Tx burst functions, each of which
is performance-optimized for a specific set of requested offloads.
These functions are generated on the basis of a template function and
take significant time to compile, simply due to the large number of giant
functions generated in the same file, and this compilation is not done
in parallel using multithreading.
Therefore we can split the mlx5_rxtx.c file into several separate files
to allow different functions to be compiled simultaneously.
In this patch, we separate the Rx function declarations into a different
header file, in preparation for removing them from the source file and as
an optional preparation step for further consolidation of Rx burst
functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In Verbs, an empty VLAN item is equivalent to a packet without a VLAN
layer; hence, the VLAN item should not be empty and this case is rejected.
However, the case of a VLAN ether type without a following VLAN item was
not validated, allowing the creation of a flow with an empty VLAN item.
To fix this issue, a validation was added requiring that a VLAN ether
type be followed by a VLAN item.
Fixes: 0b1edd21cd ("net/mlx5: refuse empty VLAN flow specification")
Cc: stable@dpdk.org
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the maximal flow priority in non-root tables exposed to the
user is 4, which is not enough for matching flows by priority. For
example, LPM on a single IPv4 address requires 32 priorities, one for
each bit of the 32-bit mask length.
The PMD manages 3 sub-priorities per user priority, according to L2, L3
and L4. The internal priority is 16 bits, so the user can use priorities
from 0 to 21843.
Those enlarged flow priorities are only used for ingress or egress
flow groups greater than 0 and for any transfer flow group.
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files are
for drivers only and should be private to DPDK, and not installed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
Use the macro HAVE_INFINIBAND_VERBS_H to successfully compile files both
under Linux and Windows (or any non-Linux OS in general). Under Windows
this macro:
1. Hides Verbs references.
2. Exposes required DV structs that are under ifdefs related to
rdma-core.
Linux code under definitions such as #ifdef HAVE_IBV_FLOW_DV_SUPPORT is
required unconditionally under Windows; however, those definitions are
never effective without rdma-core presence. Therefore, update the #ifdef
condition to consider HAVE_INFINIBAND_VERBS_H as well (the macro is
undefined when building without an rdma-core library).
For example:
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If xmeta mode 1 is enabled and a flow with RSS and mark actions is
created, rdma-core failed to create the RQT due to a wrong queue
definition. This was caused by mixed flow creation in the thread-specific
flow workspace.
This patch introduces a nested flow workspace (context data): each flow
uses a dedicated flow workspace, which is popped and restored when the
nested flow creation is done, so the original flow continues with its
original flow workspace. The total number of thread-specific flow
workspaces should be 2, since there is only one nested flow creation
scenario so far.
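The push/pop idea, as a minimal sketch (flow_ws, flow_ws_push() and
flow_ws_pop() are illustrative names, not the PMD's actual symbols; the
PMD pre-allocates its two workspaces rather than allocating per push):

#include <stdlib.h>

struct flow_ws {
    struct flow_ws *prev; /* outer flow's workspace when nested */
    /* ... intermediate flow translation data ... */
};

static __thread struct flow_ws *cur_ws; /* thread-specific current workspace */

static struct flow_ws *
flow_ws_push(void)
{
    struct flow_ws *ws = calloc(1, sizeof(*ws));

    if (ws == NULL)
        return NULL;
    ws->prev = cur_ws; /* remember the original flow's workspace */
    cur_ws = ws;
    return ws;
}

static void
flow_ws_pop(void)
{
    struct flow_ws *ws = cur_ws;

    cur_ws = ws->prev; /* original flow continues with its own workspace */
    free(ws);
}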
Fixes: 8bb81f2649 ("net/mlx5: use thread specific flow workspace")
Fixes: 3ac3d8234b ("net/mlx5: fix index when creating flow")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
As the shared RSS action will be shared by multiple flows, the action
is created as a global standalone action and managed only by the
relevant shared action management functions.
Currently, hrxqs are created either by a shared RSS action or by a
general queue action. Hrxqs created by a shared RSS action should also
be released only with the shared RSS action; it is not correct to
release the shared RSS action hrxqs in flow destroy the way general
queue action hrxqs are released.
This commit adds a new fate action type for the shared RSS action to
handle the shared RSS action hrxq release correctly.
Fixes: e1592b6c4d ("net/mlx5: make Rx queue thread safe")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A recent patch uses a local string array as input for the function
rte_flow_error_set().
This stack memory may later be reused by other code sections,
overwriting the desired error string.
This patch implements an error string for the specific case requested,
of an ICMP item not supported in the Verbs flow engine.
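The problematic pattern looks roughly like this (a hypothetical sketch,
not the exact PMD code):

#include <errno.h>
#include <stdio.h>
#include <rte_flow.h>

static int
item_not_supported(struct rte_flow_error *error,
                   const struct rte_flow_item *item)
{
    char msg[64]; /* stack buffer, invalid once this function returns */

    snprintf(msg, sizeof(msg), "item type %d not supported", item->type);
    /* rte_flow_error_set() stores the message pointer as-is, so the caller
     * may later read reused stack memory. A string literal with static
     * storage duration avoids the problem. */
    return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
                              item, msg);
}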
Fixes: d51475d1bf ("net/mlx5: support item type error message in flow Verbs")
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit applies the cache linked list to the Rx queue to make it
thread safe.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit creates the global drop action for flows instead of
maintaining it at flow insertion time. The unique global drop action
makes it thread safe.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
As part of multi-thread flow support, this patch moves the flow
intermediate data into a thread-specific flow workspace. The workspace
is allocated per thread and destroyed along with the thread life-cycle.
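A minimal sketch of the per-thread pattern (illustrative names; the
PMD's actual workspace layout differs):

#include <pthread.h>
#include <stdlib.h>

struct flow_ws {
    int dummy; /* ... intermediate flow translation data ... */
};

static pthread_key_t ws_key;

static void
ws_destroy(void *data)
{
    free(data); /* invoked automatically when the thread exits */
}

static void
ws_key_init(void)
{
    pthread_key_create(&ws_key, ws_destroy);
}

static struct flow_ws *
ws_get(void)
{
    struct flow_ws *ws = pthread_getspecific(ws_key);

    if (ws == NULL) {
        ws = calloc(1, sizeof(*ws)); /* lazily allocated per thread */
        pthread_setspecific(ws_key, ws);
    }
    return ws;
}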
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When creating a flow, the rule itself might not take effect immediately
once the function call returns with success. It takes some time for the
steering logic to synchronize with the hardware.
If the application wants a packet sent afterwards to hit the flow after
it is created, this flow sync API can be used to flush the steering HW
cache and enforce that the next packet hits the latest rules.
For Tx, usually the NIC Tx domain and/or the FDB domain should be
synchronized, depending on which domain the flow is created in.
The application could also synchronize the NIC Rx and/or the FDB domain
for the ingress packets.
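A usage sketch, assuming the API is exposed as rte_pmd_mlx5_sync_flow()
with domain bits in rte_pmd_mlx5.h (verify the names against the
installed header):

#include <stdint.h>
#include <stdio.h>
#include <rte_pmd_mlx5.h>

/* After rte_flow_create() returns, flush the NIC Rx and FDB steering
 * caches so the very next ingress packet hits the new rule. */
static void
sync_ingress_rules(uint16_t port_id)
{
    int ret = rte_pmd_mlx5_sync_flow(port_id,
                                     MLX5_DOMAIN_BIT_NIC_RX |
                                     MLX5_DOMAIN_BIT_FDB);

    if (ret < 0)
        printf("flow sync failed on port %u: %d\n", port_id, ret);
}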
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Update the flow Verbs error message to "item type X not supported" when
an item is not supported, instead of the generic error message
"item not supported".
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The fields ``has_vlan`` and ``has_more_vlan`` were added to rte_flow by
patch [1].
Using these fields, the application can match all the VLAN options with
a single flow: any, VLAN only and non-VLAN only.
Add support for these fields.
Additionally, add support for matching on QinQ packets.
The VLAN/QinQ limitations are listed in the driver documentation.
[1] https://patches.dpdk.org/patch/80965/
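For example, a testpmd sketch (hypothetical port and queue numbers)
steering only packets that carry a VLAN header:

flow create 0 ingress pattern eth has_vlan is 1 / vlan / end actions queue index 1 / end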
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Implement modification of the hash table of the Rx queue object (see
mlx5_hrxq_modify()). This implementation relies on the capability to
modify the TIR object via the DevX API, i.e. the current implementation
doesn't support Verbs HW object operations. The functionality to modify
the hash table of the Rx queue object is a prerequisite for implementing
rte_flow_shared_action_update() for the shared RSS action in the mlx5 PMD.
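A usage sketch from the application side, assuming the 20.11-era shared
action API in rte_flow.h (queue numbers are illustrative):

#include <stdio.h>
#include <rte_flow.h>

/* Retarget an existing shared RSS action to a new queue set; with DevX
 * the PMD can honor this by modifying the TIR of the hrxq in place. */
static void
retarget_shared_rss(uint16_t port_id, struct rte_flow_shared_action *handle)
{
    uint16_t queues[] = { 0, 1, 2, 3 };
    struct rte_flow_action_rss rss = {
        .queue_num = 4,
        .queue = queues,
        /* remaining RSS fields (types, key, level) per application needs */
    };
    struct rte_flow_action update = {
        .type = RTE_FLOW_ACTION_TYPE_RSS,
        .conf = &rss,
    };
    struct rte_flow_error error;

    if (rte_flow_shared_action_update(port_id, handle, &update, &error) != 0)
        printf("update failed: %s\n", error.message ? error.message : "?");
}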
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Counter extend memory was allocated for non-batch counters to save the
extra DevX object. Currently, for a non-batch counter which does not
support aging, the entry in the generic counter struct is used only while
the counter is free in the free list, and the bytes in the struct are
used only while the counter is allocated and in use.
In this case, the DevX object can be saved in the generic counter struct,
in a union with the entry memory when the counter is allocated and in a
union with the bytes when the counter is free.
The pool type is also not needed, as non-fallback mode only has generic
counters and aging counters; a single bit to indicate whether the pool
is aged or not is enough.
This eliminates the counter extend info struct and saves memory.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The shared counters save the counter index in a three-level table. As
the three-level table supports multi-thread operations now, the shared
counters can take advantage of the table to support multi-thread
operations as well.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, counter operations are not thread safe as the counter
pools' array resize is not protected.
This commit protects the container pools' array resize using a spinlock.
The original counter pool statistic memory allocation is moved to the
host thread in order to minimize the critical section, since the pool
statistic memory is required only at query time.
The container pools' array must be resized by the user threads, because
the new pool may be used by other rte_flow APIs before the host thread
resize is done; if the pool were not saved in the pools' array, the
specified counter memory would not be found, as the pool would not be
saved in the counter management pool array. The pool raw statistic
memory is filled in the host thread.
The shared counters will be protected in another commit.
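A minimal sketch of the locking scheme, with illustrative structure and
function names (not the exact PMD code):

#include <stdint.h>
#include <rte_malloc.h>
#include <rte_spinlock.h>

struct cnt_container {
    rte_spinlock_t resize_sl;  /* protects the pools array resize */
    uint32_t n;                /* current capacity of the pools array */
    struct cnt_pool **pools;
};

static void
container_resize(struct cnt_container *cont)
{
    uint32_t resize = cont->n + 8; /* grow by a fixed batch of pools */
    struct cnt_pool **pools;

    rte_spinlock_lock(&cont->resize_sl);
    pools = rte_realloc(cont->pools, resize * sizeof(*pools), 0);
    if (pools != NULL) {
        cont->pools = pools;
        cont->n = resize;
    }
    rte_spinlock_unlock(&cont->resize_sl);
    /* Statistic raw memory for the new pools is allocated later by the
     * host (query) thread, outside this critical section. */
}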
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A flow counter which was allocated by a batch API couldn't be assigned
to a flow in the root table (group 0) with old rdma-core versions.
Hence, a root table flow counter required a PMD mechanism to manage
counters which were allocated singly.
Currently, batch counters are supported in the root table when a new
rdma-core version with the MLX5_FLOW_ACTION_COUNTER_OFFSET enum and a
kernel driver with the
MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum are present.
When the PMD uses the rdma-core API to assign a batch counter to a root
table flow using an invalid counter offset, it should get an error only
if batch counter assignment for the root table is supported.
Performing this trial at initialization time helps to detect the
support.
Using the above trial, if the support is valid, remove the management of
the single counter container from the fast counter mechanism. Otherwise,
move the counter mechanism to fallback mode.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Instead of using special memory to indicate a shared counter, this patch
optimizes by using the counter handler's reserved memory to indicate it.
A counter index with MLX5_CNT_SHARED_OFFSET denotes a shared counter.
This patch also prepares for a new adjustment to use batch counters as
shared counters.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Commit [1] introduced a different container for the aging counter
pools. In order to save container memory, the aging counter pools
can be located in the general pool container.
This patch locates the aging counter pools in the general pool
container and removes the aging container management.
[1] commit fd143711a6 ("net/mlx5: separate aging counter pool range")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds to the MLX5 PMD support for matching on IPv4
fragmented and non-fragmented packets, using the IPv4 header
fragment_offset field.
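For example, hypothetical testpmd sketches (ports and queues are
illustrative). The 0x3fff mask covers the MF bit plus the 13-bit
fragment offset, so spec 0 matches non-fragmented packets and the range
1..0x3fff matches fragmented ones:

flow create 0 ingress pattern eth / ipv4 fragment_offset spec 0 fragment_offset mask 0x3fff / end actions queue index 0 / end
flow create 0 ingress pattern eth / ipv4 fragment_offset spec 0x1 fragment_offset last 0x3fff fragment_offset mask 0x3fff / end actions queue index 1 / end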
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Move the Rx queue drop action's similar resource allocations from the
Verbs module to a shared location.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Rx queue drop creation into both Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Move the Rx hash queue object's similar resource allocations from the
DevX and Verbs modules to a shared location.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Rx hash queue creation into both Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Separate Rx indirection table object creation into both Verbs and DevX
modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
PMD flow priority is different from application flow priority. Flow
rules with higher match granularity are assigned higher PMD priority.
The PMD also splits RSS flows internally according to the flow RSS
layer.
The final PMD flow rule priority is derived from the network level of
the last match item, after the PMD adjusts the flow rule, where an L4
match gets the highest priority and L2 the lowest.
The patch adjusts the tunnel flow rule priority calculation for PMDs
running the Verbs API.
Introduce the MLX5_TUNNEL_PRIO_GET macro.
Fixes: 4a78c88e3b ("net/mlx5: fix Verbs flow tunnel")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
Several source files include Verbs header files as in (1). These source
files will not compile under non-Linux operating systems. This commit
removes this inclusion in two cases:
Case 1: There is no usage of ibv_* or mlx5dv_* symbols in the source
file, so the inclusion in (1) can be safely removed.
Case 2: Verbs symbols are used. Note that the inclusion in (1) already
appears in the file linux/mlx5_glue.h (which represents the interface
to the rdma-core library). Therefore, replace (1) in the source file
with (2). Under non-Linux operating systems, the file mlx5_glue.h will
not include (1).
(1)
#include <infiniband/verbs.h>
#include <infiniband/mlx5dv.h>
(2)
#include <mlx5_glue.h>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The functions mlx5_flow_adjust_priority() and
mlx5_flow_discover_priorities() are Verbs based. Move them from the file
mlx5_flow.c to the file mlx5_flow_verbs.c.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
As part of the effort to support DPDK on Windows and other operating
systems, rename IB-related names to generic ones.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, when allocating a new counter, the whole container pool list
needs to be looped over to get a free counter.
In a case with millions of counters allocated and all pools empty,
allocating a new counter will still loop over the whole container pool
list first, and then allocate a new pool to get a free counter. The
cycles spent in the pool list traversal are wasted.
Adding a global free counter list in the container helps to get free
counters more efficiently.
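A minimal sketch of the free-list idea (illustrative names; the PMD's
actual structures differ):

#include <stddef.h>
#include <sys/queue.h>

struct flow_counter {
    TAILQ_ENTRY(flow_counter) next; /* linkage in the container free list */
    /* ... counter state ... */
};

TAILQ_HEAD(counter_free_list, flow_counter);

/* O(1) allocation: take the head of the container's free list instead of
 * scanning every pool for a free counter. */
static struct flow_counter *
counter_alloc(struct counter_free_list *fl)
{
    struct flow_counter *cnt = TAILQ_FIRST(fl);

    if (cnt != NULL)
        TAILQ_REMOVE(fl, cnt, next);
    return cnt; /* NULL means a new pool must be created */
}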
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
When creating Verbs flows with a counter, a random segfault could also
occur. The reason is that the counter pool memory was not allocated
sufficiently and initialized correctly in the Verbs case.
Since the mlx5_flow_counter array member was moved out of the counter
pool struct, the counter pool memory layout now implicitly contains
mlx5_flow_counter, mlx5_age_param (if the pool is an age pool) and
mlx5_flow_counter_ext (if the pool is a non-batch pool). When allocating
the pool memory, the pool size should be calculated based on the pool
type accordingly.
For the Verbs counter pool, both mlx5_flow_counter and
mlx5_flow_counter_ext need to be taken into account in the pool size,
and the pool type should also be initialized as CNT_POOL_TYPE_EXT.
This patch adds the missing size and type for the Verbs counter pool.
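A sketch of the size calculation described above (struct names as in
the PMD; treat the exact expression as illustrative):

size_t size = sizeof(struct mlx5_flow_counter_pool);

size += MLX5_COUNTERS_PER_POOL * sizeof(struct mlx5_flow_counter);
if (pool_is_aged)
    size += MLX5_COUNTERS_PER_POOL * sizeof(struct mlx5_age_param);
if (!pool_is_batch) /* Verbs pools are non-batch, hence CNT_POOL_TYPE_EXT */
    size += MLX5_COUNTERS_PER_POOL * sizeof(struct mlx5_flow_counter_ext);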
Fixes: 8d93c830e4 ("net/mlx5: modify ext-counter memory allocation")
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The design of the counter container resize used a double buffer
algorithm in order to synchronize between the query thread and the
control thread.
When the control thread detected a resize need, it created a new, bigger
buffer for the counter pools in a new container and changed the
container index atomically.
In case the query thread had not detected the previous resize before a
new one was needed, the control thread returned EAGAIN to a flow
creation API using a COUNT action.
The rte_flow API doesn't allow unblocked commands and doesn't expect to
get an EAGAIN error type.
So, when a lot of flows were created between 2 different periodic
queries, 2 different resizes might be attempted, causing an EAGAIN
error.
This behavior could fail flow creations.
Change the synchronization to use a lock instead of the double buffer
algorithm.
The critical section of this lock is very small, so the flow insertion
rate should not decrease.
Fixes: ebbac312e4 ("net/mlx5: resize a full counter container")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When creating a flow rule with zero UDP specs, like the following:
eth / ipv4 / udp / vxlan / end
such a rule will match all UDP packets.
This changes the behavior to match the DV flow engine, which
automatically sets the match on the relevant outer UDP port if the user
didn't specify any.
Fixes: 84c406e745 ("net/mlx5: add flow translate function")
Cc: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The flow_verbs_translate() function accumulates hash fields while
iterating through the flow items (SRC_IPV4, DST_IPV4, SRC_IPV6,
DST_IPV6, SRC_PORT_TCP, DST_PORT_TCP, SRC_PORT_UDP, DST_PORT_UDP).
Before this commit the dev_flow handle structure was reused in each new
flow_verbs_translate() call; however, the dev_flow->hash_fields variable
was not reset before each call. As a result, hash_fields from previous
calls remained present in the current flow, which led to invalid
combinations (e.g. simultaneous IPv4 and IPv6 specs). This scenario
happens, for example, in the following flow sequence, when running in
Verbs mode (dv_flow_en=0).
flow create 0 ingress group 0 pattern eth / ipv4 / end <rss actions>
flow create 0 ingress group 0 pattern eth / ipv6 / end <rss actions>
The fix is to reset dev_flow->hash_fields in flow_verbs_prepare().
Fixes: e7bfa3596a ("net/mlx5: separate the flow handle resource")
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, there is no flow aging check or age-out event callback
mechanism in the mlx5 driver; this patch implements them. It includes:
- Splitting the current counter container into aged and non-aged
  containers to reduce memory consumption. The aged container allocates
  extra memory to save the aging parameters from the user configuration.
- An aging check and age-out event callback mechanism based on the
  current counters. When a flow is detected as aged-out, the
  RTE_ETH_EVENT_FLOW_AGED event is triggered to applications.
- Implementation of the new API rte_flow_get_aged_flows; applications
  can use this API to get the aged flows.
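A usage sketch of the new API (the array size and logging are
application choices):

#include <stdio.h>
#include <rte_common.h>
#include <rte_flow.h>

/* From the RTE_ETH_EVENT_FLOW_AGED callback (or a polling loop), fetch
 * the contexts of flows whose AGE timeout has expired. */
static void
drain_aged_flows(uint16_t port_id)
{
    void *contexts[64];
    struct rte_flow_error error;
    int n = rte_flow_get_aged_flows(port_id, contexts,
                                    RTE_DIM(contexts), &error);
    int i;

    for (i = 0; i < n; i++)
        printf("flow context %p aged out\n", contexts[i]);
}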
Signed-off-by: Dong Zhou <dongz@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, the counter pool needs 512 ext-counter memory slots for
non-batch counters, allocated separately in one piece behind the 512
basic-counter memory. With this layout it is not easy to get the
ext-counter pointer from the corresponding basic-counter pointer, nor
to expand to other potential additional types of counter memory.
So, allocate each ext-counter together with its basic-counter, as a
single piece of memory. The same will apply to further additional types
of counter memory. In this case, one piece of memory contains all types
of memory for one counter, and it is easy to get each type of memory by
using an offset.
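A sketch of the offset-based access (illustrative types; the PMD wraps
this in a macro):

#include <stdint.h>

/* One allocation per counter: the basic part is immediately followed by
 * the extended part, so the latter is reachable by a fixed offset. */
struct cnt_basic {
    uint64_t hits; /* ... generic counter state ... */
};

struct cnt_ext {
    uint32_t dcs_idx; /* ... non-batch extra state ... */
};

static inline struct cnt_ext *
cnt_to_ext(struct cnt_basic *cnt)
{
    return (struct cnt_ext *)((char *)cnt + sizeof(struct cnt_basic));
}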
Signed-off-by: Dong Zhou <dongz@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The number of header modify actions supported has some limitations,
decided by both the driver and the hardware. If the configuration is
different, or the table to insert the flow is different, the result may
differ for a flow containing header modify actions.
Currently, the actual action number can only be calculated in the later
translate stage, when converting the user-specified value to the driver
format, and the action number check is missing from the flow
validation. So the PMD returns an incorrect result indicating that the
flow actions are valid in rte_flow_validate, but then fails when
calling rte_flow_create.
Adding some simple checking in the validation helps to get rid of this
incorrect behavior. Most of the actions consume only 1 SW action field,
except the MAC address and IPv6 address. From the SW point of view, the
maximal number of action fields for these is consumed even if only part
of such a field is modified, because there is no mask in the flow
actions and the mask will always be all ONEs.
Metering or extra metadata support costs one more action.
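A sketch of the counting rule described above (a hypothetical helper;
the real validation also accounts for metering, metadata and the device
configuration):

#include <rte_flow.h>

/* Rough SW action field cost per modify action: MAC address edits span
 * two 32-bit fields, IPv6 address edits span four, the rest take one. */
static unsigned int
modify_action_fields(const struct rte_flow_action *action)
{
    switch (action->type) {
    case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
    case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
        return 2; /* 48-bit address, no mask, so both fields are consumed */
    case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
    case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
        return 4; /* 128-bit address, all four fields are consumed */
    default:
        return 1;
    }
}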
Fixes: 9597330c68 ("net/mlx5: update modify header action translator")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When destroying a flow with RSS, the flow can retrieve the queue
information from the hrxq index table object, since the queue number and
list are both saved in the index table object. There is no need to save
the duplicated data in the rte flow.
Saving the RSS description information to the intermediate private data
when creating a flow with an RSS action helps to save memory for the
rte flow.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, the mlx5_flow_handle struct is not fully aligned and has some
bits wasted. The members can be optimized and reorganized to save memory.
1. As metadata and meter share the same flow match id, the flow id is
now limited to 24 bits because the 8 MSBs are used for the meter color.
Pack the flow id with the other bit members into 32 bits to save mlx5
flow handle memory.
2. The vlan_vf in struct mlx5_flow_handle_dv was already moved to struct
mlx5_flow_handle. Remove the legacy vlan_vf from struct
mlx5_flow_handle_dv.
3. Reorganize the vlan_vf in mlx5_flow_handle with member SILIST_ENTRY
next, to make it align with 8 bytes.
4. Reorganize the header modify in mlx5_flow_handle_dv with member
ILIST_ENTRY next, to make it properly aligned.
5. Introduce the __rte_packed attribute to make the struct tightly
organized.
In total, this saves 20 bytes in the mlx5_flow_handle struct.
For the resource objects which were converted to indexed, align their
names with the rix_ prefix.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
As only limited bits in act_flags are used for flow destroy, it is a bit
expensive to save the whole 64 bits. Move the act_flags out of the flow
handle and save only the bits needed for flow destroy, to save some
bytes in the flow handle data struct.
The fate action type and mark bits are reserved, as they will be used in
flow destroy.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, one flow has only one fate action, so the fate action members
in the flow struct can be reorganized as a union to save memory in the
flow struct.
This commit reorganizes the fate actions as a union; the act_flags helps
to identify the fate action type when the flow is destroyed.
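A minimal sketch of the union layout (illustrative member names, not the
exact PMD fields):

#include <stdint.h>

struct flow_handle {
    uint32_t fate_action:3; /* identifies which union member is valid */
    union { /* one fate action per flow, so the members may overlap */
        uint32_t rix_hrxq;    /* index of the queue/RSS hash Rx queue */
        uint32_t rix_jump;    /* index of the jump table resource */
        uint32_t rix_port_id; /* index of the port-id action resource */
    };
};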
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts the flow dev handle to indexed.
Changing the mlx5 flow handle from a pointer to a uint32_t index saves
memory for the flow. With a million flows, it saves several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts hrxq to indexed.
Using a uint32_t index instead of a pointer saves 4 bytes of memory in
the flow handle. For millions of flows, it will save several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When creating a flow, the creation routine is usually called serially.
No parallel execution is supported right now, and the same function is
called only once for a single flow creation.
But there is a special case where the creation routine is called
nested: if the xmeta feature is enabled and there is a FLAG / MARK in
the actions list, some metadata reg copy flow needs to be created
before the original flow is applied to the hardware.
In the flow non-cached mode, resources used only for flow creation are
not saved anymore. The memory space is pre-allocated and reused for
each flow. A global index per device is used to indicate the memory
address of the resources. If the function is called in nested mode, the
index is reset and everything gets corrupted.
To solve this, a nested index is introduced to save the position of the
original flow creation. Currently, only one level of nested calls of
the flow creation routine is supported.
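A minimal sketch of the save/restore idea (illustrative names; the PMD
keeps these indices with the per-device intermediate resources):

#include <stdint.h>

static uint32_t flow_idx;        /* position in the pre-allocated memory */
static uint32_t flow_nested_idx; /* saved position of the original flow */

static void
create_nested_flow(void)
{
    flow_nested_idx = flow_idx; /* remember the outer flow's position */
    /* ... create the metadata reg copy flow, which advances flow_idx ... */
    flow_idx = flow_nested_idx; /* restore it for the original flow */
}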
Fixes: e7bfa3596a ("net/mlx5: separate the flow handle resource")
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>