Commit Graph

17212 Commits

Jerin Jacob
e3eb65cab3 table: fix arm64 hash function selection
Use the CRC32 instruction only when it is available, to avoid
build errors like the one below.

{standard input}:16: Error:
selected processor does not support `crc32cx w3,w3,x0'

Fixes: ea7be0a038 ("lib/librte_table: add hash function headers")
Cc: stable@dpdk.org

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2019-04-10 22:04:37 +02:00
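
A minimal sketch of the guard described above, assuming the ACLE
__ARM_FEATURE_CRC32 feature macro; the actual patch may key off DPDK's own
CPU-flag macros instead.

/* Illustrative only: use a CRC32C-based hash word mixer when the Arm CRC
 * extension is available at build time, otherwise a portable fallback so
 * the build does not emit unsupported instructions. */
#include <stdint.h>

#if defined(__ARM_FEATURE_CRC32)
#include <arm_acle.h>

static inline uint32_t
hash_mix_word(uint32_t seed, uint32_t value)
{
    return __crc32cw(seed, value); /* hardware CRC32C */
}
#else
static inline uint32_t
hash_mix_word(uint32_t seed, uint32_t value)
{
    /* FNV-1a style mix used when CRC32 instructions are unavailable. */
    uint32_t h = seed ^ 0x811c9dc5u;

    return (h ^ value) * 0x01000193u;
}
#endif
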
Thomas Monjalon
bdcfcceb7a version: 19.05-rc1
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2019-04-05 21:58:09 +02:00
Pavan Nikhilesh
32941b5d52 app/testpmd: allocate txonly packets per bulk
Use mempool bulk get ops to allocate a burst of packets and process them.
If the bulk get fails, fall back to rte_mbuf_raw_alloc.

Tested-by: Yingya Han <yingyax.han@intel.com>
Suggested-by: Andrew Rybchenko <arybchenko@solarflare.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 19:28:22 +02:00
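
A minimal sketch of the bulk-allocate-with-fallback pattern described above;
the function name and burst size are illustrative, not the actual testpmd code.

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define BURST 32

static uint16_t
alloc_pkt_burst(struct rte_mempool *mp, struct rte_mbuf *pkts[BURST])
{
    uint16_t i;

    /* Try to grab the whole burst with a single mempool operation;
     * rte_pktmbuf_alloc_bulk() also resets the mbufs. */
    if (rte_pktmbuf_alloc_bulk(mp, pkts, BURST) == 0)
        return BURST;

    /* Fall back to per-packet raw allocation (packet fields must then be
     * initialized by the caller) and send a shorter burst on failure. */
    for (i = 0; i < BURST; i++) {
        pkts[i] = rte_mbuf_raw_alloc(mp);
        if (pkts[i] == NULL)
            break;
    }
    return i;
}
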
Pavan Nikhilesh
01b645dcff app/testpmd: move txonly prepare in separate function
Move the packet prepare logic into a separate function so that it
can be reused later.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 19:28:22 +02:00
Pavan Nikhilesh
561ddcf8d0 app/testpmd: allocate txonly segments per bulk
Use bulk ops for allocating segments instead of an inner loop that
allocates one segment at a time.
This reduces the number of calls to the mempool layer.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 19:28:22 +02:00
Pavan Nikhilesh
e54ac3b1d9 app/testpmd: move header generation outside txonly loop
Testpmd txonly copies the src/dst MAC addresses of the port being
processed into an Ethernet header structure on the stack for every packet.
Move this outside the loop and reuse the prepared header.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 19:28:22 +02:00
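
A minimal sketch of the hoisting described above, assuming the 19.05-era
Ethernet header names (later renamed with an rte_ prefix); not the actual
testpmd code.

#include <string.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

static void
fill_eth_headers(struct rte_mbuf **pkts, uint16_t nb_pkts,
                 const struct ether_addr *src, const struct ether_addr *dst)
{
    struct ether_hdr eth; /* prepared once, outside the per-packet loop */
    uint16_t i;

    ether_addr_copy(dst, &eth.d_addr);
    ether_addr_copy(src, &eth.s_addr);
    eth.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);

    for (i = 0; i < nb_pkts; i++)
        memcpy(rte_pktmbuf_mtod(pkts[i], void *), &eth, sizeof(eth)); /* reuse */
}
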
Andrew Rybchenko
06b186a01d net/sfc: improve Rx free threshold default
Refilling Rx in one bulk (which is just 8 descriptors) by default is too
aggressive and makes too many MMIO writes (Rx doorbells) when the packet
rate is high. Setting the default to 1/8 of the number of Rx descriptors
shows good performance results. In any case it is only a default value,
which may be overridden by the Rx configuration provided by the
application.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
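
A minimal sketch, not part of the patch, of how an application overrides this
default by providing its own Rx configuration; queue and threshold values are
illustrative.

#include <rte_ethdev.h>

static int
setup_rx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
               struct rte_mempool *mp)
{
    struct rte_eth_dev_info info;
    struct rte_eth_rxconf rxconf;

    rte_eth_dev_info_get(port_id, &info);
    rxconf = info.default_rxconf;
    rxconf.rx_free_thresh = nb_desc / 8; /* refill after 1/8 of the ring */

    return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                  rte_eth_dev_socket_id(port_id),
                                  &rxconf, mp);
}
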
Tom Barbette
eafdc86f34 ethdev: document mask requirements for RETA
Clarify that mask bits must be set in rte_eth_reta_query.

Signed-off-by: Tom Barbette <barbette@kth.se>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 17:45:22 +02:00
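
A minimal sketch of a query honouring that requirement, assuming a reta_size
that is a multiple of RTE_RETA_GROUP_SIZE; illustrative only.

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_reta(uint16_t port_id, uint16_t reta_size)
{
    struct rte_eth_rss_reta_entry64 conf[reta_size / RTE_RETA_GROUP_SIZE];
    uint16_t i;

    memset(conf, 0, sizeof(conf));
    for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
        conf[i].mask = UINT64_MAX; /* ask for every entry in this group */

    return rte_eth_dev_rss_reta_query(port_id, conf, reta_size);
}
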
Nithin Dabilpuram
6b1ad5181c app/testpmd: fix Tx QinQ set
Enable DEV_TX_OFFLOAD_VLAN_INSERT along with
DEV_TX_OFFLOAD_VLAN_QINQ in tx_qinq_set(), since it takes
both VLAN IDs as arguments.

Fixes: 597f9fafe1 ("app/testpmd: convert to new Tx offloads API")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 17:45:22 +02:00
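
A minimal sketch of requesting both insert offloads from an application's
point of view, assuming the 19.05-era ethdev offload macros (the QinQ flag is
spelled DEV_TX_OFFLOAD_QINQ_INSERT in the headers); not the testpmd code itself.

#include <rte_ethdev.h>

static void
request_qinq_insertion(struct rte_eth_conf *conf)
{
    /* Inserting both an outer and an inner tag needs both Tx offloads. */
    conf->txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT |
                             DEV_TX_OFFLOAD_QINQ_INSERT;
}
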
Nithin Dabilpuram
368ba98aea app/testpmd: fix Tx VLAN and QinQ dependency
Enabling Tx VLAN & QinQ insertion need not depend on
the Rx VLAN offload ETH_VLAN_EXTEND_OFFLOAD. When enabling Tx VLAN
insertion, the error check now verifies whether QinQ was enabled
while only a single VLAN ID is set.

Fixes: 6a34f91690 ("app/testpmd: fix error message when setting Tx VLAN")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 17:45:22 +02:00
Qiming Yang
1aec68d134 app/testpmd: add VXLAN-GPE
This patch adds a new item "vxlan-gpe" to tunnel_type to
support the new VXLAN-GPE packet type and its classification.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 17:45:22 +02:00
Qiming Yang
414e31cd6f net/i40e: support VXLAN-GPE classification
Add a VXLAN-GPE tunnel filter with support for steering matched packets to a queue.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2019-04-05 17:45:22 +02:00
Qiming Yang
9d17021648 net/i40e: support VXLAN-GPE
Add support for the new VXLAN-GPE protocol type for UDP tunnels.
Inner IP/TCP/UDP checksum and RSS configuration share
the same implementation as VXLAN.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2019-04-05 17:45:22 +02:00
Qiming Yang
b59eadcd22 ethdev: add VXLAN-GPE tunnel type
This patch adds a VXLAN-GPE macro to rte_eth_tunnel_type.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 17:45:22 +02:00
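
A minimal sketch of an application registering the new tunnel type on a port;
4790 is the IANA default VXLAN-GPE UDP port, used here only as an example.

#include <rte_ethdev.h>

static int
add_vxlan_gpe_udp_port(uint16_t port_id)
{
    struct rte_eth_udp_tunnel tunnel = {
        .udp_port = 4790,                       /* example port number */
        .prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE, /* macro added above */
    };

    return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}
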
David Harton
f9409d6c28 net/ixgbevf: remove MTU setting limitation
Currently, if the requested MTU is bigger than the mbuf size and scattered
receive is not enabled, setting the MTU to that value fails.

This patch allows setting such an MTU while the device is stopped,
because scattered_rx will be re-configured during the next port start
and the driver may enable scattered receive according to the new MTU value.

After this patch, the driver may automatically select a different receive
function after the MTU is set, according to the MTU value chosen.

Signed-off-by: David Harton <dharton@cisco.com>
Reviewed-by: Wei Zhao <wei.zhao1@intel.com>
2019-04-05 17:45:22 +02:00
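
A minimal sketch of the flow this change enables: set a larger MTU while the
port is stopped, then restart so the driver can reconfigure scattered Rx and
pick a matching receive function. Error handling is simplified for illustration.

#include <rte_ethdev.h>

static int
resize_mtu(uint16_t port_id, uint16_t new_mtu)
{
    int ret;

    rte_eth_dev_stop(port_id);          /* MTU change allowed while stopped */
    ret = rte_eth_dev_set_mtu(port_id, new_mtu);
    if (ret != 0)
        return ret;
    return rte_eth_dev_start(port_id);  /* Rx path re-selected on start */
}
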
Qi Zhang
9481b0902e net/ice: send driver version to firmware
The driver must send its version information to the firmware, so
the firmware knows the driver is up. Otherwise, it causes unexpected
OS package downloading when multiple driver instances are running on
the same device.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2019-04-05 17:45:22 +02:00
Wei Zhao
646ca3a15e net/ixgbe: enable 10Mb/s link setup for X553
This patch enables 10Mb/s link setup for ixgbe X553.
This new device has its own device IDs, 0x15E4 and 0x15E5, so
the ixgbe PMD needs a special check when setting up the link for
these two device types.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2019-04-05 17:45:22 +02:00
Viacheslav Ovsiienko
79e35d0d59 net/mlx5: share Direct Rules/Verbs flow related structures
Direct Rules/Verbs related structures are moved to
the shared context:
  - rx/tx namespaces, shared by master and representors
  - rx/tx flow tables
  - matchers
  - encap/decap action resources
  - flow tags (MARK actions)
  - modify action resources
  - jump tables

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Viacheslav Ovsiienko
b2177648b8 net/mlx5: add Direct Rules flow data alloc/free routines
We are going to share the Direct Rules and Direct Verbs flow
device data structures between the master and representors in
E-Switch configurations over a multiport IB device.

The code for initializing and destroying these data structures is
moved to dedicated routines; this is just a preparation
step for the actual data sharing.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Jasvinder Singh
5024da517d net/softnic: fix unchecked return value
Fix unchecked return value issue reported by Coverity.

Coverity issue: 336852
Fixes: a958a5c07f ("net/softnic: support service cores")
Cc: stable@dpdk.org

Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Rami Rosen <ramirose@gmail.com>
2019-04-05 17:45:22 +02:00
Bill Hong
9ea1b3ccfc net: fix Tx VLAN flag for offload emulation
A PMD might use rte_vlan_insert to implement Tx VLAN offload. Typically
the PMD will insert the VLAN header in the transmit path and then
attempt to send the packets. If this fails, the packets are returned to
the application which may attempt to send these packets again. If the
PKT_TX_VLAN flag is not cleared, the transmit path may attempt to insert
the VLAN header again.

Fixes: 47aa48b969 ("net: fix stripped VLAN flag for offload emulation")
Cc: stable@dpdk.org

Signed-off-by: Bill Hong <bhong@brocade.com>
Signed-off-by: Chas Williams <chas3@att.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-04-05 17:45:22 +02:00
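
A minimal sketch of the software-emulation path described above; the fix
ensures the flag is cleared once the header has been inserted so a retried
transmit does not insert it twice, which the sketch shows explicitly for
clarity (flag names follow the 19.05-era mbuf API).

#include <rte_ether.h>
#include <rte_mbuf.h>

static int
sw_insert_vlan(struct rte_mbuf **m)
{
    int ret = rte_vlan_insert(m); /* copies vlan_tci into the frame data */

    if (ret == 0)
        (*m)->ol_flags &= ~PKT_TX_VLAN; /* avoid double insertion on retry */
    return ret;
}
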
Xiaolong Ye
f1debd77ef net/af_xdp: introduce AF_XDP PMD
Add a new PMD driver for AF_XDP, a proposed faster version of the
AF_PACKET interface in Linux. For more information about AF_XDP, please
refer to [1] [2].

This is the vanilla version of the PMD, which just uses a raw buffer
registered as the umem.

[1] https://fosdem.org/2018/schedule/event/af_xdp/
[2] https://lwn.net/Articles/745934/

Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
2019-04-05 17:45:22 +02:00
Ori Kam
684b9a1b1f net/mlx5: support jump action
When using Direct Rules we can add actions to jump between tables.
This is especially useful since the rule insertion rate is much higher on
other tables compared to table zero.

If no group is selected, the rule is added to group 0.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
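
A minimal sketch of a flow rule using the jump action described above: a rule
in group 0 that sends all ingress traffic to group 1. The catch-all pattern is
purely illustrative.

#include <rte_flow.h>

static struct rte_flow *
jump_all_to_group1(uint16_t port_id, struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_jump jump = { .group = 1 }; /* target table */
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}
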
Ori Kam
4f84a19779 net/mlx5: add Direct Rules API
Add calls to the Direct Rules API inside the glue functions.
Due to differences in parameters between Direct Rules and Direct
Verbs, some of the glue function APIs were updated.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Ori Kam
cbb66daa3c net/mlx5: prepare Direct Verbs for Direct Rule
This is the first patch of a series designed to enable the
Direct Rules API.

The main difference between Direct Verbs and Direct Rules from an API
perspective is that in Direct Rules each action has its own create
function and the object itself is of type void.

In this patch I'm adding functions to generate actions that are currently
done without a create action, and I'm changing the action type to
void *, so in the next patches only the glue functions will need to change.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Ivan Malov
c1ce2ba218 net/sfc: support tunnel TSO on EF10 native Tx datapath
Handle VXLAN and GENEVE TSO on EF10 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Ivan Malov
9906cb2959 net/sfc: improve log about missing HW TSO support
This message cannot be considered a warning, since
the PMD reports the available offload capabilities anyway
by means of the device info interface. Make the log
message informational and improve its formatting
by placing the text itself on the same line.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Ivan Malov
41ef1ad50a net/sfc: factor out function to get IPv4 packet ID for TSO
As a result, code duplication is avoided in the current
TSO implementations (EFX and EF10 native). A future patch adding
support for tunnel TSO will also reuse the new function.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
9b70500cf2 net/sfc: add TSO header length check to Tx prepare
Make the Tx prepare function able to detect packets with an invalid
header size when header linearization is required.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
f7a66f9365 net/sfc: introduce descriptor space check in Tx prepare
Add a descriptor space check to the Tx prepare function to inform the
caller that a packet needing more than the maximum number of Tx
descriptors of a queue cannot be sent.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
a3895ef38c net/sfc: move TSO header checks from Tx burst to Tx prepare
Tx offload checks should be done in Tx prepare.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
8c27fa78f1 net/sfc: support Tx preparation in EF10 simple datapath
Implement the tx_prepare callback. The implementation performs checks
only in RTE debug mode. No checks are done otherwise because the EF10
simple datapath ignores Tx offloads.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
67330d32b8 net/sfc: support Tx preparation in EF10 datapath
Implement tx_prepare callback and update Tx burst function accordingly.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
07685524c9 net/sfc: support Tx preparation in EFX datapath
Implement generic checks in Tx prepare function and update Tx burst
function accordingly.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
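
A minimal sketch of how an application consumes the Tx prepare stage
implemented by this series: run the offload and descriptor checks first, then
burst out only the packets that passed. Names are illustrative.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
checked_tx(uint16_t port_id, uint16_t queue_id,
           struct rte_mbuf **pkts, uint16_t nb_pkts)
{
    /* Packets beyond nb_ok failed a check (e.g. too many descriptors or
     * an invalid TSO header) and are not handed to the hardware. */
    uint16_t nb_ok = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);

    return rte_eth_tx_burst(port_id, queue_id, pkts, nb_ok);
}
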
Igor Romanov
11c3712fc9 net/sfc: make TSO descriptor numbers EF10-specific
The numbers of extra descriptors required for TSO are in fact
EF10-specific. Highlight this in the define names.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
3985802ecf net/sfc: improve TSO header length check in EF10 datapath
Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header is not
fragmented slightly faster.

Fixes: 6bc985e411 ("net/sfc: support TSO in EF10 Tx datapath")
Cc: stable@dpdk.org

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Igor Romanov
ac30699e9c net/sfc: improve TSO header length check in EFX datapath
Move the check inside the xmit function to the branch in which
the check is mandatory. This makes the case when the TSO header is not
fragmented slightly faster.

Fixes: fec33d5bb3 ("net/sfc: support firmware-assisted TSO")
Cc: stable@dpdk.org

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-04-05 17:45:22 +02:00
Ori Kam
f54aeb3ec0 net/mlx5: fix flow counters using devx
The API that was defined in OFED 4.5 was replaced both in OFED 4.6 and
in upstream.

This commit updates the API to match the upstream one.

Fixes: f5bf91de73 ("net/mlx5: support flow counters using devx")
Cc: stable@dpdk.org

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Thomas Monjalon
13302cd5bd app/testpmd: use port sibling iterator in device cleanup
When removing an rte_device on a port-based request,
all the sibling ports must be marked as closed.
The iterator loop can be simplified by using the dedicated macro.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-04-05 17:45:22 +02:00
Thomas Monjalon
d874a4eed5 net/mlx5: use port sibling iterators
Iterating over siblings was done with RTE_ETH_FOREACH_DEV()
which skips the owned ports.
The new iterators RTE_ETH_FOREACH_DEV_SIBLING()
and RTE_ETH_FOREACH_DEV_OF() are more appropriate and more correct.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
2019-04-05 17:45:22 +02:00
Thomas Monjalon
7f98942886 ethdev: add siblings iterators
If multiple ports share the same hardware device (rte_device),
they are siblings and can be found thanks to the new functions
and loop macros.
One iterator takes a port id as reference,
while the other one directly refers to the parent device.

The ownership is not checked because siblings may have
different owners.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-04-05 17:45:22 +02:00
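
A minimal sketch of the new iterator in use, closing every port backed by the
same rte_device as a reference port; the assumption here is that the reference
port itself is visited as well.

#include <rte_ethdev.h>

static void
close_sibling_ports(uint16_t ref_port)
{
    uint16_t sibling;

    RTE_ETH_FOREACH_DEV_SIBLING(sibling, ref_port)
        rte_eth_dev_close(sibling); /* each port sharing the rte_device */
}
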
Thomas Monjalon
a52aa40b41 ethdev: simplify port state comparisons
There are three states for an ethdev port.
Checking that the port is unused looks simpler than
checking it is neither attached nor removed.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
0b259b8e96 net/mlx4: enable secondary process to register DMA memory
The Memory Region (MR) for DMA memory can't be created from a secondary
process due to a lib/driver limitation. Whenever it is needed, the secondary
process can make a request to the primary process through the EAL IPC
channel (rte_mp_msg), which is established on initialization. Once an MR
is created by the primary process, it is immediately visible to the secondary
process because the MR list is global per device. Thus, the secondary
process can look up the list after the request has successfully returned.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
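
A minimal sketch of the secondary-to-primary request path described above,
using the generic EAL IPC API; the message name and payload layout are
hypothetical, not the driver's actual protocol.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <rte_eal.h>
#include <rte_string_fns.h>

static int
request_mr_create(uintptr_t addr, size_t len)
{
    struct rte_mp_msg req;
    struct rte_mp_reply reply;
    struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
    int ret;

    memset(&req, 0, sizeof(req));
    rte_strlcpy(req.name, "example_mr_create", sizeof(req.name)); /* hypothetical */
    memcpy(req.param, &addr, sizeof(addr));
    memcpy(req.param + sizeof(addr), &len, sizeof(len));
    req.len_param = sizeof(addr) + sizeof(len);

    ret = rte_mp_request_sync(&req, &reply, &ts); /* waits for the primary */
    if (ret == 0)
        free(reply.msgs); /* reply messages are allocated by EAL */
    return ret;
}
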
Yongseok Koh
f4efc0eb97 net/mlx4: add control of excessive memory pinning by kernel
A new PMD parameter (mr_ext_memseg_en) is added to control extension of
the memseg when creating an MR. It is enabled by default.

If enabled, mlx4_mr_create() tries to maximize the range of MR
registration so that the LKey lookup tables on the datapath become smaller
and give the best performance. However, it may worsen memory utilization
because registered memory is pinned by the kernel driver. Even if a page in
the extended chunk is freed, it does not become reusable until the
entire memory is freed and the MR is destroyed.

To make freed pages available immediately, this parameter has to be
turned off, but that could degrade performance.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
c18cf501a7 net/mlx5: enable secondary process to register DMA memory
The Memory Region (MR) for DMA memory can't be created from a secondary
process due to a lib/driver limitation. Whenever it is needed, the secondary
process can make a request to the primary process through the EAL IPC
channel (rte_mp_msg), which is established on initialization. Once an MR
is created by the primary process, it is immediately visible to the secondary
process because the MR list is global per device. Thus, the secondary
process can look up the list after the request has successfully returned.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
dceb502942 net/mlx5: add control of excessive memory pinning by kernel
A new PMD parameter (mr_ext_memseg_en) is added to control extension of
the memseg when creating an MR. It is enabled by default.

If enabled, mlx5_mr_create() tries to maximize the range of MR
registration so that the LKey lookup tables on the datapath become smaller
and give the best performance. However, it may worsen memory utilization
because registered memory is pinned by the kernel driver. Even if a page in
the extended chunk is freed, it does not become reusable until the
entire memory is freed and the MR is destroyed.

To make freed pages available immediately, this parameter has to be
turned off, but that could degrade performance.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
207fe7ac72 net/mlx5: fix external memory registration
A secondary process is not allowed to register an MR due to a restriction
of the library and kernel driver.

Fixes: 7e43a32ee0 ("net/mlx5: support externally allocated static memory")
Cc: stable@dpdk.org

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
3d1f3c7c83 net/mlx: remove debug messages on datapath
Cc: stable@dpdk.org

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
0203d33a10 net/mlx4: support secondary process
In order to support secondary processes, a few features are required.

a) The rdma-core library should allocate device resources using DPDK's
   memory allocator.

b) UAR should be remapped for secondary processes. Currently, in order
   not to use a different data structure for secondary processes, the PMD
   tries to reserve identical virtual address space for both primary
   and secondary processes.

c) An IPC channel is necessary, which can be easily set up with the rte_mp
   APIs. Through the channel, the Verbs command FD is delivered to the
   secondary process, and the device stop/start event is also broadcast
   from the primary process.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00
Yongseok Koh
8e49376400 net/mlx4: add external allocator for Verbs object
To support secondary processes, the memory allocated by the library, such
as completion rings (CQ) and buffer rings (WQ), must be manageable by the
EAL in order to share it with secondary processes. With new changes in
rdma-core and the kernel driver, it is possible to provide an external
allocator to the library layer for this purpose. All such resources
will now be allocated within the DPDK framework.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2019-04-05 17:45:22 +02:00