# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2017 Intel Corporation

#
# define the execution environment
# RTE_EXEC_ENV values are the directories in mk/exec-env/
#
CONFIG_RTE_EXEC_ENV=
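# Illustrative values only (normally filled in by the per-target defconfig
# files; the exact quoting style is an assumption), e.g.:
# CONFIG_RTE_EXEC_ENV="linuxapp"
# CONFIG_RTE_EXEC_ENV="bsdapp"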
#
# define the architecture we compile for.
# RTE_ARCH values are the directories in mk/arch/
#
CONFIG_RTE_ARCH=
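# Illustrative values only (set by the defconfig files), e.g.:
# CONFIG_RTE_ARCH="x86_64"
# CONFIG_RTE_ARCH="arm64"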
#
# machine can define specific variables or actions for a specific board
# RTE_MACHINE values are the directories in mk/machine/
#
CONFIG_RTE_MACHINE=
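# Illustrative values only (set by the defconfig files), e.g.:
# CONFIG_RTE_MACHINE="native"
# CONFIG_RTE_MACHINE="armv8a"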
#
# The compiler we use.
# RTE_TOOLCHAIN values are the directories in mk/toolchain/
#
CONFIG_RTE_TOOLCHAIN=
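# Illustrative values only (set by the defconfig files), e.g.:
# CONFIG_RTE_TOOLCHAIN="gcc"
# CONFIG_RTE_TOOLCHAIN="clang"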
#
# Use intrinsics or assembly code for key routines
#
CONFIG_RTE_FORCE_INTRINSICS=n

#
# Machine forces strict alignment constraints.
#
CONFIG_RTE_ARCH_STRICT_ALIGN=n

#
# Compile as a shared library
#
CONFIG_RTE_BUILD_SHARED_LIB=n

#
# Use the newest code, breaking the previous ABI
#
CONFIG_RTE_NEXT_ABI=y

#
# Major ABI to override the library-specific LIBABIVER
#
CONFIG_RTE_MAJOR_ABI=
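# Illustrative override (the version string and its format are assumptions):
# CONFIG_RTE_MAJOR_ABI=18.11
# With this set, every library is versioned with the same .so suffix instead
# of its own LIBABIVER.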
#
# Machine's cache line size
#
CONFIG_RTE_CACHE_LINE_SIZE=64

#
# Compile Environment Abstraction Layer
#
CONFIG_RTE_LIBRTE_EAL=y
CONFIG_RTE_MAX_LCORE=128
CONFIG_RTE_MAX_NUMA_NODES=8
CONFIG_RTE_MAX_HEAPS=32
CONFIG_RTE_MAX_MEMSEG_LISTS=64
# each memseg list will be limited to either RTE_MAX_MEMSEG_PER_LIST pages
# or RTE_MAX_MEM_MB_PER_LIST megabytes worth of memory, whichever is smaller
CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192
CONFIG_RTE_MAX_MEM_MB_PER_LIST=32768
# a "type" is a combination of page size and NUMA node. The total number of
# memseg lists per type will be limited to either RTE_MAX_MEMSEG_PER_TYPE pages
# (split over multiple lists of RTE_MAX_MEMSEG_PER_LIST pages each), or
# RTE_MAX_MEM_MB_PER_TYPE megabytes of memory (split over multiple lists of
# RTE_MAX_MEM_MB_PER_LIST megabytes each), whichever is smaller
CONFIG_RTE_MAX_MEMSEG_PER_TYPE=32768
CONFIG_RTE_MAX_MEM_MB_PER_TYPE=131072
# global maximum usable amount of VA, in megabytes
CONFIG_RTE_MAX_MEM_MB=524288
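# Worked example (illustrative, using the default values above): with 2M
# hugepages one list holds min(8192 * 2MB, 32768MB) = 16GB and one type holds
# min(32768 * 2MB, 131072MB) = 64GB; with 1G hugepages one list holds
# min(8192 * 1GB, 32GB) = 32GB and one type min(32768 * 1GB, 128GB) = 128GB.
# RTE_MAX_MEM_MB then caps the total VA reserved across all types at 512GB.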
CONFIG_RTE_MAX_MEMZONE=2560
CONFIG_RTE_MAX_TAILQ=32
CONFIG_RTE_ENABLE_ASSERT=n
CONFIG_RTE_LOG_DP_LEVEL=RTE_LOG_INFO
CONFIG_RTE_LOG_HISTORY=256
CONFIG_RTE_BACKTRACE=y
CONFIG_RTE_LIBEAL_USE_HPET=n
CONFIG_RTE_EAL_ALLOW_INV_SOCKET_ID=n
CONFIG_RTE_EAL_ALWAYS_PANIC_ON_ERROR=n
CONFIG_RTE_EAL_IGB_UIO=n
CONFIG_RTE_EAL_VFIO=n
CONFIG_RTE_MAX_VFIO_GROUPS=64
CONFIG_RTE_MAX_VFIO_CONTAINERS=64
CONFIG_RTE_MALLOC_DEBUG=n
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
CONFIG_RTE_USE_LIBBSD=n

#
# Recognize/ignore the AVX/AVX512 CPU flags for performance/power testing.
# AVX512 is marked as experimental for now; it will be enabled after enough
# field testing and possible optimization.
#
CONFIG_RTE_ENABLE_AVX=y
CONFIG_RTE_ENABLE_AVX512=n

# Default driver path (or "" to disable)
CONFIG_RTE_EAL_PMD_PATH=""
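# Illustrative example (the path below is an assumption, not a DPDK default):
# CONFIG_RTE_EAL_PMD_PATH="/usr/local/lib/dpdk-pmds"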
#
# Compile Environment Abstraction Layer to support Vmware TSC map
#
CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=y

#
# Compile the PCI library
#
CONFIG_RTE_LIBRTE_PCI=y

#
# Compile the argument parser library
#
CONFIG_RTE_LIBRTE_KVARGS=y

#
# Compile the generic Ethernet library
#
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
CONFIG_RTE_ETHDEV_PROFILE_WITH_VTUNE=n

#
# Turn off the Tx preparation stage
#
# Warning: rte_eth_tx_prepare() can be safely disabled only if using
# drivers which do not implement any Tx preparation.
#
CONFIG_RTE_ETHDEV_TX_PREPARE_NOOP=n

#
# Common libraries, before Bus/PMDs
#
CONFIG_RTE_LIBRTE_COMMON_DPAAX=n

#
# Compile the Intel FPGA bus
#
CONFIG_RTE_LIBRTE_IFPGA_BUS=y

#
# Compile the PCI bus driver
#
CONFIG_RTE_LIBRTE_PCI_BUS=y

#
# Compile the vdev bus
#
CONFIG_RTE_LIBRTE_VDEV_BUS=y

#
# Compile ARK PMD
#
CONFIG_RTE_LIBRTE_ARK_PMD=y
CONFIG_RTE_LIBRTE_ARK_PAD_TX=y
CONFIG_RTE_LIBRTE_ARK_DEBUG_RX=n
CONFIG_RTE_LIBRTE_ARK_DEBUG_TX=n
CONFIG_RTE_LIBRTE_ARK_DEBUG_STATS=n
CONFIG_RTE_LIBRTE_ARK_DEBUG_TRACE=n

#
# Compile AMD AXGBE PMD
#
CONFIG_RTE_LIBRTE_AXGBE_PMD=y
CONFIG_RTE_LIBRTE_AXGBE_PMD_DEBUG=n

#
# Compile burst-oriented Broadcom BNX2X PMD driver
#
CONFIG_RTE_LIBRTE_BNX2X_PMD=n
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_RX=n
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_TX=n
CONFIG_RTE_LIBRTE_BNX2X_MF_SUPPORT=n
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n

#
# Compile burst-oriented Broadcom BNXT PMD driver
#
CONFIG_RTE_LIBRTE_BNXT_PMD=y

#
# Compile burst-oriented Chelsio Terminator (CXGBE) PMD
#
CONFIG_RTE_LIBRTE_CXGBE_PMD=y
CONFIG_RTE_LIBRTE_CXGBE_DEBUG=n
CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG=n
CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX=n
CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX=n
CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX=n
CONFIG_RTE_LIBRTE_CXGBE_TPUT=y
#
# NXP DPAA Bus
#
CONFIG_RTE_LIBRTE_DPAA_BUS=n
CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
CONFIG_RTE_LIBRTE_DPAA_PMD=n
CONFIG_RTE_LIBRTE_DPAA_HWDEBUG=n

#
# Compile NXP DPAA2 FSL-MC Bus
#
CONFIG_RTE_LIBRTE_FSLMC_BUS=n

#
# Compile Support Libraries for NXP DPAA2
#
CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL=n
CONFIG_RTE_LIBRTE_DPAA2_USE_PHYS_IOVA=y

#
# Compile burst-oriented NXP DPAA2 PMD driver
#
CONFIG_RTE_LIBRTE_DPAA2_PMD=n
CONFIG_RTE_LIBRTE_DPAA2_DEBUG_DRIVER=n

#
# Compile NXP ENETC PMD driver
#
CONFIG_RTE_LIBRTE_ENETC_PMD=n

#
# Compile burst-oriented Amazon ENA PMD driver
#
CONFIG_RTE_LIBRTE_ENA_PMD=y
CONFIG_RTE_LIBRTE_ENA_DEBUG_RX=n
CONFIG_RTE_LIBRTE_ENA_DEBUG_TX=n
CONFIG_RTE_LIBRTE_ENA_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_ENA_COM_DEBUG=n

#
# Compile burst-oriented Cisco ENIC PMD driver
#
CONFIG_RTE_LIBRTE_ENIC_PMD=y

#
# Compile burst-oriented IGB & EM PMD drivers
#
CONFIG_RTE_LIBRTE_EM_PMD=y
CONFIG_RTE_LIBRTE_IGB_PMD=y
CONFIG_RTE_LIBRTE_E1000_DEBUG_RX=n
CONFIG_RTE_LIBRTE_E1000_DEBUG_TX=n
CONFIG_RTE_LIBRTE_E1000_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC=n

#
# Compile burst-oriented IXGBE PMD driver
#
CONFIG_RTE_LIBRTE_IXGBE_PMD=y
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_RX=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_LIBRTE_IXGBE_BYPASS=n

#
# Compile burst-oriented I40E PMD driver
#
CONFIG_RTE_LIBRTE_I40E_PMD=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4

#
# Compile burst-oriented FM10K PMD
#
CONFIG_RTE_LIBRTE_FM10K_PMD=y
CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n
CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n
CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y

#
# Compile burst-oriented AVF PMD driver
#
CONFIG_RTE_LIBRTE_AVF_PMD=y
CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n

#
# Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
#
CONFIG_RTE_LIBRTE_MLX4_PMD=n
CONFIG_RTE_LIBRTE_MLX4_DEBUG=n
CONFIG_RTE_LIBRTE_MLX4_DLOPEN_DEPS=n

#
# Compile burst-oriented Mellanox ConnectX-4, ConnectX-5 & BlueField
# (MLX5) PMD
#
CONFIG_RTE_LIBRTE_MLX5_PMD=n
CONFIG_RTE_LIBRTE_MLX5_DEBUG=n
CONFIG_RTE_LIBRTE_MLX5_DLOPEN_DEPS=n

#
# Compile burst-oriented Netronome NFP PMD driver
#
CONFIG_RTE_LIBRTE_NFP_PMD=n
CONFIG_RTE_LIBRTE_NFP_DEBUG_TX=n
CONFIG_RTE_LIBRTE_NFP_DEBUG_RX=n

#
# Compile QLogic 10G/25G/40G/50G/100G (QEDE) PMD
#
CONFIG_RTE_LIBRTE_QEDE_PMD=y
CONFIG_RTE_LIBRTE_QEDE_DEBUG_TX=n
CONFIG_RTE_LIBRTE_QEDE_DEBUG_RX=n
# Provides the absolute path/name of the firmware file.
# An empty string means the driver will use the default firmware.
CONFIG_RTE_LIBRTE_QEDE_FW=""
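# Illustrative override (the path and file name pattern below are assumptions,
# not a shipped default):
# CONFIG_RTE_LIBRTE_QEDE_FW="/lib/firmware/qed/qed_init_values-<version>.bin"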
#
# Compile burst-oriented Solarflare libefx-based PMD
#
CONFIG_RTE_LIBRTE_SFC_EFX_PMD=y
CONFIG_RTE_LIBRTE_SFC_EFX_DEBUG=n

#
# Compile software PMD backed by SZEDATA2 device
#
CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n

#
# Compile burst-oriented Cavium ThunderX NICVF PMD driver
#
CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n

#
# Compile burst-oriented Cavium LiquidIO PMD driver
#
CONFIG_RTE_LIBRTE_LIO_PMD=y
CONFIG_RTE_LIBRTE_LIO_DEBUG_RX=n
CONFIG_RTE_LIBRTE_LIO_DEBUG_TX=n
CONFIG_RTE_LIBRTE_LIO_DEBUG_MBOX=n
CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n

#
# Compile burst-oriented Cavium OCTEONTX network PMD driver
#
CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y

#
# Compile WRS accelerated virtual port (AVP) guest PMD driver
#
CONFIG_RTE_LIBRTE_AVP_PMD=n
CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n

#
# Compile burst-oriented VIRTIO PMD driver
#
CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_RX=n
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX=n
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DUMP=n

#
# Compile virtio device emulation inside the virtio PMD driver
#
CONFIG_RTE_VIRTIO_USER=n

#
# Compile burst-oriented VMXNET3 PMD driver
#
CONFIG_RTE_LIBRTE_VMXNET3_PMD=y
CONFIG_RTE_LIBRTE_VMXNET3_DEBUG_RX=n
CONFIG_RTE_LIBRTE_VMXNET3_DEBUG_TX=n
CONFIG_RTE_LIBRTE_VMXNET3_DEBUG_TX_FREE=n

#
# Compile software PMD backed by AF_PACKET sockets (Linux only)
#
CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
#
# Compile link bonding PMD library
#
CONFIG_RTE_LIBRTE_PMD_BOND=y
CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB=n
CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB_L1=n

#
# Compile fail-safe PMD
#
CONFIG_RTE_LIBRTE_PMD_FAILSAFE=y

#
# Compile Marvell MVPP2 PMD driver
#
CONFIG_RTE_LIBRTE_MVPP2_PMD=n

#
# Compile Marvell MVNETA PMD driver
#
CONFIG_RTE_LIBRTE_MVNETA_PMD=n

#
# Compile support for the VMBus library
#
CONFIG_RTE_LIBRTE_VMBUS=n

#
# Compile native PMD for Hyper-V/Azure
#
CONFIG_RTE_LIBRTE_NETVSC_PMD=n
CONFIG_RTE_LIBRTE_NETVSC_DEBUG_RX=n
CONFIG_RTE_LIBRTE_NETVSC_DEBUG_TX=n
CONFIG_RTE_LIBRTE_NETVSC_DEBUG_DUMP=n

#
# Compile virtual device driver for NetVSC on Hyper-V/Azure
#
CONFIG_RTE_LIBRTE_VDEV_NETVSC_PMD=n

#
# Compile null PMD
#
CONFIG_RTE_LIBRTE_PMD_NULL=y

#
# Compile software PMD backed by PCAP files
#
CONFIG_RTE_LIBRTE_PMD_PCAP=n

#
# Compile the example software rings-based PMD
#
CONFIG_RTE_LIBRTE_PMD_RING=y
CONFIG_RTE_PMD_RING_MAX_RX_RINGS=16
CONFIG_RTE_PMD_RING_MAX_TX_RINGS=16

#
# Compile SOFTNIC PMD
#
CONFIG_RTE_LIBRTE_PMD_SOFTNIC=n

#
# Compile the TAP PMD
# It is enabled by default for Linux only.
#
CONFIG_RTE_LIBRTE_PMD_TAP=n

#
# Prefetch packet data within the PMD receive function
#
CONFIG_RTE_PMD_PACKET_PREFETCH=y
#
# Compile generic wireless base band device library
# EXPERIMENTAL: API may change without prior notice
#
CONFIG_RTE_LIBRTE_BBDEV=y
CONFIG_RTE_BBDEV_MAX_DEVS=128
CONFIG_RTE_BBDEV_OFFLOAD_COST=n

#
# Compile PMD for NULL bbdev device
#
CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL=y

#
# Compile PMD for turbo software bbdev device
#
CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=n

#
# Compile generic crypto device library
#
CONFIG_RTE_LIBRTE_CRYPTODEV=y
CONFIG_RTE_CRYPTO_MAX_DEVS=64

#
# Compile PMD for ARMv8 Crypto device
#
CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO=n
CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO_DEBUG=n

#
# Compile the NXP CAAM JR crypto driver
#
CONFIG_RTE_LIBRTE_PMD_CAAM_JR=n
CONFIG_RTE_LIBRTE_PMD_CAAM_JR_BE=n

#
# Compile NXP DPAA2 crypto sec driver for CAAM HW
#
CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=n

#
# NXP DPAA CAAM crypto driver
#
CONFIG_RTE_LIBRTE_PMD_DPAA_SEC=n
CONFIG_RTE_LIBRTE_DPAA_MAX_CRYPTODEV=4

#
# Compile PMD for Cavium OCTEON TX crypto device
#
CONFIG_RTE_LIBRTE_PMD_OCTEONTX_CRYPTO=y

#
# Compile PMD for QuickAssist based devices - see docs for details
#
CONFIG_RTE_LIBRTE_PMD_QAT=y
CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
#
# Maximum number of QuickAssist devices that can be detected and attached
#
CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16

#
# Compile PMD for virtio crypto devices
#
CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO=y
#
# Maximum number of virtio crypto devices
#
CONFIG_RTE_MAX_VIRTIO_CRYPTO=32

#
# Compile PMD for AESNI-backed device
#
CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n

#
# Compile PMD for software-backed (OpenSSL) device
#
CONFIG_RTE_LIBRTE_PMD_OPENSSL=n

#
# Compile PMD for AESNI GCM device
#
CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n

#
# Compile PMD for SNOW 3G device
#
CONFIG_RTE_LIBRTE_PMD_SNOW3G=n
CONFIG_RTE_LIBRTE_PMD_SNOW3G_DEBUG=n

#
# Compile PMD for KASUMI device
#
CONFIG_RTE_LIBRTE_PMD_KASUMI=n

#
# Compile PMD for ZUC device
#
CONFIG_RTE_LIBRTE_PMD_ZUC=n
#
# Compile PMD for Crypto Scheduler device
#
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y

#
# Compile PMD for NULL Crypto device
#
CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y

#
# Compile PMD for AMD CCP crypto device
#
CONFIG_RTE_LIBRTE_PMD_CCP=n

#
# Compile PMD for Marvell Crypto device
#
CONFIG_RTE_LIBRTE_PMD_MVSAM_CRYPTO=n

#
# Compile generic security library
#
CONFIG_RTE_LIBRTE_SECURITY=y

#
# Compile generic compression device library
#
CONFIG_RTE_LIBRTE_COMPRESSDEV=y
CONFIG_RTE_COMPRESS_MAX_DEVS=64

#
# Compile compressdev unit test
#
CONFIG_RTE_COMPRESSDEV_TEST=n

#
# Compile PMD for Octeontx ZIPVF compression device
#
CONFIG_RTE_LIBRTE_PMD_OCTEONTX_ZIPVF=y

#
# Compile PMD for ISA-L compression device
#
CONFIG_RTE_LIBRTE_PMD_ISAL=n

#
# Compile PMD for ZLIB compression device
#
CONFIG_RTE_LIBRTE_PMD_ZLIB=n

#
# Compile generic event device library
#
CONFIG_RTE_LIBRTE_EVENTDEV=y
CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
CONFIG_RTE_EVENT_MAX_DEVS=16
CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
CONFIG_RTE_EVENT_TIMER_ADAPTER_NUM_MAX=32
CONFIG_RTE_EVENT_ETH_INTR_RING_SIZE=1024
CONFIG_RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE=32
CONFIG_RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE=32

#
# Compile PMD for skeleton event device
#
CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV=y
CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG=n

#
# Compile PMD for software event device
#
CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV=y

#
# Compile PMD for distributed software event device
#
CONFIG_RTE_LIBRTE_PMD_DSW_EVENTDEV=y

#
# Compile PMD for octeontx sso event device
#
CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y

#
# Compile PMD for OPDL event device
#
CONFIG_RTE_LIBRTE_PMD_OPDL_EVENTDEV=y

#
# Compile PMD for NXP DPAA event device
#
CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV=n

#
# Compile PMD for NXP DPAA2 event device
#
CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV=n
#
# Compile raw device support
# EXPERIMENTAL: API may change without prior notice
#
CONFIG_RTE_LIBRTE_RAWDEV=y
CONFIG_RTE_RAWDEV_MAX_DEVS=10
CONFIG_RTE_LIBRTE_PMD_SKELETON_RAWDEV=y

#
# Compile PMD for NXP DPAA2 CMDIF raw device
#
CONFIG_RTE_LIBRTE_PMD_DPAA2_CMDIF_RAWDEV=n

#
# Compile PMD for NXP DPAA2 QDMA raw device
#
CONFIG_RTE_LIBRTE_PMD_DPAA2_QDMA_RAWDEV=n

#
# Compile PMD for Intel FPGA raw device
#
CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV=y

#
# Compile librte_ring
#
CONFIG_RTE_LIBRTE_RING=y
CONFIG_RTE_RING_USE_C11_MEM_MODEL=n

#
# Compile librte_mempool
#
CONFIG_RTE_LIBRTE_MEMPOOL=y
CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n

#
# Compile Mempool drivers
#
CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
CONFIG_RTE_DRIVER_MEMPOOL_RING=y
CONFIG_RTE_DRIVER_MEMPOOL_STACK=y

#
# Compile PMD for octeontx fpa mempool device
#
CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL=y

#
# Compile librte_mbuf
#
CONFIG_RTE_LIBRTE_MBUF=y
CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
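# Illustrative alternative (assumes the matching mempool driver above stays
# enabled; ops names other than the default are given as examples only):
# CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="stack"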
CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
CONFIG_RTE_PKTMBUF_HEADROOM=128

#
# Compile librte_timer
#
CONFIG_RTE_LIBRTE_TIMER=y
CONFIG_RTE_LIBRTE_TIMER_DEBUG=n

#
# Compile librte_cfgfile
#
CONFIG_RTE_LIBRTE_CFGFILE=y

#
# Compile librte_cmdline
#
CONFIG_RTE_LIBRTE_CMDLINE=y
CONFIG_RTE_LIBRTE_CMDLINE_DEBUG=n

#
# Compile librte_hash
#
CONFIG_RTE_LIBRTE_HASH=y
CONFIG_RTE_LIBRTE_HASH_DEBUG=n

#
# Compile librte_efd
#
CONFIG_RTE_LIBRTE_EFD=y

#
# Compile librte_member
#
CONFIG_RTE_LIBRTE_MEMBER=y

#
# Compile librte_jobstats
#
CONFIG_RTE_LIBRTE_JOBSTATS=y

#
# Compile the device metrics library
#
CONFIG_RTE_LIBRTE_METRICS=y

#
# Compile the bitrate statistics library
#
CONFIG_RTE_LIBRTE_BITRATE=y

#
# Compile the latency statistics library
#
CONFIG_RTE_LIBRTE_LATENCY_STATS=y

#
# Compile librte_lpm
#
CONFIG_RTE_LIBRTE_LPM=y
CONFIG_RTE_LIBRTE_LPM_DEBUG=n

#
# Compile librte_acl
#
CONFIG_RTE_LIBRTE_ACL=y
CONFIG_RTE_LIBRTE_ACL_DEBUG=n

#
# Compile librte_power
#
CONFIG_RTE_LIBRTE_POWER=n
CONFIG_RTE_LIBRTE_POWER_DEBUG=n
CONFIG_RTE_MAX_LCORE_FREQS=64

#
# Compile librte_net
#
CONFIG_RTE_LIBRTE_NET=y

#
# Compile librte_ip_frag
#
CONFIG_RTE_LIBRTE_IP_FRAG=y
CONFIG_RTE_LIBRTE_IP_FRAG_DEBUG=n
CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG=4
CONFIG_RTE_LIBRTE_IP_FRAG_TBL_STAT=n
#
# Compile GRO library
#
CONFIG_RTE_LIBRTE_GRO=y

#
# Compile GSO library
#
CONFIG_RTE_LIBRTE_GSO=y

#
# Compile librte_meter
#
CONFIG_RTE_LIBRTE_METER=y

#
# Compile librte_flow_classify
#
CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y

#
# Compile librte_sched
#
CONFIG_RTE_LIBRTE_SCHED=y
CONFIG_RTE_SCHED_DEBUG=n
CONFIG_RTE_SCHED_RED=n
CONFIG_RTE_SCHED_COLLECT_STATS=n
CONFIG_RTE_SCHED_SUBPORT_TC_OV=n
CONFIG_RTE_SCHED_PORT_N_GRINDERS=8
CONFIG_RTE_SCHED_VECTOR=n

#
# Compile the distributor library
#
CONFIG_RTE_LIBRTE_DISTRIBUTOR=y

#
# Compile the reorder library
#
CONFIG_RTE_LIBRTE_REORDER=y

#
# Compile librte_port
#
CONFIG_RTE_LIBRTE_PORT=y
CONFIG_RTE_PORT_STATS_COLLECT=n
CONFIG_RTE_PORT_PCAP=n

#
# Compile librte_table
#
CONFIG_RTE_LIBRTE_TABLE=y
CONFIG_RTE_TABLE_STATS_COLLECT=n

#
# Compile librte_pipeline
#
CONFIG_RTE_LIBRTE_PIPELINE=y
CONFIG_RTE_PIPELINE_STATS_COLLECT=n

#
# Compile librte_kni
#
CONFIG_RTE_LIBRTE_KNI=n
CONFIG_RTE_LIBRTE_PMD_KNI=n
CONFIG_RTE_KNI_KMOD=n
CONFIG_RTE_KNI_KMOD_ETHTOOL=n
CONFIG_RTE_KNI_PREEMPT_DEFAULT=y

#
# Compile the pdump library
#
CONFIG_RTE_LIBRTE_PDUMP=y

#
# Compile the vhost user library
#
CONFIG_RTE_LIBRTE_VHOST=n
CONFIG_RTE_LIBRTE_VHOST_NUMA=n
CONFIG_RTE_LIBRTE_VHOST_DEBUG=n

#
# Compile vhost PMD
# To compile, CONFIG_RTE_LIBRTE_VHOST should be enabled.
#
CONFIG_RTE_LIBRTE_PMD_VHOST=n
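# Illustrative example only: building this PMD requires flipping both switches,
# typically in a local defconfig override, e.g.:
# CONFIG_RTE_LIBRTE_VHOST=y
# CONFIG_RTE_LIBRTE_PMD_VHOST=y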
#
# Compile IFC driver
# To compile, CONFIG_RTE_LIBRTE_VHOST and CONFIG_RTE_EAL_VFIO
# should be enabled.
#
CONFIG_RTE_LIBRTE_IFC_PMD=n
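# Illustrative example only: per the note above, enabling this driver also
# requires, e.g.:
# CONFIG_RTE_LIBRTE_VHOST=y
# CONFIG_RTE_EAL_VFIO=y
# CONFIG_RTE_LIBRTE_IFC_PMD=y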
#
# Compile librte_bpf
#
CONFIG_RTE_LIBRTE_BPF=y
# allow loading BPF programs from ELF files (requires libelf)
CONFIG_RTE_LIBRTE_BPF_ELF=n

#
# Compile the test application
#
CONFIG_RTE_APP_TEST=y
CONFIG_RTE_APP_TEST_RESOURCE_TAR=n

#
# Compile the procinfo application
#
CONFIG_RTE_PROC_INFO=n

#
# Compile the PMD test application
#
CONFIG_RTE_TEST_PMD=y
CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=n
CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=n

#
# Compile the bbdev test application
#
CONFIG_RTE_TEST_BBDEV=y

#
# Compile the crypto performance application
#
CONFIG_RTE_APP_CRYPTO_PERF=y

#
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y