/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2017 Intel Corporation
 */

/**
 * @file Header file containing DPDK compilation parameters
 *
 * Header file containing DPDK compilation parameters. It also includes the
 * meson-generated header file containing the detected parameters that
 * are variable across builds or build environments.
 */

#ifndef _RTE_CONFIG_H_
#define _RTE_CONFIG_H_

#include <rte_build_config.h>

/* legacy defines */
#ifdef RTE_EXEC_ENV_LINUX
#define RTE_EXEC_ENV_LINUXAPP 1
#endif
#ifdef RTE_EXEC_ENV_FREEBSD
#define RTE_EXEC_ENV_BSDAPP 1
#endif

/* String that appears before the version number */
#define RTE_VER_PREFIX "DPDK"

/****** library defines ********/

/* EAL defines */
#define RTE_MAX_HEAPS 32
#define RTE_MAX_MEMSEG_LISTS 128
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
#define RTE_MAX_MEMSEG_PER_TYPE 32768
#define RTE_MAX_MEM_MB_PER_TYPE 65536
#define RTE_MAX_MEMZONE 2560
#define RTE_MAX_TAILQ 32
#define RTE_LOG_DP_LEVEL RTE_LOG_INFO
#define RTE_BACKTRACE 1
#define RTE_MAX_VFIO_CONTAINERS 64

/* bsd module defines */
#define RTE_CONTIGMEM_MAX_NUM_BUFS 64
#define RTE_CONTIGMEM_DEFAULT_NUM_BUFS 1
#define RTE_CONTIGMEM_DEFAULT_BUF_SIZE (512*1024*1024)

/* mempool defines */
#define RTE_MEMPOOL_CACHE_MAX_SIZE 512

/* mbuf defines */
#define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"
#define RTE_MBUF_REFCNT_ATOMIC 1
#define RTE_PKTMBUF_HEADROOM 128

/* ether defines */
#define RTE_MAX_QUEUES_PER_PORT 1024
#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16
#define RTE_ETHDEV_RXTX_CALLBACKS 1

/* cryptodev defines */
#define RTE_CRYPTO_MAX_DEVS 64
#define RTE_CRYPTODEV_NAME_LEN 64

/* compressdev defines */
#define RTE_COMPRESS_MAX_DEVS 64

/* regexdev defines */
#define RTE_MAX_REGEXDEV_DEVS 32

/* eventdev defines */
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_QUEUES_PER_DEV 64
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32

/* rawdev defines */
#define RTE_RAWDEV_MAX_DEVS 64

/* ip_fragmentation defines */
#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
#undef RTE_LIBRTE_IP_FRAG_TBL_STAT

/* rte_power defines */
#define RTE_MAX_LCORE_FREQS 64

/* rte_sched defines */
#undef RTE_SCHED_RED
#undef RTE_SCHED_COLLECT_STATS
#undef RTE_SCHED_SUBPORT_TC_OV
#define RTE_SCHED_PORT_N_GRINDERS 8
#undef RTE_SCHED_VECTOR

/* KNI defines */
#define RTE_KNI_PREEMPT_DEFAULT 1

/* rte_graph defines */
#define RTE_GRAPH_BURST_SIZE 256
#define RTE_LIBRTE_GRAPH_STATS 1

/****** driver defines ********/
/* QuickAssist device */
/* Max. number of QuickAssist devices which can be attached */
#define RTE_PMD_QAT_MAX_PCI_DEVICES 48
#define RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS 16
#define RTE_PMD_QAT_COMP_IM_BUFFER_SIZE 65536

/* virtio crypto defines */
#define RTE_MAX_VIRTIO_CRYPTO 32

/* DPAA SEC max cryptodev devices */
#define RTE_LIBRTE_DPAA_MAX_CRYPTODEV 4

/* fm10k defines */
#define RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE 1

/* hns3 defines */
#define RTE_LIBRTE_HNS3_MAX_TQP_NUM_PER_PF 256

/* i40e defines */
#define RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC 1
#undef RTE_LIBRTE_I40E_16BYTE_RX_DESC
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF 64
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4

/* Ring net PMD settings */
#define RTE_PMD_RING_MAX_RX_RINGS 16
#define RTE_PMD_RING_MAX_TX_RINGS 16

/* QEDE PMD defines */
#define RTE_LIBRTE_QEDE_FW ""

#endif /* _RTE_CONFIG_H_ */