numam-dpdk/config/rte_config.h

/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2017 Intel Corporation
*/
/**
 * @file Header file containing DPDK compilation parameters
 *
 * Header file containing DPDK compilation parameters. It also includes the
 * meson-generated header file containing the detected parameters that vary
 * across builds or build environments.
 *
 * NOTE: This file is only used for meson+ninja builds. For builds done
 * using make/gmake, the rte_config.h file is autogenerated from the
 * defconfig_* files in the config directory.
 */
#ifndef _RTE_CONFIG_H_
#define _RTE_CONFIG_H_
#include <rte_build_config.h>
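/*
 * Illustrative example (hypothetical application snippet, not required by
 * DPDK itself): including this header, directly or via other DPDK headers
 * that pull it in, makes both the generated build parameters and the fixed
 * parameters defined below visible at compile time, so they can be checked
 * with the preprocessor:
 *
 *   #include <rte_config.h>
 *
 *   #if RTE_MAX_MEMSEG_LISTS < 64
 *   #error "this application assumes at least 64 memseg lists"
 *   #endif
 */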
/****** library defines ********/
/* EAL defines */
#define RTE_MAX_MEMSEG_LISTS 128
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
#define RTE_MAX_MEMSEG_PER_TYPE 32768
#define RTE_MAX_MEM_MB_PER_TYPE 65536
#define RTE_MAX_MEM_MB 524288
#define RTE_MAX_MEMZONE 2560
#define RTE_MAX_TAILQ 32
#define RTE_LOG_DP_LEVEL RTE_LOG_INFO
#define RTE_BACKTRACE 1
#define RTE_EAL_VFIO 1
#define RTE_MAX_VFIO_CONTAINERS 64
/* bsd module defines */
#define RTE_CONTIGMEM_MAX_NUM_BUFS 64
#define RTE_CONTIGMEM_DEFAULT_NUM_BUFS 1
#define RTE_CONTIGMEM_DEFAULT_BUF_SIZE (512*1024*1024)
/* mempool defines */
#define RTE_MEMPOOL_CACHE_MAX_SIZE 512
/* mbuf defines */
#define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"
#define RTE_MBUF_REFCNT_ATOMIC 1
#define RTE_PKTMBUF_HEADROOM 128
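/*
 * Sketch (assumes the <rte_mbuf.h> API): RTE_PKTMBUF_HEADROOM bytes are
 * reserved in front of the packet data of every mbuf, and the default mbuf
 * buffer size used with rte_pktmbuf_pool_create() already accounts for it
 * (RTE_MBUF_DEFAULT_BUF_SIZE = RTE_MBUF_DEFAULT_DATAROOM +
 * RTE_PKTMBUF_HEADROOM). A pool whose per-lcore cache stays within
 * RTE_MEMPOOL_CACHE_MAX_SIZE could, for example, be created as:
 *
 *   struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool",
 *           8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
 */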
/* ether defines */
#define RTE_MAX_ETHPORTS 32
#define RTE_MAX_QUEUES_PER_PORT 1024
#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16
#define RTE_ETHDEV_RXTX_CALLBACKS 1
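/*
 * Sketch (hypothetical application data structure): because RTE_MAX_ETHPORTS
 * and RTE_MAX_QUEUES_PER_PORT are compile-time constants, per-port/per-queue
 * state is often kept in statically sized tables, e.g.
 *
 *   static uint64_t rx_pkts[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
 *
 * with the port indices actually used bounded at run time, for instance by
 * rte_eth_dev_count_avail().
 */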
/* cryptodev defines */
#define RTE_CRYPTO_MAX_DEVS 64
#define RTE_CRYPTODEV_NAME_LEN 64
/* eventdev defines */
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_QUEUES_PER_DEV 64
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
/* rawdev defines */
#define RTE_RAWDEV_MAX_DEVS 10
/* ip_fragmentation defines */
#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
#undef RTE_LIBRTE_IP_FRAG_TBL_STAT
/* rte_power defines */
#define RTE_MAX_LCORE_FREQS 64
/* rte_sched defines */
#undef RTE_SCHED_RED
#undef RTE_SCHED_COLLECT_STATS
#undef RTE_SCHED_SUBPORT_TC_OV
#define RTE_SCHED_PORT_N_GRINDERS 8
#undef RTE_SCHED_VECTOR
/****** driver defines ********/
/*
* Number of sessions to create in the session memory pool
* on a single instance of crypto HW device.
*/
/* QuickAssist device */
#define RTE_QAT_PMD_MAX_NB_SESSIONS 2048
/* virtio crypto defines */
#define RTE_VIRTIO_CRYPTO_PMD_MAX_NB_SESSIONS 1024
#define RTE_MAX_VIRTIO_CRYPTO 32
/* DPAA2_SEC */
#define RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS 2048
/* DPAA_SEC */
#define RTE_DPAA_SEC_PMD_MAX_NB_SESSIONS 2048
/* DPAA SEC max cryptodev devices */
#define RTE_LIBRTE_DPAA_MAX_CRYPTODEV 4
/* fm10k defines */
#define RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE 1
/* i40e defines */
#define RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC 1
#undef RTE_LIBRTE_I40E_16BYTE_RX_DESC
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF 64
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
/* ITR interval: up to 8160 us, aligned to 2 us; -1 selects the default */
#define RTE_LIBRTE_I40E_ITR_INTERVAL -1
/* Ring net PMD settings */
#define RTE_PMD_RING_MAX_RX_RINGS 16
#define RTE_PMD_RING_MAX_TX_RINGS 16
#endif /* _RTE_CONFIG_H_ */