numam-dpdk/config/common_linuxapp

# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2016 Intel Corporation
#include "common_base"
CONFIG_RTE_EXEC_ENV="linuxapp"
CONFIG_RTE_EXEC_ENV_LINUXAPP=y
mem: balanced allocation of hugepages (2017-06-29)

Currently EAL allocates hugepages one by one, not paying attention to
which NUMA node the allocation comes from. This behaviour leads to
allocation failures when the number of hugepages available to the
application is limited by cgroups or hugetlbfs and memory is requested
from more than just the first socket.

Example:

    # 90 x 1GB hugepages available in the system
    cgcreate -g hugetlb:/test
    # Limit to 32GB of hugepages
    cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
    # Request 4GB from each of 2 sockets
    cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...

    EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
    EAL: 32 not 90 hugepages of size 1024 MB allocated
    EAL: Not enough memory available on socket 1! Requested: 4096MB, available: 0MB
    PANIC in rte_eal_init(): Cannot init memory

This happens because all allocated pages end up on socket 0. Fix the
issue by setting the mempolicy MPOL_PREFERRED for each hugepage to one
of the requested nodes, using the following scheme:

 1) Allocate essential hugepages:
    1.1) Allocate only as many hugepages from NUMA node N as are needed
         to fit the memory requested for that node.
    1.2) Repeat 1.1 for all NUMA nodes.
 2) Try to map all remaining free hugepages in a round-robin fashion.
 3) Sort the pages and choose the most suitable ones.

In this case all essential memory will be allocated, and all remaining
pages will be fairly distributed between all requested nodes.

The new config option RTE_EAL_NUMA_AWARE_HUGEPAGES is introduced and
enabled by default for linuxapp, except on armv7 and dpaa2. Enabling
this option adds libnuma as a dependency for EAL.

Fixes: 77988fc08dc5 ("mem: fix allocating all free hugepages")

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
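The MPOL_PREFERRED idea above can be illustrated outside of EAL. The
following is a minimal sketch, not the EAL implementation: it assumes a
hypothetical hugetlbfs file path and node number, sets the preferred
mempolicy for the calling thread, maps and faults in one hugepage, then
restores the default policy. Build with -lnuma.

    /* Minimal sketch of preferring a NUMA node while faulting a hugepage. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <numaif.h>   /* set_mempolicy(), MPOL_PREFERRED */

    static void *map_hugepage_on_node(const char *path, size_t size, int node)
    {
        unsigned long nodemask = 1UL << node;   /* assumes node < 64 */

        /* Prefer allocations from 'node' while this thread faults the page. */
        if (set_mempolicy(MPOL_PREFERRED, &nodemask, sizeof(nodemask) * 8) != 0) {
            perror("set_mempolicy");
            return NULL;
        }

        int fd = open(path, O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
            perror("open");
            return NULL;
        }

        void *va = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        if (va == MAP_FAILED) {
            perror("mmap");
            return NULL;
        }

        /* Touch the page so it is actually allocated under the preferred policy. */
        memset(va, 0, size);

        /* Restore the default policy afterwards. */
        set_mempolicy(MPOL_DEFAULT, NULL, 0);
        return va;
    }

    int main(void)
    {
        /* Hypothetical hugetlbfs file; one 1GB page preferred on node 1. */
        void *p = map_hugepage_on_node("/dev/hugepages/example_map_0", 1UL << 30, 1);
        return p ? 0 : 1;
    }

Applying this per hugepage, with the preferred node rotated according to
the per-socket requests, is what allows the remaining pages to be spread
fairly instead of all landing on socket 0.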
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=y
CONFIG_RTE_EAL_IGB_UIO=y
CONFIG_RTE_EAL_VFIO=y
CONFIG_RTE_KNI_KMOD=y
CONFIG_RTE_LIBRTE_KNI=y
CONFIG_RTE_LIBRTE_PMD_KNI=y
CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_NUMA=y
CONFIG_RTE_LIBRTE_PMD_VHOST=y
net/ifcvf: add ifcvf vDPA driver (2018-04-17)

The IFCVF vDPA (vhost data path acceleration) driver provides support
for the Intel FPGA 100G VF (IFCVF). IFCVF's datapath is virtio ring
compatible; it works as a HW vhost backend which can send/receive
packets to/from virtio directly by DMA.

Different VF devices serve different virtio frontends in different VMs,
so each VF needs its own DMA address translation service. During driver
probe a new VFIO container is created; with this container the vDPA
driver can program the DMA remapping table with the VM's memory region
information.

Key vDPA driver ops implemented:

- ifcvf_dev_config: Enable the VF data path with the virtio information
  provided by the vhost lib, including IOMMU programming to enable VF
  DMA to the VM's memory, VFIO interrupt setup to route HW interrupts to
  the virtio driver, creation of a notify relay thread to translate the
  virtio driver's kick into an MMIO write onto HW, and HW queue
  configuration.

- ifcvf_dev_close: Revoke all the setup done in ifcvf_dev_config.

Live migration is supported by IFCVF and this driver enables it. For
dirty page logging, the VF helps to log packet buffer writes, and the
driver marks the used ring as dirty when the device stops.

Because the vDPA driver needs to set up MSI-X vectors to interrupt the
guest, only vfio-pci is supported currently.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Rosen Xu <rosen.xu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
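The notify relay mentioned above can be pictured with plain POSIX
primitives. This is a minimal sketch, not the ifcvf driver code: the
struct layout, queue count, and the value written to the doorbell are
illustrative assumptions, and the mmap'ed BAR register is replaced by an
ordinary variable so the example runs stand-alone. Build with -pthread.

    /* Sketch of a notify relay: eventfd kick -> MMIO-style doorbell write. */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>

    #define NR_QUEUES 2

    struct relay_ctx {
        int kickfd[NR_QUEUES];                 /* eventfds the guest driver kicks */
        volatile uint16_t *notify[NR_QUEUES];  /* stands in for per-queue notify registers */
    };

    /* Relay loop: consume each kick and ring the hardware doorbell instead. */
    static void *notify_relay(void *arg)
    {
        struct relay_ctx *ctx = arg;
        int epfd = epoll_create1(0);
        if (epfd < 0)
            return NULL;

        for (int q = 0; q < NR_QUEUES; q++) {
            struct epoll_event ev = { .events = EPOLLIN, .data.u32 = q };
            if (epoll_ctl(epfd, EPOLL_CTL_ADD, ctx->kickfd[q], &ev) < 0)
                return NULL;
        }

        for (;;) {
            struct epoll_event events[NR_QUEUES];
            int n = epoll_wait(epfd, events, NR_QUEUES, -1);
            for (int i = 0; i < n; i++) {
                int q = events[i].data.u32;
                uint64_t count;

                /* Drain the eventfd so it can signal again. */
                if (read(ctx->kickfd[q], &count, sizeof(count)) < 0)
                    continue;

                /* Translate the kick into a doorbell write (queue index assumed). */
                *ctx->notify[q] = (uint16_t)q;
            }
        }
        return NULL;
    }

    int main(void)
    {
        static uint16_t fake_doorbell[NR_QUEUES];   /* placeholder for the mmap'ed BAR */
        struct relay_ctx ctx;
        pthread_t tid;

        for (int q = 0; q < NR_QUEUES; q++) {
            ctx.kickfd[q] = eventfd(0, 0);
            ctx.notify[q] = &fake_doorbell[q];
        }
        pthread_create(&tid, NULL, notify_relay, &ctx);

        /* Simulate one guest kick on queue 0 and give the relay time to run. */
        uint64_t one = 1;
        if (write(ctx.kickfd[0], &one, sizeof(one)) != sizeof(one))
            perror("write");
        usleep(10000);
        printf("doorbell[0] = %u\n", fake_doorbell[0]);
        return 0;
    }

In the real driver the doorbell write lands on a device BAR mapped
through vfio-pci, which is one reason only vfio-pci is supported.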
CONFIG_RTE_LIBRTE_IFCVF_VDPA_PMD=y
CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
CONFIG_RTE_LIBRTE_PMD_TAP=y
CONFIG_RTE_LIBRTE_AVP_PMD=y
CONFIG_RTE_LIBRTE_VDEV_NETVSC_PMD=y
CONFIG_RTE_LIBRTE_NFP_PMD=y
CONFIG_RTE_LIBRTE_POWER=y
CONFIG_RTE_VIRTIO_USER=y
CONFIG_RTE_PROC_INFO=y
# NXP DPAA BUS and drivers
CONFIG_RTE_LIBRTE_DPAA_BUS=y
CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=y
CONFIG_RTE_LIBRTE_DPAA_PMD=y
CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV=y
CONFIG_RTE_LIBRTE_PMD_DPAA_SEC=y
# NXP FSLMC BUS and DPAA2 drivers
CONFIG_RTE_LIBRTE_FSLMC_BUS=y
CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL=y
CONFIG_RTE_LIBRTE_DPAA2_PMD=y
CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV=y
CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC=y
CONFIG_RTE_LIBRTE_PMD_DPAA2_CMDIF_RAWDEV=y
CONFIG_RTE_LIBRTE_PMD_DPAA2_QDMA_RAWDEV=y