mem: fix undefined behavior in NUMA-aware mapping

When the NUMA-aware hugepages config option is set, we rely on
libnuma to tell the kernel to allocate hugepages on a specific
NUMA node. However, we allocate the node mask before checking
whether NUMA is available in the first place, which, according to
the manpage [1], causes undefined behaviour.

Fix by only using nodemask when we have NUMA available.

[1] https://linux.die.net/man/3/numa_alloc_onnode

Bugzilla ID: 20
Fixes: 1b72605d2416 ("mem: balanced allocation of hugepages")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Ilya Maximets <i.maximets@samsung.com>
Author: Anatoly Burakov
Date:   2018-09-21 10:27:22 +01:00 (committed by Thomas Monjalon)
Parent: 64cdfc35aa
Commit: b1621823ea

@@ -264,7 +264,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
 	int node_id = -1;
 	int essential_prev = 0;
 	int oldpolicy;
-	struct bitmask *oldmask = numa_allocate_nodemask();
+	struct bitmask *oldmask = NULL;
 	bool have_numa = true;
 	unsigned long maxnode = 0;
@@ -276,6 +276,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
 	if (have_numa) {
 		RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n");
+		oldmask = numa_allocate_nodemask();
 		if (get_mempolicy(&oldpolicy, oldmask->maskp,
 				oldmask->size + 1, 0, 0) < 0) {
 			RTE_LOG(ERR, EAL,
@@ -403,7 +404,8 @@ out:
 			numa_set_localalloc();
 		}
 	}
-	numa_free_cpumask(oldmask);
+	if (oldmask != NULL)
+		numa_free_cpumask(oldmask);
 #endif
 	return i;
 }