mem: do not unmap overlapping region on mmap failure

This isn't documented in the manuals, but a failed
mmap(..., MAP_FIXED) may still unmap overlapping
regions. In such a case, we need to remap those regions
back into our address space to preserve memory
contiguity. We now do this unconditionally on mmap
failure, just to be safe.
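
The recovery pattern is easy to show in isolation. Below is a
minimal standalone sketch, not the DPDK code itself; the names
remap_or_restore and hugefile_fd are hypothetical, and error
handling is trimmed:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

static void *remap_or_restore(void *addr, size_t len, int hugefile_fd)
{
	/* Try to replace the existing mapping at addr with the
	 * hugetlbfs-backed one.
	 */
	void *va = mmap(addr, len, PROT_READ | PROT_WRITE,
			MAP_SHARED | MAP_FIXED, hugefile_fd, 0);
	if (va == MAP_FAILED) {
		/* Even though mmap() failed, the old mapping at addr
		 * may already have been torn down. Put an anonymous
		 * mapping back so the address range stays reserved.
		 */
		va = mmap(addr, len, PROT_READ,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
		if (va == MAP_FAILED)
			fprintf(stderr, "restore failed: %s\n",
					strerror(errno));
		return MAP_FAILED; /* report the original failure */
	}
	return va;
}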

Verified on Linux 4.9.0-4-amd64. I was getting
ENOMEM when trying to map a hugetlbfs file with no
space left, and the previous anonymous mapping was
still being removed.

Fixes: 582bed1e1d ("mem: support mapping hugepages at runtime")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

@@ -527,7 +527,10 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	if (va == MAP_FAILED) {
 		RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__,
 				strerror(errno));
-		goto resized;
+		/* mmap failed, but the previous region might have been
+		 * unmapped anyway. try to remap it
+		 */
+		goto unmapped;
 	}
 	if (va != addr) {
 		RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__);
@@ -588,6 +591,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 mapped:
 	munmap(addr, alloc_sz);
+unmapped:
 	flags = MAP_FIXED;
 #ifdef RTE_ARCH_PPC_64
 	flags |= MAP_HUGETLB;