mem: fix potential bad unmap on map failure
Previously, if mmap() failed to map the page at the requested
address, we were attempting to unmap the wrong address. Fix it
by unmapping the address we actually mapped, and jump further
down to avoid unmapping memory that was never allocated.
Coverity issue: 272602
Fixes: 582bed1e1d ("mem: support mapping hugepages at runtime")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
@@ -466,7 +466,8 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	}
 	if (va != addr) {
 		RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__);
-		goto mapped;
+		munmap(va, alloc_sz);
+		goto resized;
 	}
 
 	rte_iova_t iova = rte_mem_virt2iova(addr);
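
Below is a minimal standalone sketch (not DPDK code) of the pattern this fix enforces: mmap() called with an address hint but without MAP_FIXED may succeed at a different address, and on mismatch the mapping to release is the one mmap() actually returned (va), never the requested hint (addr), which this call never mapped. The map_at_hint() helper, the hint value and the size are illustrative assumptions.

#include <stdio.h>
#include <sys/mman.h>

/* Try to map alloc_sz bytes at a hinted address; return NULL on failure
 * or if the kernel placed the mapping somewhere else. Illustrative only. */
static void *map_at_hint(void *addr, size_t alloc_sz)
{
	void *va = mmap(addr, alloc_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED)
		return NULL;

	if (va != addr) {
		/* Wrong address: unmap what we actually mapped (va), and do
		 * not touch addr, which was never mapped by this call. */
		munmap(va, alloc_sz);
		return NULL;
	}
	return va;
}

int main(void)
{
	size_t sz = 1 << 21;                    /* 2 MB, hugepage-sized stand-in */
	void *hint = (void *)0x700000000000ULL; /* arbitrary hint address */

	void *va = map_at_hint(hint, sz);
	if (va == NULL) {
		printf("could not map at the requested address\n");
		return 1;
	}
	printf("mapped %zu bytes at %p\n", sz, va);
	munmap(va, sz);
	return 0;
}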