env_dpdk: tell DPDK to not free dynamically allocated memory

This keeps us from having to deal with ALLOC and FREE events
for mismatched regions, which previously necessitated splitting
new regions into individual pages.  That splitting caused all
kinds of problems with NVMe-oF - for example, buffers that
spanned memory regions, or bumping up against MR limits on
RDMA NICs.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I18dcdae148436b55d4481bb9fb8799f4832c7de1

Reviewed-on: https://review.gerrithub.io/434895
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
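
For context, memory_hotplug_cb() only runs because it is registered with
DPDK's dynamic memory event API. Below is a minimal sketch of that wiring,
not SPDK's actual init code: it assumes the three-argument form of
rte_mem_event_callback_register() from DPDK's (then experimental)
rte_mem_event API, and the names hotplug_cb_sketch,
register_hotplug_cb_sketch, and the "sketch" callback name are purely
illustrative.

#include <rte_memory.h>   /* rte_mem_event_* API (experimental in DPDK 18.x) */
#include "spdk/env.h"     /* spdk_mem_register() / spdk_mem_unregister() */

/* Simplified stand-in for memory_hotplug_cb: register newly allocated
 * DPDK memory with SPDK, and unregister it when DPDK frees it. */
static void
hotplug_cb_sketch(enum rte_mem_event event_type,
		  const void *addr, size_t len, void *arg)
{
	(void)arg;
	if (event_type == RTE_MEM_EVENT_ALLOC) {
		spdk_mem_register((void *)addr, len);
	} else if (event_type == RTE_MEM_EVENT_FREE) {
		spdk_mem_unregister((void *)addr, len);
	}
}

static int
register_hotplug_cb_sketch(void)
{
	/* DPDK invokes the callback for every dynamic ALLOC/FREE event. */
	return rte_mem_event_callback_register("sketch", hotplug_cb_sketch, NULL);
}

With the DO_NOT_FREE marking added in the hunk below, the FREE branch should
effectively never fire for marked segments, since DPDK keeps them mapped.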
Jim Harris 2018-11-17 05:49:21 -07:00
parent 7a39a68c4f
commit 9cec99b84b

@@ -653,14 +653,18 @@ memory_hotplug_cb(enum rte_mem_event event_type,
 		  const void *addr, size_t len, void *arg)
 {
 	if (event_type == RTE_MEM_EVENT_ALLOC) {
+		spdk_mem_register((void *)addr, len);
+
+		/* Now mark each segment so that DPDK won't later free it.
+		 * This ensures we don't have to deal with the memory
+		 * getting freed in different units than it was allocated.
+		 */
 		while (len > 0) {
 			struct rte_memseg *seg;
 
 			seg = rte_mem_virt2memseg(addr, NULL);
 			assert(seg != NULL);
-			assert(len >= seg->hugepage_sz);
-
-			spdk_mem_register((void *)seg->addr, seg->hugepage_sz);
+			seg->flags |= RTE_MEMSEG_FLAG_DO_NOT_FREE;
 			addr = (void *)((uintptr_t)addr + seg->hugepage_sz);
 			len -= seg->hugepage_sz;
 		}
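
For readability, here is a sketch of how the ALLOC branch reads once the
hunk above is applied. This is a reconstruction from the hunk; the enclosing
function signature and the RTE_MEM_EVENT_FREE handling sit outside this hunk
and are assumed unchanged here.

	if (event_type == RTE_MEM_EVENT_ALLOC) {
		/* Register the whole region at once - no more per-page splitting. */
		spdk_mem_register((void *)addr, len);

		/* Now mark each segment so that DPDK won't later free it.
		 * This ensures we don't have to deal with the memory
		 * getting freed in different units than it was allocated.
		 */
		while (len > 0) {
			struct rte_memseg *seg;

			seg = rte_mem_virt2memseg(addr, NULL);
			assert(seg != NULL);
			/* Tell DPDK to keep this hugepage mapped rather than
			 * return it to the system, so the registration above
			 * stays valid for the segment's full lifetime. */
			seg->flags |= RTE_MEMSEG_FLAG_DO_NOT_FREE;
			addr = (void *)((uintptr_t)addr + seg->hugepage_sz);
			len -= seg->hugepage_sz;
		}
	}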