Revert r240317 to prevent leaking pmap entries

Subsequent to r240317, kmem_free() was replaced with kva_free() (r254025).
kva_free() releases the KVA allocation for the mapped region, but
unlike kmem_free() it does not clear the pmap (page table) entries.
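
For illustration, a minimal sketch of the corrected shape of an
unmapdev path; the helper name example_unmapdev is hypothetical, and
the actual per-architecture changes are in the hunks below:

#include <sys/param.h>
#include <sys/systm.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_extern.h>

/*
 * Sketch only, not the committed code.  kva_free() merely returns the
 * range to the kernel VA arena, so the page table entries have to be
 * torn down explicitly with pmap_qremove() first.  pmap_initialized is
 * the arch's flag as used in the hunks below.
 */
static void
example_unmapdev(vm_offset_t va, vm_size_t size)
{
	vm_offset_t offset;

	offset = va & PAGE_MASK;
	size = round_page(offset + size);
	va = trunc_page(va);
	if (pmap_initialized) {
		pmap_qremove(va, atop(size));	/* clear the PTEs */
		kva_free(va, size);		/* then release the KVA range */
	}
}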

An affected pmap_unmapdev operation would leave the still-mapped VA
range free for allocation by other KVA consumers.  However, this bug
easily escaped notice for ~7 years because most devices (1) never call
pmap_unmapdev and (2) on amd64, mostly fit within the DMAP and do not
need KVA allocations.  The other affected architectures are less
popular: i386, MIPS, and PowerPC.  Arm64, arm32, and riscv are not
affected.
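
To illustrate how the stale mappings could surface, a hedged sketch of
a driver-style sequence on an affected architecture before this change;
example_leak, bar_pa, and bar_size are hypothetical placeholders (same
kernel headers as the sketch above):

/*
 * Hypothetical, pre-fix sequence: a device region that does not fit
 * the amd64 DMAP (or any such region on i386/MIPS/PowerPC) gets a real
 * KVA allocation from pmap_mapdev().
 */
static void
example_leak(vm_paddr_t bar_pa, vm_size_t bar_size)
{
	void *regs;
	vm_offset_t kva;

	regs = pmap_mapdev(bar_pa, bar_size);	/* KVA allocated, PTEs installed */

	/* ... device use and teardown ... */

	pmap_unmapdev((vm_offset_t)regs, bar_size);	/* pre-fix: only kva_free() */

	/*
	 * The range is back in the KVA arena, but its PTEs still point at
	 * the device; a later, unrelated kva_alloc() may hand out the same
	 * range with those stale mappings still installed.
	 */
	kva = kva_alloc(bar_size);
	(void)kva;
}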

Reported by:	Don Morris <dgmorris AT earthlink.net>
Submitted by:	Don Morris (amd64 part)
Reviewed by:	kib, markj, Don (!amd64 parts)
MFC after:	I don't intend to, but you might want to
Sponsored by:	Dell Isilon
Differential Revision:	https://reviews.freebsd.org/D25689
cem 2020-07-16 23:29:26 +00:00
parent 208e252f33
commit 1f5c69828c
7 changed files with 13 additions and 3 deletions

@@ -8279,8 +8279,10 @@ pmap_unmapdev(vm_offset_t va, vm_size_t size)
 			return;
 		}
 	}
-	if (pmap_initialized)
+	if (pmap_initialized) {
+		pmap_qremove(va, atop(size));
 		kva_free(va, size);
+	}
 }
 
 /*

@@ -5538,8 +5538,10 @@ __CONCAT(PMTYPE, unmapdev)(vm_offset_t va, vm_size_t size)
 			return;
 		}
 	}
-	if (pmap_initialized)
+	if (pmap_initialized) {
+		pmap_qremove(va, atop(size));
 		kva_free(va, size);
+	}
 }
 
 /*

@@ -3264,6 +3264,7 @@ pmap_unmapdev(vm_offset_t va, vm_size_t size)
 	base = trunc_page(va);
 	offset = va & PAGE_MASK;
 	size = roundup(size + offset, PAGE_SIZE);
+	pmap_qremove(base, atop(size));
 	kva_free(base, size);
 #endif
 }

@@ -2673,6 +2673,7 @@ moea_unmapdev(vm_offset_t va, vm_size_t size)
 		base = trunc_page(va);
 		offset = va & PAGE_MASK;
 		size = roundup(offset + size, PAGE_SIZE);
+		moea_qremove(base, atop(size));
 		kva_free(base, size);
 	}
 }

@@ -2869,6 +2869,7 @@ moea64_unmapdev(vm_offset_t va, vm_size_t size)
 	offset = va & PAGE_MASK;
 	size = roundup2(offset + size, PAGE_SIZE);
 
+	moea64_qremove(base, atop(size));
 	kva_free(base, size);
 }
 

@@ -5846,8 +5846,10 @@ mmu_radix_unmapdev(vm_offset_t va, vm_size_t size)
 	size = round_page(offset + size);
 	va = trunc_page(va);
 
-	if (pmap_initialized)
+	if (pmap_initialized) {
+		mmu_radix_qremove(va, atop(size));
 		kva_free(va, size);
+	}
 }
 
 static __inline void

@@ -2322,6 +2322,7 @@ mmu_booke_unmapdev(vm_offset_t va, vm_size_t size)
 		base = trunc_page(va);
 		offset = va & PAGE_MASK;
 		size = roundup(offset + size, PAGE_SIZE);
+		mmu_booke_qremove(base, atop(size));
 		kva_free(base, size);
 	}
 #endif