ffdeef3234
Instead of using a hash table to convert physical page addresses to offsets in the sparse page array, cache the number of bits set for each 4MB chunk of physical pages. Upon lookup, find the nearest cached population count, then add/subtract the number of bits set between that point and the page's PTE bit. Multiply by the page size and add the sparse page map's base offset. This replaces an O(n) worst-case lookup with O(1) (plus a small number of bitmap bits to scan).

For a 128GB system, a typical kernel core of about 8GB now requires only ~4.5MB of RAM for this approach, instead of ~48MB with the hash table. More concretely, running /usr/sbin/crashinfo against the same core improves from a max RSS of 188MB and a wall time of 43.72s (33.25 user, 2.94 sys) to 135MB and 9.43s (2.58 user, 1.47 sys). Running "thread apply all bt" in kgdb shows a similar RSS improvement, and wall time drops from 4.44s to 1.93s.

Reviewed by:	jhb
Sponsored by:	Backtrace I/O
kvm_aarch64.h
kvm_amd64.c
kvm_amd64.h
kvm_arm.c
kvm_arm.h
kvm_cptime.c
kvm_getcptime.3
kvm_geterr.3
kvm_getloadavg.3
kvm_getloadavg.c
kvm_getpcpu.3
kvm_getprocs.3
kvm_getswapinfo.3
kvm_getswapinfo.c
kvm_i386.c
kvm_i386.h
kvm_minidump_aarch64.c
kvm_minidump_amd64.c
kvm_minidump_arm.c
kvm_minidump_i386.c
kvm_minidump_mips.c
kvm_mips.h
kvm_native.3
kvm_nlist.3
kvm_open.3
kvm_pcpu.c
kvm_powerpc64.c
kvm_powerpc.c
kvm_private.c
kvm_private.h
kvm_proc.c
kvm_read.3
kvm_sparc64.c
kvm_sparc64.h
kvm_vnet.c
kvm.3
kvm.c
kvm.h
Makefile
Makefile.depend