Instead of using a hash table to convert physical page addresses to offsets
in the sparse page array, cache the number of bits set for each 4MB chunk of
physical pages. Upon lookup, find the nearest cached population count, then
add or subtract the number of bits set between that point and the page's PTE
bit, multiply by the page size, and add the sparse page map's base offset.
This replaces O(n) worst-case lookup with O(1) (plus a small number of bits
to scan in the bitmap). Also, for a 128GB system, a typical kernel core of
about 8GB now requires only ~4.5MB of RAM for this structure, instead of the
~48MB needed by the hash table.
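A minimal sketch of the lookup in C, assuming hypothetical names, cumulative
per-4MB-chunk cached counts, and a scan that always runs forward from the
chunk boundary (the committed code can also round to the nearest cached count
and subtract):

    #include <sys/types.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SHIFT      12
    #define BITS_PER_WORD   64
    /* 4MB of 4KB pages: 1024 pages, i.e. 16 64-bit bitmap words per chunk. */
    #define PAGES_PER_CHUNK ((4 * 1024 * 1024) >> PAGE_SHIFT)
    #define WORDS_PER_CHUNK (PAGES_PER_CHUNK / BITS_PER_WORD)

    static off_t
    sparse_page_offset(const uint64_t *bitmap, const uint32_t *chunk_popcount,
        uint64_t pa, off_t sparse_base)
    {
            uint64_t page = pa >> PAGE_SHIFT;
            size_t chunk = page / PAGES_PER_CHUNK;
            size_t word, count;

            /* The page must be marked present to be in the sparse array. */
            if ((bitmap[page / BITS_PER_WORD] &
                (1ULL << (page % BITS_PER_WORD))) == 0)
                    return (-1);

            /* Start from the cached count of bits set before this chunk. */
            count = chunk_popcount[chunk];

            /* Add the bits set between the chunk boundary and our bit. */
            for (word = chunk * WORDS_PER_CHUNK;
                word < page / BITS_PER_WORD; word++)
                    count += __builtin_popcountll(bitmap[word]);
            count += __builtin_popcountll(bitmap[page / BITS_PER_WORD] &
                ((1ULL << (page % BITS_PER_WORD)) - 1));

            /* Index into the sparse page array, then into the file. */
            return (sparse_base + (off_t)count * ((off_t)1 << PAGE_SHIFT));
    }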
More concretely, /usr/sbin/crashinfo against the same core improves from a
max RSS of 188MB and wall time of 43.72s (33.25s user, 2.94s sys) to 135MB and
9.43s (2.58s user, 1.47s sys). Running "thread apply all bt" in kgdb shows a
similar RSS improvement, and wall time drops from 4.44s to 1.93s.
Reviewed by: jhb
Sponsored by: Backtrace I/O
Fix a number of NULL pointer handling issues in libkvm. In particular,
- avoid dereferencing NULL pointers
- test pointers against NULL, not 0
- test for errout == NULL in the top-level functions (kvm_open, kvm_openfiles,
kvm_open2, etc.)
- replace a realloc-and-free-on-failure pattern with reallocf (see the sketch
below)
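As a sketch of the reallocf change (names here are hypothetical; reallocf(3)
frees the old buffer itself when the resize fails, so the error path needs no
separate free):

    #include <stdlib.h>

    static int
    grow_buffer(char **bufp, size_t nsize)
    {
            char *nbuf;

            nbuf = reallocf(*bufp, nsize);
            if (nbuf == NULL) {
                    /* reallocf already freed the old buffer. */
                    *bufp = NULL;
                    return (-1);
            }
            *bufp = nbuf;
            return (0);
    }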
Found with: devel/coccinelle
Differential Revision: https://reviews.freebsd.org/D5954
MFC after: 1 week
Reviewed by: jhb
Sponsored by: EMC / Isilon Storage Division
- Add a kvaddr_t type to represent kernel virtual addresses instead of
unsigned long.
- Add a struct kvm_nlist which is a stripped down version of struct nlist
that uses kvaddr_t for n_value.
- Add a kvm_native() routine that returns true if an open kvm descriptor
is for a native kernel and memory image.
- Add a kvm_open2() function similar to kvm_openfiles(). It drops the
unused 'swapfile' argument and adds a new function pointer argument for
a symbol resolving function. Native kernels still use _fdnlist() from
libc to resolve symbols if a resolver function is not supplied, but cross
kernels require a resolver. (A usage sketch of the new interfaces follows
this list.)
- Add a kvm_nlist2() function similar to kvm_nlist() except that it uses
struct kvm_nlist instead of struct nlist.
- Add a kvm_read2() function similar to kvm_read() except that it uses
kvaddr_t instead of unsigned long for the kernel virtual address.
- Add a new kvm_arch switch of routines needed by a vmcore backend.
Each backend is responsible for implementing kvm_read2() for a given
vmcore format.
- Use libelf to read headers from ELF kernels and cores (except for
powerpc cores).
- Add internal helper routines for the common page offset hash table used
by the minidump backends.
- Port all of the existing kvm backends to implement a kvm_arch switch and
to be cross-friendly by using private constants instead of ones that
vary by platform (e.g. PAGE_SIZE). Static assertions are present when
a given backend is compiled natively to ensure the private constants
match the real ones.
- Enable all of the existing vmcore backends on all platforms. This means
that libkvm on any platform should be able to perform KVA translation
and read data from a vmcore of any platform.
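For illustration, a minimal consumer of the new interfaces might look like
the sketch below. The 'ticks' symbol and the error handling are illustrative
only, and a cross vmcore would need a real resolver in place of NULL:

    #include <sys/types.h>
    #include <err.h>
    #include <fcntl.h>
    #include <kvm.h>
    #include <limits.h>
    #include <stdio.h>

    int
    main(int argc, char **argv)
    {
            char errbuf[_POSIX2_LINE_MAX];
            struct kvm_nlist nl[] = {
                    { .n_name = "ticks" },
                    { .n_name = NULL },
            };
            kvm_t *kd;
            int ticks;

            if (argc != 3)
                    errx(1, "usage: %s kernel vmcore", argv[0]);

            /* A NULL resolver suffices for native kernels; cross
             * kernels must supply one. */
            kd = kvm_open2(argv[1], argv[2], O_RDONLY, errbuf, NULL);
            if (kd == NULL)
                    errx(1, "kvm_open2: %s", errbuf);

            if (!kvm_native(kd))
                    printf("cross vmcore; KVA translation still works\n");

            if (kvm_nlist2(kd, nl) != 0)
                    errx(1, "kvm_nlist2: %s", kvm_geterr(kd));

            /* n_value is a kvaddr_t, wide enough for any target. */
            if (kvm_read2(kd, nl[0].n_value, &ticks, sizeof(ticks)) !=
                (ssize_t)sizeof(ticks))
                    errx(1, "kvm_read2: %s", kvm_geterr(kd));

            printf("ticks = %d\n", ticks);
            kvm_close(kd);
            return (0);
    }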
Tested on: amd64, i386, sparc64 (marius)
Differential Revision: https://reviews.freebsd.org/D3341
- make WARNS=6 clean for archs w/o strict alignment requirements
- add const, ANSIfy, remove unused vars, cast types for comparison
- thanks to differing definitions of VM_MIN_ADDRESS across our archs, we
need to trick the compiler into not complaining about signedness (see the
sketch below). We could either fix VM_MIN_ADDRESS to always be a simple
integer or make the check conditional on $ARCH.
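A small sketch of the signedness trick, with a hypothetical helper (the
include is illustrative; VM_MIN_ADDRESS comes from the arch's vmparam.h):

    #include <sys/types.h>
    #include <machine/vmparam.h>    /* VM_MIN_ADDRESS (illustrative) */

    /*
     * On some archs VM_MIN_ADDRESS expands to a plain signed integer
     * constant, so comparing an unsigned address against it trips
     * -Wsign-compare at WARNS=6.  Casting the macro to u_long keeps
     * the comparison unsigned without changing its result.
     */
    static int
    above_vm_min(u_long va)
    {
            return (va >= (u_long)VM_MIN_ADDRESS);
    }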
Closes PRs: kern/42386, kern/83364
Reviewed by: bde
The minidump format is far more convenient for libkvm to work with because
of the page table block at the beginning. As a result, the MD code is
smaller.
libkvm will automatically detect old-style dumps vs minidumps on i386 and
amd64.
libkvm will handle i386 PAE and non-PAE modes. There is a PAE flag in
the i386 minidump header to signal the width of the entries in the
page table block.
Other convenient values are also present, such as kernbase and the direct
map addresses on amd64.
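For illustration, the amd64 minidump header is roughly of the following
shape (field names follow sys/amd64/include/minidump.h, but treat the exact
layout as an assumption, since it varies by dump version; the i386 variant
carries a PAE flag in place of the direct map bounds):

    #include <stdint.h>

    struct minidumphdr {
            char            magic[24];      /* identifies a minidump */
            uint32_t        version;
            uint32_t        msgbufsize;     /* kernel message buffer size */
            uint32_t        bitmapsize;     /* dumped-page bitmap size */
            uint32_t        pmapsize;       /* page table block size */
            uint64_t        kernbase;       /* base of the kernel map */
            uint64_t        dmapbase;       /* direct map start */
            uint64_t        dmapend;        /* direct map end */
    };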