Mainly focus on files that use the BSD 2-Clause license; however, the tool I
was using mis-identified many licenses, so this was mostly a manual,
error-prone task.
The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well-known
open source licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
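For illustration, such a tag is a single comment line near the top of a
source file; a minimal sketch using the BSD-2-Clause identifier, with
placeholders standing in for the file's real copyright notice:

    /*-
     * SPDX-License-Identifier: BSD-2-Clause
     *
     * Copyright (c) <year> <copyright holder>
     *
     * [full license text remains below, unchanged; the tag is advisory only]
     */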
This API allows callers to enumerate all known pages, including any
direct map and kernel map virtual addresses, physical addresses, sizes,
offsets into the core, and the protection configured for each page.
For architectures that support direct map addresses, also generate pages
for any direct-map-only addresses that are not associated with kernel
map addresses.
Fix a page size portability issue left behind by the previous kvm page table
lookup interface.
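A hedged caller sketch of the page-walk interface described above. The entry
point (kvm_walk_pages()), the callback shape, the struct kvm_page field names,
and the callback return convention are assumptions of this sketch, not part of
the commit text; consult kvm.h for the authoritative interface.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <inttypes.h>
    #include <kvm.h>
    #include <limits.h>
    #include <stdio.h>

    /* Print one page record; a non-zero return is assumed to continue the walk. */
    static int
    print_page(struct kvm_page *p, void *arg __unused)
    {
        printf("pa %#jx kva %#jx dva %#jx len %ju off %jd prot %#x\n",
            (uintmax_t)p->kp_paddr, (uintmax_t)p->kp_kmap_vaddr,
            (uintmax_t)p->kp_dmap_vaddr, (uintmax_t)p->kp_len,
            (intmax_t)p->kp_offset, (unsigned)p->kp_prot);
        return (1);
    }

    int
    main(int argc, char **argv)
    {
        char errbuf[_POSIX2_LINE_MAX];
        kvm_t *kd;

        if (argc < 3)
            return (1);
        kd = kvm_openfiles(argv[1], argv[2], NULL, O_RDONLY, errbuf);
        if (kd == NULL)
            return (1);
        kvm_walk_pages(kd, print_page, NULL);
        kvm_close(kd);
        return (0);
    }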
Reviewed by: jhb
Sponsored by: Backtrace I/O
Differential Revision: https://reviews.freebsd.org/D12279
- Fix -Wunused warnings with *_native detection handlers by marking `kd`
__unused, except with arm/mips, where a slightly more complicated scheme
is required to handle the native case vs the non-native case.
- Fix -Wmissing-variable-declarations warnings by marking struct kvm_arch
objects static.
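A hedged sketch of both fixes for a non-arm/mips backend; the handler name,
the struct kvm_arch member, and the internal header are illustrative
placeholders rather than the actual tree contents:

    #include <sys/cdefs.h>
    #include <kvm.h>

    #include "kvm_private.h"    /* assumed to declare struct kvm_arch */

    /*
     * Whether the image is native can be decided from the compile-time
     * architecture alone, so the descriptor is unreferenced; marking it
     * __unused silences -Wunused.
     */
    static int
    _example_native(kvm_t *kd __unused)
    {
    #ifdef __amd64__
        return (1);
    #else
        return (0);
    #endif
    }

    /*
     * The arch switch object is only referenced from this file, so making
     * it static satisfies -Wmissing-variable-declarations.  (Registration
     * with the backend list happens elsewhere and is elided here.)
     */
    static struct kvm_arch kvm_example = {
        .ka_native = _example_native,
        /* ...remaining handlers elided... */
    };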
Differential Revision: D10071
MFC after: 1 week
Reviewed by: vangyzen
Tested with: WIP test code (D10024) // kgdb7121 (i386 crash/kernel on amd64)
Sponsored by: Dell EMC Isilon
Instead of using a hash table to convert physical page addresses to offsets
in the sparse page array, cache the number of bits set for each 4MB chunk of
physical pages. Upon lookup, find the nearest cached population count, then
add or subtract the number of bits set between that point and the page's PTE
bit.
Then multiply by page size and add to the sparse page map's base offset.
This replaces O(n) worst-case lookup with O(1) (plus a small number of bits
to scan in the bitmap). Also, for a 128GB system, a typical kernel core of
about 8GB will now only require ~4.5MB of RAM for this approach instead of
~48MB as with the hash table.
More concretely, /usr/sbin/crashinfo against the same core improves from a
max RSS of 188MB and wall time of 43.72s (33.25 user 2.94 sys) to 135MB and
9.43s (2.58 user 1.47 sys). Running "thread apply all bt" in kgdb has a
similar RSS improvement, and wall time drops from 4.44s to 1.93s.
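A self-contained sketch of that lookup under illustrative assumptions (4K
pages, a 64-bit-word bitmap, forward-only counting from the enclosing chunk's
cached prefix count); none of the names below are the library's own:

    #include <sys/types.h>
    #include <stdint.h>

    #define PAGE_SHIFT  12                                  /* illustrative 4K pages */
    #define CHUNK_PAGES ((4UL * 1024 * 1024) >> PAGE_SHIFT) /* pages per 4MB chunk */

    struct sparse_map {
        const uint64_t *bitmap;      /* one bit per physical page present in the core */
        const uint64_t *chunk_count; /* bits set before each 4MB chunk */
        off_t           base_off;    /* file offset of the sparse page array */
    };

    /* Map a physical address to its file offset in the sparse page array, or -1. */
    static off_t
    sparse_offset(const struct sparse_map *m, uint64_t pa)
    {
        uint64_t bit = pa >> PAGE_SHIFT;    /* bit index of this page */
        uint64_t idx, count;

        if ((m->bitmap[bit / 64] & (1ULL << (bit % 64))) == 0)
            return (-1);                    /* page not present in the core */

        /* Start from the cached population count of the enclosing chunk... */
        count = m->chunk_count[bit / CHUNK_PAGES];

        /* ...then add the bits set from the chunk start up to this page's bit. */
        for (idx = bit / CHUNK_PAGES * CHUNK_PAGES; idx + 64 <= bit; idx += 64)
            count += __builtin_popcountll(m->bitmap[idx / 64]);
        count += __builtin_popcountll(m->bitmap[idx / 64] &
            ((1ULL << (bit % 64)) - 1));

        /* Multiply by the page size and add the map's base offset. */
        return (m->base_off + (off_t)(count << PAGE_SHIFT));
    }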
Reviewed by: jhb
Sponsored by: Backtrace I/O
In particular,
- avoid dereferencing NULL pointers
- test pointers against NULL, not 0
- test for errout == NULL in the top-level functions (kvm_open, kvm_openfiles,
kvm_open2, etc)
- Replace a realloc and free on failure with reallocf
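For the last item, the before/after shape is roughly the following (the helper
and its names are placeholders); reallocf(3) frees the passed buffer itself
when the resize fails, which removes the explicit error-path free():

    #include <stdlib.h>

    /* Placeholder helper: grow a buffer, releasing it if the resize fails. */
    static char *
    grow_buffer(char *buf, size_t newsize)
    {
    #if 0   /* before: realloc() plus an explicit free() on failure */
        char *newbuf;

        newbuf = realloc(buf, newsize);
        if (newbuf == NULL) {
            free(buf);
            return (NULL);
        }
        return (newbuf);
    #else   /* after: reallocf() collapses the error path */
        return (reallocf(buf, newsize));
    #endif
    }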
Found with: devel/coccinelle
Differential Revision: https://reviews.freebsd.org/D5954
MFC after: 1 week
Reviewed by: jhb
Sponsored by: EMC / Isilon Storage Division
- Add a kvaddr_t type to represent kernel virtual addresses instead of
  unsigned long.
- Add a struct kvm_nlist which is a stripped-down version of struct nlist
  that uses kvaddr_t for n_value.
- Add a kvm_native() routine that returns true if an open kvm descriptor
is for a native kernel and memory image.
- Add a kvm_open2() function similar to kvm_openfiles(). It drops the
  unused 'swapfile' argument and adds a new function pointer argument for
  a symbol resolving function. Native kernels still use _fdnlist() from
  libc to resolve symbols if a resolver function is not supplied, but cross
  kernels require a resolver. (A usage sketch of the new calls follows this
  list.)
- Add a kvm_nlist2() function similar to kvm_nlist() except that it uses
struct kvm_nlist instead of struct nlist.
- Add a kvm_read2() function similar to kvm_read() except that it uses
kvaddr_t instead of unsigned long for the kernel virtual address.
- Add a new kvm_arch switch of routines needed by a vmcore backend.
Each backend is responsible for implementing kvm_read2() for a given
vmcore format.
- Use libelf to read headers from ELF kernels and cores (except for
powerpc cores).
- Add internal helper routines for the common page offset hash table used
by the minidump backends.
- Port all of the existing kvm backends to implement a kvm_arch switch and
to be cross-friendly by using private constants instead of ones that
vary by platform (e.g. PAGE_SIZE). Static assertions are present when
a given backend is compiled natively to ensure the private constants
match the real ones.
- Enable all of the existing vmcore backends on all platforms. This means
that libkvm on any platform should be able to perform KVA translation
and read data from a vmcore of any platform.
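A hedged usage sketch of the new calls; the resolver is a stub, the symbol
name is a placeholder, and the resolver's return convention (0 on success) is
an assumption of this sketch:

    #include <sys/types.h>
    #include <fcntl.h>
    #include <inttypes.h>
    #include <kvm.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Stub resolver for a cross (non-native) core.  A real one would look
     * the name up in the kernel's symbol table, e.g. via libelf.
     */
    static int
    resolve_symbol(const char *name __unused, kvaddr_t *addr __unused)
    {
        return (-1);            /* nothing resolved */
    }

    int
    main(int argc, char **argv)
    {
        char errbuf[_POSIX2_LINE_MAX];
        struct kvm_nlist nl[2];
        kvm_t *kd;
        long val;

        if (argc < 3)
            return (1);
        kd = kvm_open2(argv[1], argv[2], O_RDONLY, errbuf, resolve_symbol);
        if (kd == NULL) {
            fprintf(stderr, "kvm_open2: %s\n", errbuf);
            return (1);
        }
        printf("native image: %d\n", kvm_native(kd));

        /* "placeholder_symbol" stands in for a real kernel symbol name. */
        memset(nl, 0, sizeof(nl));
        nl[0].n_name = "placeholder_symbol";
        if (kvm_nlist2(kd, nl) == 0 &&
            kvm_read2(kd, nl[0].n_value, &val, sizeof(val)) ==
            (ssize_t)sizeof(val))
            printf("%#jx: %#lx\n", (uintmax_t)nl[0].n_value, val);
        kvm_close(kd);
        return (0);
    }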
Tested on: amd64, i386, sparc64 (marius)
Differential Revision: https://reviews.freebsd.org/D3341
- make WARNS=6 clean for archs w/o strict alignment requirements
- add const, ANSIfy, remove unused vars, cast types for comparison
- thanks to differing definitions of VM_MIN_ADDRESS across our archs, we
  need to trick the compiler into not complaining about signedness. We could
  either fix VM_MIN_ADDRESS to always be a simple integer or make the
  check conditional on $ARCH.
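One illustrative way to keep the comparison unsigned regardless of how
VM_MIN_ADDRESS is spelled (not necessarily the approach taken here) is to go
through an unsigned variable first:

    #include <sys/types.h>

    #ifndef VM_MIN_ADDRESS
    #define VM_MIN_ADDRESS  0   /* placeholder; some archs define it as plain 0 */
    #endif

    /*
     * Assigning the constant to an unsigned variable keeps the comparison
     * unsigned on every arch, so neither -Wsign-compare nor the
     * "comparison is always true" warning fires for va >= 0.
     */
    static int
    is_kernel_va(u_long va)
    {
        u_long minaddr = VM_MIN_ADDRESS;

        return (va >= minaddr);
    }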
Closes PRs: kern/42386, kern/83364
Reviewed by: bde