Commit Graph

11 Commits

Author SHA1 Message Date
Jeff Roberson
91e31c3c08 Consistently use busy and vm_page_valid() rather than touching page bits
directly.  This improves API compliance, asserts, etc.

Reviewed by:	kib, markj
Differential Revision:	https://reviews.freebsd.org/D23283
2020-01-23 04:54:49 +00:00
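
As a minimal sketch of the pattern this commit names: the call site below is hypothetical, but vm_page_valid() and VM_PAGE_BITS_ALL are the real vm_page KPI (<vm/vm_page.h>).

    /*
     * Hypothetical call site, for illustration only.
     */
    static void
    frob_page(vm_page_t m)
    {
        /* Before: touch the page's valid bits directly. */
        /* m->valid = VM_PAGE_BITS_ALL; */

        /* After: go through the KPI, which can assert busy state. */
        vm_page_valid(m);
    }
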
Mark Johnston
fee2a2fa39 Change synchronization rules for vm_page reference counting.
There are several mechanisms by which a vm_page reference is held,
preventing the page from being freed back to the page allocator.  In
particular, holding the page's object lock is sufficient to prevent the
page from being freed; holding the busy lock or a wiring is sufficent as
well.  These references are protected by the page lock, which must
therefore be acquired for many per-page operations.  This results in
false sharing since the page locks are external to the vm_page
structures themselves and each lock protects multiple structures.

Transition to using an atomically updated per-page reference counter.
The object's reference is counted using a flag bit in the counter.  A
second flag bit is used to atomically block new references via
pmap_extract_and_hold() while removing managed mappings of a page.
Thus, the reference count of a page is guaranteed not to increase if the
page is unbusied, unmapped, and the object's write lock is held.  As
a consequence of this, the page lock no longer protects a page's
identity; operations which move pages between objects are now
synchronized solely by the objects' locks.

The vm_page_wire() and vm_page_unwire() KPIs are changed.  The former
requires that either the object lock or the busy lock is held.  The
latter no longer has a return value and may free the page if it releases
the last reference to that page.  vm_page_unwire_noq() behaves the same
as before; the caller is responsible for checking its return value and
freeing or enqueuing the page as appropriate.  vm_page_wire_mapped() is
introduced for use in pmap_extract_and_hold().  It fails if the page is
concurrently being unmapped, typically triggering a fallback to the
fault handler.  vm_page_wire() no longer requires the page lock and
vm_page_unwire() now internally acquires the page lock when releasing
the last wiring of a page (since the page lock still protects a page's
queue state).  In particular, synchronization details are no longer
leaked into the caller.

The change excises the page lock from several frequently executed code
paths.  In particular, vm_object_terminate() no longer bounces between
page locks as it releases an object's pages, and direct I/O and
sendfile(SF_NOCACHE) completions no longer require the page lock.  In
these latter cases we now get linear scalability in the common scenario
where different threads are operating on different files.

__FreeBSD_version is bumped.  The DRM ports have been updated to
accommodate the KPI changes.

Reviewed by:	jeff (earlier version)
Tested by:	gallatin (earlier version), pho
Sponsored by:	Netflix
Differential Revision:	https://reviews.freebsd.org/D20486
2019-09-09 21:32:42 +00:00
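
To make the counter layout concrete, here is a small userspace model of an atomically updated reference count that reserves two flag bits, in the spirit of the scheme described above. The PRC_* names and the struct are illustrative assumptions, not the in-tree vm_page fields; treat this as a sketch of the technique, not the committed code.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define PRC_BLOCKED  0x40000000u  /* new refs via pmap lookups blocked */
    #define PRC_OBJREF   0x80000000u  /* the owning object holds a reference */

    struct page_model {
        _Atomic uint32_t ref_count;
    };

    /* Wire the page; the caller already holds a reference (busy/object). */
    static void
    page_wire(struct page_model *m)
    {
        atomic_fetch_add(&m->ref_count, 1);
    }

    /*
     * Analogue of vm_page_wire_mapped(): acquire a reference only if new
     * references through pmap_extract_and_hold() are not blocked.  On
     * failure the caller falls back to the fault handler.
     */
    static bool
    page_wire_mapped(struct page_model *m)
    {
        uint32_t old = atomic_load(&m->ref_count);

        do {
            if (old & PRC_BLOCKED)
                return (false);
        } while (!atomic_compare_exchange_weak(&m->ref_count, &old, old + 1));
        return (true);
    }

    /* Drop a wiring; returns true when the last reference was released. */
    static bool
    page_unwire(struct page_model *m)
    {
        return (atomic_fetch_sub(&m->ref_count, 1) == 1);
    }

Because the object's reference is a flag bit in the same word as the wiring count, "no references remain" is a single atomic read, and setting the blocked bit while tearing down mappings guarantees the count cannot grow behind the pmap's back.
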
Mark Johnston
a1fa04c067 kcov depends on eventhandler.h.
MFC after:	3 days
2019-05-20 19:14:07 +00:00
Andrew Turner
feb2cc805f Check the index hasn't changed after writing the cmp entry.
If an interrupt fires while writing the cmp entry, we may have a partial
entry. Work around this by using atomic_cmpset to set the new index. If it
fails, we need to set the previous index value and try again, as the entry
may be in an inconsistent state.

This fixes messages similar to the following from syzkaller:
bad comp 224 type 2163727253

Reviewed by:	tuexen
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D19287
2019-02-25 13:15:34 +00:00
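
A sketch of that retry loop, using C11 atomics in place of the kernel's atomic_cmpset; the buffer layout (index in slot 0, four-word cmp records after it) follows the kcov format, but the function and constant names are illustrative.

    #include <stdatomic.h>
    #include <stdint.h>

    #define KCOV_CMP_WORDS 4    /* type, arg1, arg2, return address */

    static void
    write_cmp_entry(_Atomic uint64_t *buf, uint64_t type, uint64_t arg1,
        uint64_t arg2, uint64_t ret)
    {
        uint64_t idx = atomic_load(&buf[0]);

        for (;;) {
            _Atomic uint64_t *entry = &buf[idx * KCOV_CMP_WORDS + 1];

            /* These four stores may be interrupted partway through. */
            atomic_store_explicit(&entry[0], type, memory_order_relaxed);
            atomic_store_explicit(&entry[1], arg1, memory_order_relaxed);
            atomic_store_explicit(&entry[2], arg2, memory_order_relaxed);
            atomic_store_explicit(&entry[3], ret, memory_order_relaxed);

            /* Publish only if the index did not move underneath us. */
            if (atomic_compare_exchange_strong(&buf[0], &idx, idx + 1))
                break;
            /* CAS failed: idx now holds the new index; rewrite there. */
        }
    }
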
Andrew Turner
bdffe3b5bf Allow the kcov buffer to be mmapped multiple times.
After r344391 this restriction is no longer needed.

Sponsored by:	DARPA, AFRL
2019-02-21 10:11:15 +00:00
Andrew Turner
01ffedf593 Unwire the kcov buffer when freeing the info struct.
Without this, the physical memory will not be returned to the kernel.

While here, call vm_object_reference on the object when mmapping the buffer.
This removes the need for buggy tracking of whether it has been mapped.

This fixes issues where kcov could use all the system memory.

Reported by:	tuexen
Reviewed by:	kib
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D19252
2019-02-20 22:41:14 +00:00
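
A sketch of the cleanup path this implies, using the vm_page KPI of that era; the function name and structure are illustrative, not the committed kcov code.

    /*
     * Illustrative cleanup: unwire every buffer page so the physical
     * memory can be reclaimed, then drop the object reference.  At the
     * time of this commit vm_page_unwire() took a queue argument and
     * still required the page lock.
     */
    static void
    kcov_free_buffer(vm_object_t obj, size_t npages)
    {
        vm_page_t m;
        size_t i;

        VM_OBJECT_WLOCK(obj);
        for (i = 0; i < npages; i++) {
            m = vm_page_lookup(obj, i);
            vm_page_lock(m);
            vm_page_unwire(m, PQ_ACTIVE);
            vm_page_unlock(m);
        }
        VM_OBJECT_WUNLOCK(obj);
        vm_object_deallocate(obj);
    }

On the mmap side, calling vm_object_reference() means each mapping holds its own object reference, so no separate "has it been mapped" flag needs to be tracked.
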
Andrew Turner
a759a0a001 Call pmap_qenter for each page when creating the kcov buffer.
This removes the need to allocate a buffer to hold the vm_page_t objects
at the cost of extra IPIs on some architectures.

Reviewed by:	kib
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D19252
2019-02-20 22:32:28 +00:00
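
A sketch of the allocation loop this describes, close in spirit to the kcov code but illustrative: each page is mapped as soon as it is grabbed, so no temporary array of vm_page_t pointers is needed. (The 2020 commit at the top of this log later replaces the direct valid-bit assignment with vm_page_valid().)

    /*
     * Illustrative allocation loop: grab, wire, and map one page at a
     * time.  Calling pmap_qenter() per page avoids a vm_page_t array
     * but may cost a TLB-shootdown IPI per page on some architectures.
     */
    static void
    kcov_alloc_buffer(vm_object_t obj, vm_offset_t kvaddr, size_t npages)
    {
        vm_page_t m;
        size_t i;

        VM_OBJECT_WLOCK(obj);
        for (i = 0; i < npages; i++) {
            m = vm_page_grab(obj, i,
                VM_ALLOC_NOBUSY | VM_ALLOC_ZERO | VM_ALLOC_WIRED);
            m->valid = VM_PAGE_BITS_ALL;
            pmap_qenter(kvaddr + i * PAGE_SIZE, &m, 1);
        }
        VM_OBJECT_WUNLOCK(obj);
    }
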
Andrew Turner
72b66398fa Create a common function to handle freeing the kcov info struct.
Both places that may free the kcov info struct are identical. Create a new
common function to hold the code.

Sponsored by:	DARPA, AFRL
2019-02-19 17:03:34 +00:00
Andrew Turner
c50c26aa07 Fix the spelling of cov_unregister_pc.
When unregistering kcov from the coverage interface, we should use the
unregister function, not the register function.

Sponsored by:	DARPA, AFRL
2019-02-08 16:18:17 +00:00
Andrew Turner
524553f56d Extract the coverage sanitizer KPI to a new file.
This will allow multiple consumers of the coverage data to be compiled
into the kernel together. The only requirement is that only one can be
registered at any given time; however, it is expected that consumers will
register only when the coverage data is needed.

A new kernel config option COVERAGE is added. This will allow kcov to
become a module that can be loaded as needed, or compiled into the
kernel.

While here clean up the #include style a little.

Reviewed by:	kib
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D18955
2019-01-29 11:04:17 +00:00
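
One way to express a "single consumer at a time" KPI like the one described is a compare-and-swap on the active-consumer pointer. The names below are hypothetical, not the in-tree coverage interface.

    #include <errno.h>
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    struct cov_consumer {
        void (*trace_pc)(uintptr_t ret);
        void (*trace_cmp)(uint64_t type, uint64_t a, uint64_t b,
            uintptr_t ret);
    };

    static _Atomic(struct cov_consumer *) cov_active;

    /* Register a consumer; fails if another is already active. */
    static int
    cov_register(struct cov_consumer *c)
    {
        struct cov_consumer *expected = NULL;

        return (atomic_compare_exchange_strong(&cov_active, &expected, c) ?
            0 : EBUSY);
    }

    /* Unregister, but only if we are the registered consumer. */
    static void
    cov_unregister(struct cov_consumer *c)
    {
        struct cov_consumer *expected = c;

        (void)atomic_compare_exchange_strong(&cov_active, &expected, NULL);
    }
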
Andrew Turner
b3c0d957a2 Add support for the Clang Coverage Sanitizer in the kernel (KCOV).
When building with KCOV enabled, the compiler will insert function calls
to probes, allowing us to trace the execution of the kernel from userspace.
These probes are on function entry (trace-pc) and on comparison operations
(trace-cmp).

Userspace can enable the use of these probes on a single kernel thread with
an ioctl interface. It can allocate space for the probe data with
KIOSETBUFSIZE, then mmap the allocated buffer and enable tracing with
KIOENABLE, with the trace mode passed in as the int argument. When complete,
KIODISABLE is used to disable tracing.

The first item in the buffer is the number of trace events that have
happened. Userspace can write 0 to this to reset the tracing, and is
expected to do so on first use.

The format of the buffer depends on the trace mode. In PC tracing, just the
return address of the probe is stored. Under comparison tracing, the
comparison type, the two arguments, and the return address are traced. The
former method uses one entry per trace event, while the latter uses four. As
such, the modes are incompatible and only a single mode may be enabled.

KCOV is expected to help fuzz the kernel, and while in development it has
already found a number of issues. It is required for the syzkaller system
call fuzzer [1]. Other kernel fuzzers could also make use of it, either
with the current interface, or by extending it with new modes.

A man page is currently being worked on and is expected to be committed
soon; however, having the code in the kernel now is useful for other
developers.

[1] https://github.com/google/syzkaller

Submitted by:	Mitchell Horne <mhorne063@gmail.com> (Earlier version)
Reviewed by:	kib
Tested by:	tuexen
Sponsored by:	DARPA, AFRL
Sponsored by:	The FreeBSD Foundation (Mitchell Horne)
Differential Revision:	https://reviews.freebsd.org/D14599
2019-01-12 11:21:28 +00:00
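
Put together, the ioctl workflow looks roughly like the userspace sketch below. The ioctl names come from the commit message; the device path, header, mode constant, and 64-bit entry size follow the kcov(4) interface and should be verified against the man page.

    #include <sys/ioctl.h>
    #include <sys/kcov.h>
    #include <sys/mman.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NENTRIES (64 * 1024)    /* 64-bit entries, not bytes */

    int
    main(void)
    {
        volatile uint64_t *buf;
        uint64_t i, n;
        int fd;

        fd = open("/dev/kcov", O_RDWR);
        if (fd == -1)
            err(1, "open");
        if (ioctl(fd, KIOSETBUFSIZE, NENTRIES) == -1)
            err(1, "KIOSETBUFSIZE");
        buf = mmap(NULL, NENTRIES * sizeof(uint64_t),
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED)
            err(1, "mmap");

        buf[0] = 0;             /* reset the event count on first use */
        if (ioctl(fd, KIOENABLE, KCOV_MODE_TRACE_PC) == -1)
            err(1, "KIOENABLE");

        (void)getpid();         /* the system calls to trace go here */

        if (ioctl(fd, KIODISABLE) == -1)
            err(1, "KIODISABLE");

        n = buf[0];             /* first item: number of trace events */
        for (i = 0; i < n; i++)
            printf("%#jx\n", (uintmax_t)buf[i + 1]);
        return (0);
    }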