Convert vm_fault_hold()'s Boolean variables that are only used
internally to "bool". Add a comment describing why the one
remaining "boolean_t" was not converted.
Reviewed by: kib
MFC after: 8 days
pager error, leave the page to the wire owner. E.g. the page might be
a part of the invalidated buffer.
Reported and tested by: pho
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
Differential revision: https://reviews.freebsd.org/D8197
Suppose that we have an exclusively busy page, and a thread which can
accept a shared-busy page. In this case, the typical code waiting for
the page xbusy state to pass is:
again:
	VM_OBJECT_WLOCK(object);
	...
	if (vm_page_xbusied(m)) {
		vm_page_lock(m);
		VM_OBJECT_WUNLOCK(object);    <---1
		vm_page_busy_sleep(m, "vmopax");
		goto again;
	}
Suppose that the xbusy state owner locked the object, unbusied the
page, and unlocked the object after we reached line [1], but before we
executed the load of the busy_lock word in vm_page_busy_sleep(). If it
happened that there were still no waiters recorded for the busy state,
the xbusy owner did not acquire the page lock, so it proceeded.
Moreover, suppose that some other thread happened to share-busy the
page after the xbusy state was relinquished but before m->busy_lock
was read in vm_page_busy_sleep(). Again, that thread needs only the
vm_object lock to proceed. Then vm_page_busy_sleep() reads a busy_lock
value equal to VPB_SHARERS_WORD(1).
In this case, all tests in vm_page_busy_sleep(9) pass and we go to
sleep, despite the page being share-busied.
Update the check for m->busy_lock == VPB_UNBUSIED in
vm_page_busy_sleep(9) to also accept the shared-busy state when we
wait only for the xbusy state to pass.
Merge sequential if()s with the same 'then' clause in
vm_page_busy_sleep().
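A minimal sketch of the resulting test, assuming the function takes a
boolean argument (named nonshared here for illustration) that is true
when the caller waits only for the xbusy state; the single merged
condition replaces what were previously sequential if()s:

	void
	vm_page_busy_sleep(vm_page_t m, const char *wmesg, bool nonshared)
	{
		u_int x;

		vm_page_assert_locked(m);

		x = m->busy_lock;
		if (x == VPB_UNBUSIED ||
		    (nonshared && (x & VPB_BIT_SHARED) != 0) ||
		    ((x & VPB_BIT_WAITERS) == 0 &&
		    !atomic_cmpset_int(&m->busy_lock, x,
		    x | VPB_BIT_WAITERS))) {
			/*
			 * Unbusied, acceptably share-busied, or we
			 * raced with an unbusy: do not sleep.
			 */
			vm_page_unlock(m);
			return;
		}
		msleep(m, vm_page_lockptr(m), PVM | PDROP, wmesg, 0);
	}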
Note that the current code does not share-busy pages from parallel
threads; right now, the only way to have more than one sbusy owner is
to recurse.
Reported and tested by: pho (previous version)
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D8196
waiters. Otherwise, owners of the shared-busy state are left blocked
and might get into a deadlock.
Note that the vm_page_busy_downgrade() function is not used in the
tree right now.
Reported and tested by: pho (previous version)
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D8195
by vm_pageout_scan() local to vm_pageout_worker(). There is no reason
to store the pass in the NUMA domain structure.
Reviewed by: kib
MFC after: 3 weeks
target was met.
Previously, vm_pageout_worker() itself checked the length of the free page
queues to determine whether vm_pageout_scan(pass >= 1)'s inactive queue scan
freed enough pages to meet the free page target. Specifically,
vm_pageout_worker() used vm_paging_needed(). The trouble with
vm_paging_needed() is that it compares the length of the free page queues to
the wakeup threshold for the page daemon, which is much lower than the free
page target. Consequently, vm_pageout_worker() could conclude that the
inactive queue scan succeeded in meeting its free page target when in fact
it did not; and rather than immediately triggering an all-out laundering
pass over the inactive queue, vm_pageout_worker() would go back to sleep
waiting for the free page count to fall below the page daemon wakeup
threshold again, at which point it would perform another limited
(pass == 1) scan over the inactive queue.
Changing vm_pageout_worker() to use vm_page_count_target() instead of
vm_paging_needed() won't work because any page allocations that happen
concurrently with the inactive queue scan will result in the free page count
being below the target at the end of a successful scan. Instead, having
vm_pageout_scan() return a value indicating success or failure is the most
straightforward fix.
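A minimal sketch of the resulting flow in vm_pageout_worker(), with
unrelated details elided (the variable name target_met is
illustrative):

	/*
	 * vm_pageout_scan() now reports whether the inactive queue
	 * scan met the free page target, so the worker no longer
	 * re-derives that from vm_paging_needed().
	 */
	target_met = vm_pageout_scan(domain, pass);
	if (target_met)
		pass = 0;	/* success: sleep until the next wakeup */
	else
		pass++;		/* failure: escalate to a harder scan */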
Reviewed by: kib, markj
MFC after: 3 weeks
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D8111
On machines with just the wrong amount of physical memory (enough to
have a lot of bufs, but not enough to use VM_FREELIST_DMA32), it is
possible for 32-bit address-limited devices to have little to no
memory left when attaching, due to potentially large vfs bio configs
consuming all memory below 4GB not protected by VM_FREELIST_ISADMA.
This causes the 32-bit devices to allocate from VM_FREELIST_ISADMA,
leaving that freelist empty when ISA devices need DMAable memory.
Rather than decrease VM_DMA32_NPAGES_THRESHOLD, use the time-honored
technique of putting initially allocated kernel data structures at the
end (or at least not at the beginning) of memory.
Since this allocation is made at boot, is wired, and is never freed,
the system would otherwise be low on 32-bit (and ISA) DMAable memory
forever, which makes it a good candidate to move above 4GB.
While here, remove an unneeded round_page() from kmem_malloc's size
argument as suggested by alc. The first thing kmem_malloc() does
is a round_page(size), so there is no need to do it before the call.
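For illustration, a sketch of the call before and after (the arena,
size, and flags names are placeholders):

	/* Before: the rounding is redundant. */
	addr = kmem_malloc(arena, round_page(size), flags);

	/* After: kmem_malloc() itself starts with size = round_page(size). */
	addr = kmem_malloc(arena, size, flags);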
Reviewed by: alc
Sponsored by: Netflix
Move PMAP_TS_REFERENCED_MAX out of the various pmap implementations and
into vm/pmap.h, and describe what its purpose is. Eliminate the archaic
"XXX" comment about its value. I don't believe that its exact value, e.g.,
5 versus 6, matters.
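A sketch of the centralized definition, with the comment paraphrasing
the rationale above (the value 5 reflects the historical choice):

	/*
	 * An upper bound on the number of reference bits that
	 * pmap_ts_referenced() is allowed to clear in one call; a
	 * sample of this size is sufficient for the page daemon, and
	 * clearing every bit would cost more than it is worth.
	 */
	#define	PMAP_TS_REFERENCED_MAX	5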
Update the arm64 and riscv pmap implementations of pmap_ts_referenced()
to opportunistically update the page's dirty field.
On amd64, use the PDE value already cached in a local variable rather than
dereferencing a pointer again and again.
Reviewed by: kib, markj
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D7836
The pager getpages interface allows the caller to bound the number of
readahead and readbehind pages, and vm_fault_hold() makes use of this
feature. These bounds were ignored after r305056, causing the swap pager
to potentially page in more than the specified number of pages.
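A minimal sketch of the intended clamping in swap_pager_getpages(),
assuming caller-supplied rbehind and rahead pointers as in the
getpages interface (local names are illustrative):

	/*
	 * Respect the caller's readbehind/readahead bounds instead of
	 * paging in the entire contiguous swap run.
	 */
	if (rbehind != NULL)
		maxbehind = imin(maxbehind, *rbehind);
	if (rahead != NULL)
		maxahead = imin(maxahead, *rahead);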
Reported and reviewed by: alc
X-MFC with: r305056
Idle page zeroing has been disabled by default on all architectures since
r170816 and has some bugs that make it seemingly unusable. Specifically,
the idle-priority pagezero thread exacerbates contention for the free page
lock and, in non-preemptive kernels, yields the CPU without releasing that
lock. The
pagezero thread also does not behave correctly when superpage reservations
are enabled: its target is a function of v_free_count, which includes
reserved-but-free pages, but it is only able to zero pages belonging to the
physical memory allocator.
Reviewed by: alc, imp, kib
Differential Revision: https://reviews.freebsd.org/D7714
The swap_pager_swapoff() function uses a trylock for the object lock
before pagein, which means that either I/O to md(4) over swap or
intensive page faults over swap pager objects might prevent swapoff()
from making any progress. Then the retry < 100 check fails and the
machine panics.
If the trylock fails, acquire the object lock in a blocking manner and
restart the hash bucket walk. Keep the retry logic for now.
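A sketch of the pattern, with the bucket walk itself elided (the
restart label and lock names follow the description above and are
illustrative; the blocking acquire relies on the type-stability of
swap pager vm objects):

	if (!VM_OBJECT_TRYWLOCK(object)) {
		/*
		 * Wait for the current owner by acquiring the lock
		 * in a blocking fashion, then restart the bucket
		 * walk, since it may have changed while we slept.
		 */
		mtx_unlock(&swhash_mtx);
		VM_OBJECT_WLOCK(object);	/* may sleep */
		VM_OBJECT_WUNLOCK(object);
		mtx_lock(&swhash_mtx);
		goto restart;
	}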
Reported and tested by: pho
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
Differential revision: https://reviews.freebsd.org/D7688
The removal of vm_fault_additional_pages() meant that a hard fault on
a swap-backed page would result in only that page being read in. This
change implements readahead and readbehind for the swap pager in
swap_pager_getpages(). swap_pager_haspage() is modified to return the
largest contiguous non-resident range of pages containing the requested
range.
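A sketch of the extended contract (the before/after output parameters
are the standard pager haspage convention):

	/*
	 * On success, *before and *after are set to the number of
	 * contiguous non-resident swap-backed pages immediately
	 * preceding and following pindex, so the caller can size a
	 * single readbehind/readahead request.
	 */
	boolean_t
	swap_pager_haspage(vm_object_t object, vm_pindex_t pindex,
	    int *before, int *after);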
Reviewed by: alc, kib
Tested by: pho
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D7677
In particular, fix factual, grammatical, and spelling errors in various
comments, and remove comments that are out of place in this function.
Reviewed by: kib, markj
MFC after: 3 weeks
Sponsored by: EMC / Isilon Storage Division
Differential Revision: https://reviews.freebsd.org/D7410
to r254304, we had separate functions for reclamation and laundering
(vm_pageout_scan) versus updating usage information, i.e., "reference
bits", on active pages (vm_pageout_page_stats), and we only performed
vm_req_vmdaemon(VM_SWAP_IDLE) if vm_pages_needed was true. However, since
r254303, if vm_swap_idle_enabled was "1", we have performed
vm_req_vmdaemon(VM_SWAP_IDLE) regardless of whether we are short of free
pages. This was unintended and too aggressive, so I suspect no one uses
this feature. With this change, we restore the historical behavior and
only perform vm_req_vmdaemon(VM_SWAP_IDLE) when we are short of free
pages.
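A minimal sketch of the restored gating, mirroring the historical
behavior described above (the vm_pages_needed check stands in for
"short of free pages"):

	/* Only request idle swapout when we are short of free pages. */
	if (vm_pages_needed && vm_swap_idle_enabled)
		vm_req_vmdaemon(VM_SWAP_IDLE);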
Reviewed by: kib, markj
In particular, swapongeom_ev() needed event thread context when swap
pager configuration was performed under Giant, and geom asserted that
Giant is not owned. Now both of those reasons have gone away.
On the other hand, note that swapgeom_release() is called from the
bio_done context, and a possible close cannot be performed inline.
Also fix some minor issues. The swapgeom() function does not use its
td argument, so remove it. Recheck that the vnode passed is still VCHR
and not reclaimed after the lock.
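A sketch of the recheck after relocking, assuming VI_DOOMED still
marks reclaimed vnodes and a hypothetical locked helper:

	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
	if (vp->v_type != VCHR || (vp->v_iflag & VI_DOOMED) != 0) {
		/* The vnode was reclaimed while we slept; bail out. */
		error = ENOENT;
	} else {
		error = swapongeom_locked(vp->v_rdev, vp);
	}
	VOP_UNLOCK(vp, 0);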
Reviewed by: mav
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
It is only needed when removing a full bucket from the per-CPU cache. The
bucket cache (uz_buckets) is protected by the zone mutex and thus the
critical section can be released before inserting into that list.
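A sketch of the reordering in uma_zfree_arg(), based on the
description above (surrounding logic elided):

	/* The full bucket is out of the per-CPU cache... */
	critical_exit();	/* ...so per-CPU state is done with. */

	/*
	 * uz_buckets is protected by the zone lock, not the critical
	 * section, so the insert may happen after critical_exit().
	 */
	ZONE_LOCK(zone);
	LIST_INSERT_HEAD(&zone->uz_buckets, bucket, ub_link);
	ZONE_UNLOCK(zone);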
MFC after: 1 week
It's a threshold for v_free_count, which is of type u_int. This also lets
us get rid of a cast in vm_paging_needed().
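For illustration, a sketch of the resulting declaration and the
comparison it simplifies (a paraphrase, not the exact code):

	static u_int vm_pageout_wakeup_thresh;	/* matches v_free_count */

	static inline int
	vm_paging_needed(void)
	{
		/* Both operands are u_int: no cast required. */
		return (vm_cnt.v_free_count < vm_pageout_wakeup_thresh);
	}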
Reviewed by: alc
MFC after: 1 week
optimizations into two distinct pieces. The first piece consists of the
code that should only be performed once per page fault and requires the map
to be locked. The second piece consists of the code that should be
performed each time a pager is called on an object in the shadow chain.
(This second piece expects the map to be unlocked.)
Previously, the entire implementation could be executed multiple times.
Moreover, the second and subsequent executions would occur with the map
unlocked. Usually, the ensuing unsynchronized accesses to the map were
harmless because the map was not changing. Nonetheless, it was possible for
a use-after-free error to occur, where vm_fault() wrote to a freed map
entry. This change corrects that problem.
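A structural sketch of the split, using hypothetical helper names for
pieces that the actual code keeps inline in vm_fault():

	/* Piece 1: once per fault, with the vm_map locked. */
	vm_fault_read_hints(&fs, &behind, &ahead);

	for (;;) {	/* walk the shadow chain */
		/*
		 * Piece 2: before each pager call, with the map
		 * unlocked; no map accesses happen here.
		 */
		vm_fault_clamp_readahead(fs.object, &behind, &ahead);
		/* ... call the pager ... */
	}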
Reported by: avg
Reviewed by: kib
MFC after: 3 days
Sponsored by: EMC / Isilon Storage Division
if vnode is VMIO. For VMIO vnodes, set BO_DEAD in vm_object_terminate().
The vnode_destroy_vobject() function, when calling into
vm_object_terminate(), must be able to flush buffers. The purpose of
BO_DEAD is to quickly destroy buffers on write when the underlying
vnode is no longer operable (one example is the devfs node after its
geom is gone). Setting BO_DEAD for a vnode being reclaimed before the
object is terminated is premature, and results in an inability to
flush buffers with live SU dependencies from vinvalbuf() in
vm_object_terminate().
Reported by: David Cross <dcrosstech@gmail.com>
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
audit trail. This was not required for Common Criteria auditing
(which requires only that the intent to read or write be audited
at the time of open(2)), but is useful for contemporary live
analysis and forensics.
MFC after: 3 days
Sponsored by: DARPA, AFRL
read(2), write(2), dup(2), and mmap(2). This auditing is not
required by the Common Criteria (and hence was not being
performed), but is valuable in both contemporary live analysis
and forensic use cases.
MFC after: 3 days
Sponsored by: DARPA, AFRL
vm_offset_t. (This field is used to detect sequential access to the virtual
address range represented by the map entry.) There are three reasons to
make this change. First, a vm_offset_t is smaller on 32-bit architectures.
Consequently, a struct vm_map_entry is now smaller on 32-bit architectures.
Second, a vm_offset_t can be written atomically, whereas it may not be
possible to write a vm_pindex_t atomically on a 32-bit architecture. Third,
using a vm_pindex_t makes the next_read field dependent on which object in
the shadow chain is being read from.
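A sketch of the resulting field, per the description above (comment
paraphrased):

	struct vm_map_entry {
		/* ... */
		vm_offset_t next_read;	/* vaddr of the next sequential
					   read; a vm_offset_t can be
					   written atomically even on
					   32-bit architectures */
		/* ... */
	};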
Replace an "XXX" comment.
Reviewed by: kib
Approved by: re (gjb)
Sponsored by: EMC / Isilon Storage Division