Have uiomove_object_page() keep accessed pages in the active queue.
Previously, uiomove_object_page() would maintain LRU by requeuing the accessed page. This involves acquiring one of the heavily contended page queue locks. Moreover, it is unnecessarily expensive for pages in the active queue.

As of r254304 the page daemon continually performs a slow scan of the active queue, with the effect that unreferenced pages are gradually moved to the inactive queue, from which they can be reclaimed. Prior to that revision, the active queue was scanned only during shortages of free and inactive pages, meaning that unreferenced pages could get "stuck" in the queue. Thus, tmpfs was required to use the inactive queue and requeue pages in order to maintain LRU. Now that this is no longer the case, tmpfs I/O operations can use the active queue and avoid the page queue locks in most cases, instead setting PGA_REFERENCED on referenced pages to provide pseudo-LRU.

Reviewed by:	alc (previous version)
MFC after:	2 weeks
parent 5d3ddbb402
commit f30cb11686
@@ -209,12 +209,10 @@ uiomove_object_page(vm_object_t obj, size_t len, struct uio *uio)
 	}
 	vm_page_lock(m);
 	vm_page_hold(m);
-	if (m->queue == PQ_NONE) {
-		vm_page_deactivate(m);
-	} else {
-		/* Requeue to maintain LRU ordering. */
-		vm_page_requeue(m);
-	}
+	if (m->queue != PQ_ACTIVE)
+		vm_page_activate(m);
+	else
+		vm_page_reference(m);
 	vm_page_unlock(m);
 	VM_OBJECT_WUNLOCK(obj);
 	error = uiomove_fromphys(&m, offset, tlen, uio);
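As a reading aid, here is a lightly commented restatement of the new page-handling path from the hunk above. The annotations about lock behaviour reflect my reading of the vm_page API around this revision (vm_page_activate() inserts the page into the active queue under a page queue lock, while vm_page_reference() only sets the PGA_REFERENCED page flag); they are explanatory additions, not part of the committed change.

	/*
	 * Sketch: page handling in uiomove_object_page() after this change.
	 * The page lock is held throughout; a page queue lock is only
	 * needed when the page must actually be moved into the active queue.
	 */
	vm_page_lock(m);
	vm_page_hold(m);		/* keep the page from being freed during the copy */
	if (m->queue != PQ_ACTIVE)
		vm_page_activate(m);	/* enqueue on PQ_ACTIVE; takes a page queue lock */
	else
		vm_page_reference(m);	/* set PGA_REFERENCED; no page queue lock taken */
	vm_page_unlock(m);

The page daemon's slow scan of the active queue (introduced in r254304) then observes PGA_REFERENCED and keeps such pages active, providing the pseudo-LRU described in the commit message.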