Eliminate checks for a page having a NULL object in vm_pageout_scan()

and vm_pageout_page_stats().  These checks were recently introduced by
the first page locking commit, r207410, but they are not needed.  At
the same time, eliminate some redundant accesses to the page's object
field.  (These accesses should have been eliminated by r207410.)

Make the assertion in vm_page_flag_set() stricter.  Specifically, only
managed pages should have PG_WRITEABLE set.
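
For reference, the stricter condition (quoted from the vm_page.c hunk below)
only tolerates PG_WRITEABLE on a page that is managed, i.e., neither
PG_UNMANAGED nor PG_FICTITIOUS, and that is also VPO_BUSY:

    KASSERT((bits & PG_WRITEABLE) == 0 ||
        ((m->flags & (PG_UNMANAGED | PG_FICTITIOUS)) == 0 &&
        (m->oflags & VPO_BUSY) != 0), ("PG_WRITEABLE and !VPO_BUSY"));

Previously, any unmanaged or fictitious page satisfied the assertion
regardless of VPO_BUSY.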

Add a comment documenting an assertion to vm_page_flag_clear().

It has long been the case that fictitious pages have their wire count
permanently set to one.  Add comments to vm_page_wire() and
vm_page_unwire() documenting this.  Add assertions to these functions
as well.
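
Condensed from the vm_page.c hunks below, both functions now begin with the
same guard: a fictitious page is handled by an early return, and the new
assertion checks that its wire count is still one:

    if ((m->flags & PG_FICTITIOUS) != 0) {
        KASSERT(m->wire_count == 1,
            ("fictitious page %p's wire count isn't one", m));
        return;
    }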

Update the comment describing vm_page_unwire().  Much of the old
comment had little to do with vm_page_unwire(), but a lot to do with
_vm_page_deactivate().  Move relevant parts of the old comment to
_vm_page_deactivate().

Only pages that belong to an object can be paged out.  Therefore, it
is pointless for vm_page_unwire() to acquire the page queues lock and
enqueue such pages in one of the paging queues.  Generally speaking,
such pages are immediately freed after the call to vm_page_unwire().
Previously, it was the call to vm_page_free() that reacquired the page
queues lock and removed these pages from the paging queues.  Now, we
will never acquire the page queues lock for this case.  (It is also
worth noting that since both vm_page_unwire() and vm_page_free()
occurred with the page locked, the page daemon never saw the page with
its object field set to NULL.)
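
As a sketch of the pattern described above (the caller shown here is
illustrative, not part of this change), a thread holding the last wiring of
a page that has no containing object typically does:

    vm_page_lock(m);
    vm_page_unwire(m, 0);   /* m->object == NULL, so no paging queue */
    vm_page_free(m);        /* never enqueued, so nothing to dequeue */
    vm_page_unlock(m);

With this change, that sequence completes without ever acquiring the page
queues lock.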

Change the panic in vm_page_unwire() to provide a more precise message.

Reviewed by:	kib@
Author:	alc
Date:	2010-06-14 19:54:19 +00:00
Commit:	ddca4dfc3b
Parent:	f702a7e6bc

2 changed files with 50 additions and 39 deletions

sys/vm/vm_page.c

@@ -487,13 +487,12 @@ vm_page_flag_set(vm_page_t m, unsigned short bits)
 {
 
 	mtx_assert(&vm_page_queue_mtx, MA_OWNED);
 	/*
-	 * For a managed page, the PG_WRITEABLE flag can be set only if
-	 * the page is VPO_BUSY. Currently this flag is only set by
-	 * pmap_enter().
+	 * The PG_WRITEABLE flag can only be set if the page is managed and
+	 * VPO_BUSY. Currently, this flag is only set by pmap_enter().
 	 */
 	KASSERT((bits & PG_WRITEABLE) == 0 ||
-	    (m->flags & (PG_UNMANAGED | PG_FICTITIOUS)) != 0 ||
-	    (m->oflags & VPO_BUSY) != 0, ("PG_WRITEABLE and !VPO_BUSY"));
+	    ((m->flags & (PG_UNMANAGED | PG_FICTITIOUS)) == 0 &&
+	    (m->oflags & VPO_BUSY) != 0), ("PG_WRITEABLE and !VPO_BUSY"));
 	m->flags |= bits;
 }
@@ -502,6 +501,10 @@ vm_page_flag_clear(vm_page_t m, unsigned short bits)
 {
 
 	mtx_assert(&vm_page_queue_mtx, MA_OWNED);
+	/*
+	 * The PG_REFERENCED flag can only be cleared if the object
+	 * containing the page is locked.
+	 */
 	KASSERT((bits & PG_REFERENCED) == 0 || VM_OBJECT_LOCKED(m->object),
 	    ("PG_REFERENCED and !VM_OBJECT_LOCKED"));
 	m->flags &= ~bits;
@@ -1582,6 +1585,8 @@ vm_page_free_toq(vm_page_t m)
  * another map, removing it from paging queues
  * as necessary.
  *
+ * If the page is fictitious, then its wire count must remain one.
+ *
  * The page must be locked.
  * This routine may not block.
  */
@@ -1595,8 +1600,12 @@ vm_page_wire(vm_page_t m)
 	 * it is already off the queues).
 	 */
 	vm_page_lock_assert(m, MA_OWNED);
-	if (m->flags & PG_FICTITIOUS)
+	if ((m->flags & PG_FICTITIOUS) != 0) {
+		KASSERT(m->wire_count == 1,
+		    ("vm_page_wire: fictitious page %p's wire count isn't one",
+		    m));
 		return;
+	}
 	if (m->wire_count == 0) {
 		if ((m->flags & PG_UNMANAGED) == 0)
 			vm_pageq_remove(m);
@@ -1607,32 +1616,20 @@ vm_page_wire(vm_page_t m)
 }
 
 /*
- *	vm_page_unwire:
+ * vm_page_unwire:
  *
- *	Release one wiring of this page, potentially
- *	enabling it to be paged again.
+ * Release one wiring of the specified page, potentially enabling it to be
+ * paged again. If paging is enabled, then the value of the parameter
+ * "activate" determines to which queue the page is added. If "activate" is
+ * non-zero, then the page is added to the active queue. Otherwise, it is
+ * added to the inactive queue.
  *
- *	Many pages placed on the inactive queue should actually go
- *	into the cache, but it is difficult to figure out which. What
- *	we do instead, if the inactive target is well met, is to put
- *	clean pages at the head of the inactive queue instead of the tail.
- *	This will cause them to be moved to the cache more quickly and
- *	if not actively re-referenced, freed more quickly. If we just
- *	stick these pages at the end of the inactive queue, heavy filesystem
- *	meta-data accesses can cause an unnecessary paging load on memory bound
- *	processes. This optimization causes one-time-use metadata to be
- *	reused more quickly.
+ * However, unless the page belongs to an object, it is not enqueued because
+ * it cannot be paged out.
  *
- *	BUT, if we are in a low-memory situation we have no choice but to
- *	put clean pages on the cache queue.
+ * If a page is fictitious, then its wire count must alway be one.
  *
- *	A number of routines use vm_page_unwire() to guarantee that the page
- *	will go into either the inactive or active queues, and will NEVER
- *	be placed in the cache - for example, just after dirtying a page.
- *	dirty pages in the cache are not allowed.
- *
- *	The page must be locked.
- *	This routine may not block.
+ * A managed page must be locked.
  */
 void
 vm_page_unwire(vm_page_t m, int activate)
@@ -1640,13 +1637,17 @@ vm_page_unwire(vm_page_t m, int activate)
 
 	if ((m->flags & PG_UNMANAGED) == 0)
 		vm_page_lock_assert(m, MA_OWNED);
-	if (m->flags & PG_FICTITIOUS)
+	if ((m->flags & PG_FICTITIOUS) != 0) {
+		KASSERT(m->wire_count == 1,
+	    ("vm_page_unwire: fictitious page %p's wire count isn't one", m));
 		return;
+	}
 	if (m->wire_count > 0) {
 		m->wire_count--;
 		if (m->wire_count == 0) {
 			atomic_subtract_int(&cnt.v_wire_count, 1);
-			if ((m->flags & PG_UNMANAGED) != 0)
+			if ((m->flags & PG_UNMANAGED) != 0 ||
+			    m->object == NULL)
 				return;
 			vm_page_lock_queues();
 			if (activate)
@@ -1657,14 +1658,24 @@ vm_page_unwire(vm_page_t m, int activate)
 			}
 			vm_page_unlock_queues();
 		}
-	} else {
-		panic("vm_page_unwire: invalid wire count: %d", m->wire_count);
-	}
+	} else
+		panic("vm_page_unwire: page %p's wire count is zero", m);
 }
 
 /*
  * Move the specified page to the inactive queue.
  *
+ * Many pages placed on the inactive queue should actually go
+ * into the cache, but it is difficult to figure out which. What
+ * we do instead, if the inactive target is well met, is to put
+ * clean pages at the head of the inactive queue instead of the tail.
+ * This will cause them to be moved to the cache more quickly and
+ * if not actively re-referenced, reclaimed more quickly. If we just
+ * stick these pages at the end of the inactive queue, heavy filesystem
+ * meta-data accesses can cause an unnecessary paging load on memory bound
+ * processes. This optimization causes one-time-use metadata to be
+ * reused more quickly.
+ *
  * Normally athead is 0 resulting in LRU operation. athead is set
  * to 1 if we want this page to be 'as if it were placed in the cache',
  * except without unmapping it from the process address space.

sys/vm/vm_pageout.c

@@ -805,7 +805,7 @@ vm_pageout_scan(int pass)
 		/*
 		 * A held page may be undergoing I/O, so skip it.
 		 */
-		if (m->hold_count || (object = m->object) == NULL) {
+		if (m->hold_count) {
 			vm_page_unlock(m);
 			vm_page_requeue(m);
 			addl_page_shortage++;
@@ -816,6 +816,7 @@ vm_pageout_scan(int pass)
 		 * Don't mess with busy pages, keep in the front of the
 		 * queue, most likely are being paged out.
 		 */
+		object = m->object;
 		if (!VM_OBJECT_TRYLOCK(object) &&
 		    (!vm_pageout_fallback_object_lock(m, &next) ||
 		    m->hold_count != 0)) {
@@ -1127,17 +1128,16 @@ vm_pageout_scan(int pass)
 		    ("vm_pageout_scan: page %p isn't active", m));
 		next = TAILQ_NEXT(m, pageq);
-		object = m->object;
 		if ((m->flags & PG_MARKER) != 0) {
 			m = next;
 			continue;
 		}
-		if (!vm_pageout_page_lock(m, &next) ||
-		    (object = m->object) == NULL) {
+		if (!vm_pageout_page_lock(m, &next)) {
 			vm_page_unlock(m);
 			m = next;
 			continue;
 		}
+		object = m->object;
 		if (!VM_OBJECT_TRYLOCK(object) &&
 		    !vm_pageout_fallback_object_lock(m, &next)) {
 			VM_OBJECT_UNLOCK(object);
@@ -1397,12 +1397,12 @@ vm_pageout_page_stats()
 			continue;
 		}
 		vm_page_lock_assert(m, MA_NOTOWNED);
-		if (!vm_pageout_page_lock(m, &next) ||
-		    (object = m->object) == NULL) {
+		if (!vm_pageout_page_lock(m, &next)) {
 			vm_page_unlock(m);
 			m = next;
 			continue;
 		}
+		object = m->object;
 		if (!VM_OBJECT_TRYLOCK(object) &&
 		    !vm_pageout_fallback_object_lock(m, &next)) {
 			VM_OBJECT_UNLOCK(object);