If vm_page_grab() allocates a new page, the page is not inserted into
a page queue even when the allocation is not wired.  It is the
responsibility of the vm_page_grab() caller to ensure that the page
does not end up on the vm_object queue but not on the pagedaemon
queue, which would effectively create an unpageable unwired page.
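
For illustration, a minimal sketch of the caller pattern this implies
(not part of the commit; the helper name grab_and_activate and the use
of VM_ALLOC_NORMAL are assumptions, and the header list is abbreviated):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/rwlock.h>

#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>

/*
 * Hypothetical caller of vm_page_grab().  An unwired page returned by
 * vm_page_grab() is on no page queue, so the caller must queue it
 * (here via vm_page_activate()) before dropping the object lock;
 * otherwise the page would be unpageable while also unwired.
 */
static vm_page_t
grab_and_activate(vm_object_t object, vm_pindex_t pindex)
{
	vm_page_t m;

	VM_OBJECT_WLOCK(object);
	m = vm_page_grab(object, pindex, VM_ALLOC_NORMAL);
	/* The grabbed page is exclusive-busy; fill or validate it here. */
	vm_page_xunbusy(m);
	vm_page_lock(m);
	vm_page_activate(m);	/* place the page on the active queue */
	vm_page_unlock(m);
	VM_OBJECT_WUNLOCK(object);
	return (m);
}

After this change, exec_map_first_page() and vm_imgact_hold_page()
follow this shape.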

In exec_map_first_page() and vm_imgact_hold_page(), activate the page
immediately after unbusying it, to avoid a leak.

In uiomove_object_page(), deactivate the page before the object is
unlocked.  There is no leak, since the page was deactivated after
uiomove_fromphys() finished.  But allowing a non-queued, non-wired
page on the queue of an unlocked object makes it impossible to assert
that such a leak does not happen in other places.

Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week

Author:	Konstantin Belousov
Date:	2014-08-13 05:44:08 +00:00
Commit:	70978c93b8 (parent 284f2cf8f7)
3 changed files with 8 additions and 6 deletions

@@ -993,6 +993,7 @@ exec_map_first_page(imgp)
 	vm_page_xunbusy(ma[0]);
 	vm_page_lock(ma[0]);
 	vm_page_hold(ma[0]);
+	vm_page_activate(ma[0]);
 	vm_page_unlock(ma[0]);
 	VM_OBJECT_WUNLOCK(object);

@@ -197,6 +197,12 @@ uiomove_object_page(vm_object_t obj, size_t len, struct uio *uio)
 	vm_page_xunbusy(m);
 	vm_page_lock(m);
 	vm_page_hold(m);
+	if (m->queue == PQ_NONE) {
+		vm_page_deactivate(m);
+	} else {
+		/* Requeue to maintain LRU ordering. */
+		vm_page_requeue(m);
+	}
 	vm_page_unlock(m);
 	VM_OBJECT_WUNLOCK(obj);
 	error = uiomove_fromphys(&m, offset, tlen, uio);
@@ -208,12 +214,6 @@ uiomove_object_page(vm_object_t obj, size_t len, struct uio *uio)
 	}
 	vm_page_lock(m);
 	vm_page_unhold(m);
-	if (m->queue == PQ_NONE) {
-		vm_page_deactivate(m);
-	} else {
-		/* Requeue to maintain LRU ordering. */
-		vm_page_requeue(m);
-	}
 	vm_page_unlock(m);
 	return (error);

@@ -251,6 +251,7 @@ vm_imgact_hold_page(vm_object_t object, vm_ooffset_t offset)
 	vm_page_xunbusy(m);
 	vm_page_lock(m);
 	vm_page_hold(m);
+	vm_page_activate(m);
 	vm_page_unlock(m);
 out:
 	VM_OBJECT_WUNLOCK(object);