Consider a scenario in which one processor, call it Pt, is performing

vm_object_terminate() on a device-backed object at the same time that
another processor, call it Pa, is performing dev_pager_alloc() on the
same device.  The problem is that vm_pager_object_lookup() should not be
allowed to return a doomed object, i.e., an object with OBJ_DEAD set,
but it does.  In detail, the unfortunate sequence of events is: Pt in
vm_object_terminate() holds the doomed object's lock and sets OBJ_DEAD
on the object.  Pa in dev_pager_alloc() holds dev_pager_sx and calls
vm_pager_object_lookup(), which returns the doomed object.  Next, Pa
calls vm_object_reference(), which requires the doomed object's lock, so
Pa waits for Pt to release the doomed object's lock.  Pt proceeds to the
point in vm_object_terminate() where it releases the doomed object's
lock.  Pa is now able to complete vm_object_reference() because it can
now complete the acquisition of the doomed object's lock.  So, now the
doomed object has a reference count of one!  Pa releases dev_pager_sx
and returns the doomed object from dev_pager_alloc().  Pt now acquires
dev_pager_mtx, removes the doomed object from dev_pager_object_list,
releases dev_pager_mtx, and finally calls uma_zfree() with the doomed
object.  However, the doomed object is still in use by Pa.

Repeating my key point, vm_pager_object_lookup() must not return a
doomed object.  Moreover, the test for the object's state, i.e.,
doomed or not, and the increment of the object's reference count
should be carried out atomically.

Reviewed by:	kib
Approved by:	re (kensmith)
MFC after:	3 weeks
alc committed 2007-08-05 21:04:32 +00:00
commit a1f1ba2d56 (parent 4ee3bb0e27)
4 changed files with 20 additions and 18 deletions

sys/vm/device_pager.c

@@ -160,10 +160,6 @@ dev_pager_alloc(void *handle, vm_ooffset_t size, vm_prot_t prot, vm_ooffset_t fo
 		TAILQ_INSERT_TAIL(&dev_pager_object_list, object, pager_object_list);
 		mtx_unlock(&dev_pager_mtx);
 	} else {
-		/*
-		 * Gain a reference to the object.
-		 */
-		vm_object_reference(object);
 		if (pindex > object->size)
 			object->size = pindex;
 	}

sys/vm/phys_pager.c

@@ -101,10 +101,6 @@ phys_pager_alloc(void *handle, vm_ooffset_t size, vm_prot_t prot,
 		    pager_object_list);
 		mtx_unlock(&phys_pager_mtx);
 	} else {
-		/*
-		 * Gain a reference to the object.
-		 */
-		vm_object_reference(object);
 		if (pindex > object->size)
 			object->size = pindex;
 	}

sys/vm/swap_pager.c

@@ -459,9 +459,7 @@ swap_pager_alloc(void *handle, vm_ooffset_t size, vm_prot_t prot,
 	sx_xlock(&sw_alloc_sx);
 	object = vm_pager_object_lookup(NOBJLIST(handle), handle);
-	if (object != NULL) {
-		vm_object_reference(object);
-	} else {
+	if (object == NULL) {
 		object = vm_object_allocate(OBJT_DEFAULT, pindex);
 		object->handle = handle;

sys/vm/vm_pager.c

@@ -261,17 +261,29 @@ vm_pager_deallocate(object)
  *	vm_pager_has_page() - inline, see vm/vm_pager.h
  */

+/*
+ * Search the specified pager object list for an object with the
+ * specified handle.  If an object with the specified handle is found,
+ * increase its reference count and return it.  Otherwise, return NULL.
+ *
+ * The pager object list must be locked.
+ */
 vm_object_t
-vm_pager_object_lookup(pg_list, handle)
-	struct pagerlst *pg_list;
-	void *handle;
+vm_pager_object_lookup(struct pagerlst *pg_list, void *handle)
 {
 	vm_object_t object;

-	TAILQ_FOREACH(object, pg_list, pager_object_list)
-		if (object->handle == handle)
-			return (object);
-	return (NULL);
+	TAILQ_FOREACH(object, pg_list, pager_object_list) {
+		VM_OBJECT_LOCK(object);
+		if (object->handle == handle &&
+		    (object->flags & OBJ_DEAD) == 0) {
+			vm_object_reference_locked(object);
+			VM_OBJECT_UNLOCK(object);
+			break;
+		}
+		VM_OBJECT_UNLOCK(object);
+	}
+	return (object);
 }

 /*