When vm_fault_copy_entry() is called from vm_map_protect() for a wired
entry and upgrades the entry permissions from read-only to read-write,
we must allow the copy loop to search for the source pages in the
backing objects, as is done when forking a read-only wired entry.  For
the fork case, this behaviour is enabled by the src_readonly boolean,
which in fact only serves to assert that the read-write case provides
all source pages in the top-level object.
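
For context, the vm_map_protect() path in question looks roughly like
the fragment below.  This is only a simplified sketch of the caller,
with local names (map, entry, old_prot) abbreviated, and is not part
of this change: when a user-wired entry gains write permission, the
pages are copied immediately rather than left to lazy copy-on-write
faults.

	/* Sketch: vm_map_protect() upgrading a user-wired entry in place. */
	if ((entry->eflags & MAP_ENTRY_USER_WIRED) != 0 &&
	    (entry->protection & VM_PROT_WRITE) != 0 &&
	    (old_prot & VM_PROT_WRITE) == 0)
		/* dst_entry == src_entry marks the in-place upgrade. */
		vm_fault_copy_entry(map, map, entry, entry, NULL);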

Eliminate the src_readonly variable.  Allow the copy loop to look into
the backing objects, and add explicit asserts to ensure that only the
read-only and upgrade cases actually do so.

Expand comments.  Change the panic call into an assert.

Reported by:	markj
Tested by:	markj, pho (previous version)
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Author:	kib
Date:	2014-04-27 05:19:01 +00:00
Commit:	c36743d23f
Parent:	e4853bbc44

@@ -1240,7 +1240,7 @@ vm_fault_copy_entry(vm_map_t dst_map, vm_map_t src_map,
 	vm_offset_t vaddr;
 	vm_page_t dst_m;
 	vm_page_t src_m;
-	boolean_t src_readonly, upgrade;
+	boolean_t upgrade;
 
 #ifdef lint
 	src_map++;
@@ -1250,7 +1250,6 @@ vm_fault_copy_entry(vm_map_t dst_map, vm_map_t src_map,
 
 	src_object = src_entry->object.vm_object;
 	src_pindex = OFF_TO_IDX(src_entry->offset);
-	src_readonly = (src_entry->protection & VM_PROT_WRITE) == 0;
 
 	/*
 	 * Create the top-level object for the destination entry. (Doesn't
@@ -1321,25 +1320,33 @@ vm_fault_copy_entry(vm_map_t dst_map, vm_map_t src_map,
 		/*
 		 * Find the page in the source object, and copy it in.
-		 * (Because the source is wired down, the page will be in
-		 * memory.)
+		 * Because the source is wired down, the page will be
+		 * in memory.
 		 */
 		VM_OBJECT_RLOCK(src_object);
 		object = src_object;
 		pindex = src_pindex + dst_pindex;
 		while ((src_m = vm_page_lookup(object, pindex)) == NULL &&
-		    src_readonly &&
 		    (backing_object = object->backing_object) != NULL) {
 			/*
-			 * Allow fallback to backing objects if we are reading.
+			 * Unless the source mapping is read-only or
+			 * it is presently being upgraded from
+			 * read-only, the first object in the shadow
+			 * chain should provide all of the pages. In
+			 * other words, this loop body should never be
+			 * executed when the source mapping is already
+			 * read/write.
 			 */
+			KASSERT((src_entry->protection & VM_PROT_WRITE) == 0 ||
+			    upgrade,
+			    ("vm_fault_copy_entry: main object missing page"));
+
 			VM_OBJECT_RLOCK(backing_object);
 			pindex += OFF_TO_IDX(object->backing_object_offset);
 			VM_OBJECT_RUNLOCK(object);
 			object = backing_object;
 		}
-		if (src_m == NULL)
-			panic("vm_fault_copy_wired: page missing");
+		KASSERT(src_m != NULL, ("vm_fault_copy_entry: page missing"));
 		pmap_copy_page(src_m, dst_m);
 		VM_OBJECT_RUNLOCK(object);
 		dst_m->valid = VM_PAGE_BITS_ALL;
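
One detail not visible in this excerpt: the upgrade flag tested by the
new KASSERT is set near the top of vm_fault_copy_entry().  In the
surrounding function it amounts to the following (a paraphrase for
context, not part of the hunks above):

	/* vm_map_protect() passes the same entry as source and destination. */
	upgrade = (dst_entry == src_entry);

With that, the new assertion permits the backing-object walk only when
the source mapping is read-only or is being upgraded in place from
read-only, which is exactly the behaviour the commit message describes.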