Fix another race between vm_map_protect() and vm_map_wire().

vm_map_wire() increments entry->wire_count and then drops the map
lock, both to fault in the entry's pages and to mark the next entry in
the requested region as IN_TRANSITION. Only after all entries are
faulted in is the MAP_ENTRY_USER_WIRED flag set.

This makes it possible for vm_map_protect() to run while another
entry's MAP_ENTRY_IN_TRANSITION flag is being handled, and the
vm_map_busy() lock does not prevent it. In particular, if the call to
vm_map_protect() adds VM_PROT_WRITE to a CoW entry, it fails to call
vm_fault_copy_entry(). There are at least two consequences of the
race: first, the top object in the shadow chain is not populated with
writeable pages; second, the entry eventually gets the contradictory
flags MAP_ENTRY_NEEDS_COPY | MAP_ENTRY_USER_WIRED with VM_PROT_WRITE
set.

Handle it by waiting in vm_map_protect() for all
MAP_ENTRY_IN_TRANSITION flags to go away; the function does not drop
the map lock afterwards, so the entries remain stable. Note that
vm_map_busy_wait() is left as is.

Reported and tested by:	pho (previous version)
Reviewed by:	Doug Moore <dougm@rice.edu>, markj
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D20091
kib 2019-05-01 13:15:06 +00:00
parent f725651568
commit f79fcaf038

@@ -2347,7 +2347,7 @@ int
 vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end,
     vm_prot_t new_prot, boolean_t set_max)
 {
-	vm_map_entry_t current, entry;
+	vm_map_entry_t current, entry, in_tran;
 	vm_object_t obj;
 	struct ucred *cred;
 	vm_prot_t old_prot;
@@ -2355,6 +2355,8 @@ vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end,
 	if (start == end)
 		return (KERN_SUCCESS);
 
+again:
+	in_tran = NULL;
 	vm_map_lock(map);
 
 	/*
@@ -2387,6 +2389,22 @@ vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end,
 			vm_map_unlock(map);
 			return (KERN_PROTECTION_FAILURE);
 		}
+		if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0)
+			in_tran = entry;
 	}
 
+	/*
+	 * Postpone the operation until all in transition map entries
+	 * are stabilized.  In-transition entry might already have its
+	 * pages wired and wired_count incremented, but
+	 * MAP_ENTRY_USER_WIRED flag not yet set, and visible to other
+	 * threads because the map lock is dropped.  In this case we
+	 * would miss our call to vm_fault_copy_entry().
+	 */
+	if (in_tran != NULL) {
+		in_tran->eflags |= MAP_ENTRY_NEEDS_WAKEUP;
+		vm_map_unlock_and_wait(map, 0);
+		goto again;
+	}
+
 	/*