Remove most lingering references to the page lock in comments.

Finish updating comments to reflect new locking protocols introduced
over the past year.  In particular, vm_page_lock is now effectively
unused.

Reviewed by:	kib
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D25868
markj 2020-08-04 14:59:43 +00:00
parent 6c5625f47a
commit 69475c7955
4 changed files with 25 additions and 38 deletions

sys/vm/vm_page.c

@@ -2675,7 +2675,7 @@ retry:
  * ascending order.) (2) It is not reserved, and it is
  * transitioning from free to allocated. (Conversely,
  * the transition from allocated to free for managed
- * pages is blocked by the page lock.) (3) It is
+ * pages is blocked by the page busy lock.) (3) It is
  * allocated but not contained by an object and not
  * wired, e.g., allocated by Xen's balloon driver.
  */
@@ -3622,8 +3622,6 @@ vm_page_pqbatch_drain(void)
  * Request removal of the given page from its current page
  * queue. Physical removal from the queue may be deferred
  * indefinitely.
- *
- * The page must be locked.
  */
 void
 vm_page_dequeue_deferred(vm_page_t m)
@@ -3804,8 +3802,8 @@ vm_page_free_prep(vm_page_t m)
  * Returns the given page to the free list, disassociating it
  * from any VM object.
  *
- * The object must be locked. The page must be locked if it is
- * managed.
+ * The object must be locked. The page must be exclusively busied if it
+ * belongs to an object.
  */
 static void
 vm_page_free_toq(vm_page_t m)
@@ -3834,9 +3832,6 @@ vm_page_free_toq(vm_page_t m)
  * Returns a list of pages to the free list, disassociating it
  * from any VM object. In other words, this is equivalent to
  * calling vm_page_free_toq() for each page of a list of VM objects.
- *
- * The objects must be locked. The pages must be locked if it is
- * managed.
  */
 void
 vm_page_free_pages_toq(struct spglist *free, bool update_wire_count)
@@ -3974,8 +3969,6 @@ vm_page_unwire_managed(vm_page_t m, uint8_t nqueue, bool noreuse)
  * of wirings transitions to zero and the page is eligible for page out, then
  * the page is added to the specified paging queue. If the released wiring
  * represented the last reference to the page, the page is freed.
- *
- * A managed page must be locked.
  */
 void
 vm_page_unwire(vm_page_t m, uint8_t nqueue)
@@ -4022,8 +4015,6 @@ vm_page_unwire_noq(vm_page_t m)
  * Ensure that the page ends up in the specified page queue. If the page is
  * active or being moved to the active queue, ensure that its act_count is
  * at least ACT_INIT but do not otherwise mess with it.
- *
- * A managed page must be locked.
  */
 static __always_inline void
 vm_page_mvqueue(vm_page_t m, const uint8_t nqueue, const uint16_t nflag)
@@ -4269,14 +4260,14 @@ vm_page_try_remove_write(vm_page_t m)
  * vm_page_advise
  *
  * Apply the specified advice to the given page.
- *
- * The object and page must be locked.
  */
 void
 vm_page_advise(vm_page_t m, int advice)
 {
 	VM_OBJECT_ASSERT_WLOCKED(m->object);
 	vm_page_assert_xbusied(m);
 
 	if (advice == MADV_FREE)
 		/*
 		 * Mark the page clean. This will allow the page to be freed
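
The vm_page.c hunks above all encode the same protocol change: a managed page
that belongs to an object is now stabilized by exclusively busying it, not by
the defunct page lock. A minimal sketch of a caller following that protocol,
assuming the vm_page_busy_acquire() and vm_page_free() KPIs; the
discard_page() helper itself is hypothetical:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>

/*
 * Sketch only: free one resident page under the busy-based protocol.
 * vm_page_busy_acquire() may sleep, dropping the object lock; it returns
 * false if the page's identity changed while it slept.
 */
static void
discard_page(vm_object_t object, vm_pindex_t pindex)
{
	vm_page_t m;

	VM_OBJECT_WLOCK(object);
	m = vm_page_lookup(object, pindex);
	if (m != NULL && vm_page_busy_acquire(m, 0)) {
		/* Exclusively busied; vm_page_free() consumes the busy state. */
		vm_page_free(m);
	}
	VM_OBJECT_WUNLOCK(object);
}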

sys/vm/vm_page.h

@@ -95,19 +95,17 @@
  * In general, operations on this structure's mutable fields are
  * synchronized using either one of or a combination of locks. If a
  * field is annotated with two of these locks then holding either is
  * sufficient for read access but both are required for write access.
- * The physical address of a page is used to select its page lock from
- * a pool. The queue lock for a page depends on the value of its queue
- * field and is described in detail below.
+ * The queue lock for a page depends on the value of its queue field and is
+ * described in detail below.
  *
  * The following annotations are possible:
  * (A) the field must be accessed using atomic(9) and may require
  *     additional synchronization.
  * (B) the page busy lock.
  * (C) the field is immutable.
- * (F) the per-domain lock for the free queues
+ * (F) the per-domain lock for the free queues.
  * (M) Machine dependent, defined by pmap layer.
  * (O) the object that the page belongs to.
- * (P) the page lock.
  * (Q) the page's queue lock.
  *
  * The busy lock is an embedded reader-writer lock that protects the
* The busy lock is an embedded reader-writer lock that protects the
@@ -270,7 +268,7 @@ struct vm_page {
  * cleared only when the corresponding object's write lock is held.
  *
  * VPRC_BLOCKED is used to atomically block wirings via pmap lookups while
- * attempting to tear down all mappings of a given page. The page lock and
+ * attempting to tear down all mappings of a given page. The page busy lock and
  * object write lock must both be held in order to set or clear this bit.
  */
 #define VPRC_BLOCKED 0x40000000u /* mappings are being removed */
@@ -411,26 +409,25 @@ extern struct mtx_padalign pa_lock[];
  *
  * PGA_ENQUEUED is set and cleared when a page is inserted into or removed
  * from a page queue, respectively. It determines whether the plinks.q field
- * of the page is valid. To set or clear this flag, the queue lock for the
- * page must be held: the page queue lock corresponding to the page's "queue"
- * field if its value is not PQ_NONE, and the page lock otherwise.
+ * of the page is valid. To set or clear this flag, the page's "queue" field must
+ * be a valid queue index, and the corresponding page queue lock must be held.
  *
  * PGA_DEQUEUE is set when the page is scheduled to be dequeued from a page
  * queue, and cleared when the dequeue request is processed. A page may
  * have PGA_DEQUEUE set and PGA_ENQUEUED cleared, for instance if a dequeue
  * is requested after the page is scheduled to be enqueued but before it is
- * actually inserted into the page queue. For allocated pages, the page lock
- * must be held to set this flag, but it may be set by vm_page_free_prep()
- * without the page lock held. The page queue lock must be held to clear the
- * PGA_DEQUEUE flag.
+ * actually inserted into the page queue.
  *
  * PGA_REQUEUE is set when the page is scheduled to be enqueued or requeued
- * in its page queue. The page lock must be held to set this flag, and the
- * queue lock for the page must be held to clear it.
+ * in its page queue.
  *
  * PGA_REQUEUE_HEAD is a special flag for enqueuing pages near the head of
- * the inactive queue, thus bypassing LRU. The page lock must be held to
- * set this flag, and the queue lock for the page must be held to clear it.
+ * the inactive queue, thus bypassing LRU.
+ *
+ * The PGA_DEQUEUE, PGA_REQUEUE and PGA_REQUEUE_HEAD flags must be set using an
+ * atomic RMW operation to ensure that the "queue" field is a valid queue index,
+ * and the corresponding page queue lock must be held when clearing any of the
+ * flags.
  *
  * PGA_SWAP_FREE is used to defer freeing swap space to the pageout daemon
  * when the context that dirties the page does not have the object write lock
@@ -451,8 +448,8 @@ extern struct mtx_padalign pa_lock[];
 #define PGA_QUEUE_STATE_MASK (PGA_ENQUEUED | PGA_QUEUE_OP_MASK)
 
 /*
- * Page flags. If changed at any other time than page allocation or
- * freeing, the modification must be protected by the vm_page lock.
+ * Page flags. Updates to these flags are not synchronized, and thus they must
+ * be set during page allocation or free to avoid races.
  *
  * The PG_PCPU_CACHE flag is set at allocation time if the page was
  * allocated from a per-CPU cache. It is cleared the next time that the
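
The PGA_* rules above are enforced by treating a page's flags, queue index,
and act_count as a single 32-bit "astate" word updated with compare-and-swap.
A sketch of the atomic-RMW pattern the new comment requires, assuming the
vm_page_astate_load()/vm_page_astate_fcmpset() accessors; request_dequeue()
itself is hypothetical:

#include <vm/vm.h>
#include <vm/vm_page.h>

/*
 * Sketch only: set PGA_DEQUEUE with an atomic RMW so that the flag is
 * only ever set while "queue" still names a valid page queue.
 */
static void
request_dequeue(vm_page_t m)
{
	vm_page_astate_t old, new;

	old = vm_page_astate_load(m);
	do {
		if (old.queue == PQ_NONE)
			return;		/* Already dequeued; nothing to do. */
		new = old;
		new.flags |= PGA_DEQUEUE;
	} while (!vm_page_astate_fcmpset(m, &old, new));
}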

sys/vm/vm_pageout.c

@@ -257,10 +257,9 @@ vm_pageout_end_scan(struct scan_state *ss)
  * physically dequeued if the caller so requests. Otherwise, the returned
  * batch may contain marker pages, and it is up to the caller to handle them.
  *
- * When processing the batch queue, vm_page_queue() must be used to
- * determine whether the page has been logically dequeued by another thread.
- * Once this check is performed, the page lock guarantees that the page will
- * not be disassociated from the queue.
+ * When processing the batch queue, vm_pageout_defer() must be used to
+ * determine whether the page has been logically dequeued since the batch was
+ * collected.
  */
 static __always_inline void
 vm_pageout_collect_batch(struct scan_state *ss, const bool dequeue)
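
The vm_pageout_defer() check replaces the old page-lock guarantee: instead of
pinning a page to its queue with a lock, the scan revalidates the page's queue
state after the batch has been collected. A sketch of the loop shape this
implies, written as it might appear inside vm_pageout.c, where struct
scan_state, vm_pageout_next(), and vm_pageout_defer() are file-local helpers;
the surrounding scan_batch() function is hypothetical:

/*
 * Sketch only: consume a collected batch, skipping pages that were
 * logically dequeued after vm_pageout_collect_batch() ran.
 */
static void
scan_batch(struct scan_state *ss)
{
	vm_page_t m;

	while ((m = vm_pageout_next(ss, false)) != NULL) {
		if (vm_pageout_defer(m, PQ_INACTIVE, true))
			continue;	/* Requeued or dequeued; leave it be. */
		/* ... examine and possibly reclaim the page ... */
	}
}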

sys/vm/vnode_pager.c

@@ -1307,7 +1307,7 @@ vnode_pager_generic_putpages(struct vnode *vp, vm_page_t *ma, int bytecount,
  * the last page is partially invalid. In this case the filesystem
  * may not properly clear the dirty bits for the entire page (which
  * could be VM_PAGE_BITS_ALL due to the page having been mmap()d).
- * With the page locked we are free to fix-up the dirty bits here.
+ * With the page busied we are free to fix up the dirty bits here.
  *
  * We do not under any circumstances truncate the valid bits, as
  * this will screw up bogus page replacement.
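
The busy requirement is what lets the pager rewrite m->dirty without racing
the machine-independent layer. A sketch of the fix-up the comment licenses,
assuming "m" is the partially valid final page of the file and is exclusively
busied; clear_dirty_past_eof() itself is hypothetical:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>

/*
 * Sketch only: clear the dirty bits that lie beyond end-of-file so the
 * stale tail of the page is not needlessly written back.
 */
static void
clear_dirty_past_eof(vm_object_t object, vm_page_t m)
{
	int pgoff;

	vm_page_assert_xbusied(m);
	pgoff = (int)(object->un_pager.vnp.vnp_size -
	    IDX_TO_OFF(m->pindex));
	vm_page_clear_dirty(m, pgoff, PAGE_SIZE - pgoff);
}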