Commit Graph

2666 Commits

Alan Cox
3c4a24406b Push down the page queues lock into vm_page_cache(), vm_page_try_to_cache(), and
vm_page_try_to_free().  Consequently, push down the page queues lock into
pmap_enter_quick(), pmap_page_wired_mappings(), pmap_remove_all(), and
pmap_remove_write().

Push down the page queues lock into Xen's pmap_page_is_mapped().  (I
overlooked the Xen pmap in r207702.)

Switch to a per-processor counter for the total number of pages cached.
2010-05-08 20:34:01 +00:00
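
As a rough illustration of the per-processor counter technique this commit switches to, the following self-contained sketch keeps one slot per CPU and sums them on read; the names (NCPU, cached_pages, the curcpu argument) are assumptions for illustration, not the kernel's counter code.

    #include <stdint.h>

    #define NCPU    64                      /* assumed maximum CPU count */

    /* One padded slot per CPU avoids a single shared, contended counter. */
    struct pcpu_counter {
            volatile uint64_t value;
            char pad[64 - sizeof(uint64_t)];  /* keep slots on separate cache lines */
    };

    static struct pcpu_counter cached_pages[NCPU];

    /* Each CPU updates only its own slot, so no global lock is needed. */
    static inline void
    cached_pages_inc(int curcpu)
    {
            cached_pages[curcpu].value++;
    }

    /* Readers sum all slots; the total is approximate while updates are in flight. */
    static uint64_t
    cached_pages_total(void)
    {
            uint64_t sum = 0;

            for (int i = 0; i < NCPU; i++)
                    sum += cached_pages[i].value;
            return (sum);
    }

In the real kernel the increment must also be preemption-safe; the sketch ignores that detail.
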
Jung-uk Kim
af394cfa36 Fix a typo in the previous commit. 2010-05-07 21:06:52 +00:00
Konstantin Belousov
af4b86b949 One more use for vm_pageout_init_marker().
Reviewed by:	alc
2010-05-07 18:57:26 +00:00
Alan Cox
97c3834772 Eliminate unnecessary page queues locking. 2010-05-07 16:22:06 +00:00
Alan Cox
03679e2334 Push down the page queues lock into vm_page_activate(). 2010-05-07 15:49:43 +00:00
Alan Cox
c1f98960a3 Update the synchronization requirements for the page usage count. 2010-05-07 06:58:53 +00:00
Alan Cox
7072188017 Eliminate acquisitions of the page queues lock that are no longer needed.
Switch to a per-processor counter for the number of pages freed during
process termination.
2010-05-07 05:23:15 +00:00
Alan Cox
9402dff3de Push down the page queues lock into vm_page_deactivate(). Eliminate an
incorrect comment.
2010-05-07 04:14:07 +00:00
Alan Cox
eb00b276ab Eliminate page queues locking around most calls to vm_page_free(). 2010-05-06 18:58:32 +00:00
Alan Cox
fd8c28bfdf Update a comment to say that access to a page's wire count is now
synchronized by the page lock.
2010-05-06 17:28:59 +00:00
Alan Cox
7024db1d40 Push down the page queues lock inside of vm_page_free_toq() and
pmap_page_is_mapped() in preparation for removing page queues locking
around calls to vm_page_free().  Setting aside the assertion that calls
pmap_page_is_mapped(), vm_page_free_toq() now acquires and holds the page
queues lock just long enough to actually add or remove the page from the
paging queues.

Update vm_page_unhold() to reflect the above change.
2010-05-06 16:39:43 +00:00
Konstantin Belousov
8c6162468b Add a helper function vm_pageout_page_lock(), similar to tegge's
vm_pageout_fallback_object_lock(), to obtain the page lock
while the page queue lock is held, and still maintain the
page's position in its queue.

Use the helper to lock the page in the pageout daemon and contig launder
iterators instead of skipping the page if its lock is contested.
Skipping locked pages easily causes the pagedaemon or launder to make no
progress with page cleaning.

Proposed and reviewed by:	alc
2010-05-06 04:57:33 +00:00
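
The marker technique described above can be sketched in simplified, user-space form as follows; struct page, pagequeue, and the function name are assumed stand-ins, not the actual vm_pageout code.

    #include <pthread.h>
    #include <stdbool.h>
    #include <sys/queue.h>

    struct page {
            TAILQ_ENTRY(page) q;            /* position in the paging queue */
            pthread_mutex_t lock;           /* per-page lock */
            bool marker;                    /* true for dummy marker entries */
    };

    TAILQ_HEAD(pagequeue, page);

    /*
     * Acquire the page lock for 'm' while the queue lock 'ql' is held.  If the
     * page lock is contested, leave a marker after 'm', drop the queue lock,
     * sleep on the page lock, and reacquire the queue lock.  Scanners are
     * assumed to skip marker entries, so the marker preserves the scan position.
     * Returns true if 'm' is still immediately ahead of the marker, i.e. it was
     * not requeued while the queue lock was dropped.
     */
    static bool
    page_lock_keep_position(struct pagequeue *pq, pthread_mutex_t *ql,
        struct page *m, struct page *marker)
    {
            bool still_in_place;

            if (pthread_mutex_trylock(&m->lock) == 0)
                    return (true);                  /* uncontested: done */

            TAILQ_INSERT_AFTER(pq, m, marker, q);   /* remember where we were */
            pthread_mutex_unlock(ql);
            pthread_mutex_lock(&m->lock);           /* may sleep */
            pthread_mutex_lock(ql);

            still_in_place = (TAILQ_PREV(marker, pagequeue, q) == m);
            TAILQ_REMOVE(pq, marker, q);
            return (still_in_place);
    }

When the function returns false, the caller would resume scanning from the marker's position rather than trusting 'm'.
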
Alan Cox
5ac59343be Acquire the page lock around all remaining calls to vm_page_free() on
managed pages that didn't already have that lock held.  (Freeing an
unmanaged page, such as those the various pmaps use, doesn't require the page
lock.)

This allows a change in vm_page_remove()'s locking requirements.  It now
expects the page lock to be held instead of the page queues lock.
Consequently, the page queues lock is no longer required at all by callers
to vm_page_rename().

Discussed with: kib
2010-05-05 18:16:06 +00:00
Alan Cox
e3ef0d2fcf Push down the acquisition of the page queues lock into vm_page_unwire().
Update the comment describing which lock should be held on entry to
vm_page_wire().

Reviewed by:	kib
2010-05-05 03:45:46 +00:00
Alan Cox
a7283d3213 Add page locking to the vm_page_cow* functions.
Push down the acquisition and release of the page queues lock into
vm_page_wire().

Reviewed by:	kib
2010-05-04 15:55:41 +00:00
Alan Cox
0c41a69e71 Add lock assertions. 2010-05-04 05:55:19 +00:00
Konstantin Belousov
5637a59143 Handle the busy status of the page in the way expected for pager_getpage().
Flush the requested page, unbusy the other pages, and do not clear m->busy.

Reviewed by:	alc
MFC after:	1 week
2010-05-03 19:19:58 +00:00
Alan Cox
2d5d7f7f61 Acquire the page lock around vm_page_wire() in vm_page_grab().
Assert that the page lock is held in vm_page_wire().
2010-05-03 17:55:32 +00:00
Alan Cox
451033a48a It makes more sense for the object-based backend allocator to use OBJT_PHYS
objects instead of OBJT_DEFAULT objects because we never reclaim or pageout
the allocated pages.  Moreover, they are mapped with pmap_qenter(), which
creates unmanaged mappings.

Reviewed by:	kib
2010-05-03 17:35:31 +00:00
Alan Cox
746c2ddee8 The pages allocated by kmem_alloc_attr() and kmem_malloc() are unmanaged.
Consequently, neither the page lock nor the page queues lock is needed to
unwire and free them.
2010-05-03 07:08:16 +00:00
Alan Cox
9f2512bab5 Assert that the page queues lock is held in vm_page_remove() and
vm_page_unwire() only if the page is managed, i.e., pageable.
2010-05-03 07:00:50 +00:00
Alan Cox
b8d36afcfe Add page lock assertions where we access the page's hold_count. 2010-05-02 23:33:10 +00:00
Alan Cox
6c56db5c9e Eliminate an assignment that was made redundant by r207410. 2010-05-02 21:04:59 +00:00
Alan Cox
447fe2a4c6 Defer the acquisition of the page and page queues locks in
vm_pageout_object_deactivate_pages().
2010-05-02 20:46:17 +00:00
Alan Cox
f623e55269 Simplify vm_fault(). The introduction of the new page lock renders pointless
a bit of cleverness in vm_fault() that avoided repeatedly releasing and
reacquiring the page queues lock.

Reviewed by:	kib, kmacy
2010-05-02 20:24:25 +00:00
Alan Cox
ac800a8490 Correct an error in r207410: Remove an unlock of a lock that is no longer
held.
2010-05-02 18:09:33 +00:00
Alan Cox
b88b6c9d80 It makes no sense for vm_page_sleep_if_busy()'s helper, vm_page_sleep(),
to unconditionally set PG_REFERENCED on a page before sleeping.  In many
cases, it's perfectly ok for the page to disappear, i.e., be reclaimed by
the page daemon, before the caller to vm_page_sleep() is reawakened.
Instead, we now explicitly set PG_REFERENCED in those cases where having
the page persist until the caller is awakened is clearly desirable.  Note,
however, that setting PG_REFERENCED on the page is still only a hint,
and not a guarantee that the page should persist.
2010-05-02 17:33:46 +00:00
Alan Cox
f6c8c187d4 This change addresses the race condition that was introduced by the previous
revision, r207450, to this file.  Specifically, between dropping the page
queues lock in vm_contig_launder() and reacquiring it in
vm_contig_launder_page(), the page may be removed from the active or
inactive queue.  It could be wired, freed, cached, etc., none of which
vm_contig_launder_page() is prepared for.

Reviewed by:	kib, kmacy
2010-05-02 16:44:06 +00:00
Alan Cox
9b55fc0429 Correct an error of omission in r206819. If VMFS_TLB_ALIGNED_SPACE is
specified to vm_map_find(), then retry the vm_map_findspace() if
vm_map_insert() fails because the aligned space is already partly used.

Reported by:	Neel Natu
2010-05-02 01:25:03 +00:00
Kip Macy
0ce3ba8cd5 Update the locking comment above vm_page:
- re-assign page queue lock "Q"
- assign page lock "P"
- update several uncommented fields
- observe that hold_count is now protected by the page lock "P"
2010-05-01 03:41:21 +00:00
Kip Macy
7bec141b12 Push up the dropping of the page queue lock to avoid holding it in vm_pageout_flush(). 2010-04-30 22:31:37 +00:00
Kip Macy
ad0c05daf9 Don't call vm_pageout_flush() with the page queue mutex held.
Reported by: Michael Butler
2010-04-30 21:21:21 +00:00
Kip Macy
6dd8b893d6 - Acquire the page lock in vm_contig_launder_page() before checking page fields.
- Release the page queue lock before calling vm_pageout_flush().
2010-04-30 21:20:14 +00:00
Kip Macy
e8f263195d - Don't check hold_count without the page lock held.
- Don't leak the page lock if m->object is NULL
  (assuming that that check will in fact even be valid when m->object is protected by the page lock).
2010-04-30 19:40:37 +00:00
Konstantin Belousov
e20e8c1558 Unlock the page lock instead of recursively locking it. 2010-04-30 16:20:14 +00:00
Kip Macy
6d74d042e3 Don't allow an unsynchronized free in vm_page_unhold(). 2010-04-30 02:46:49 +00:00
Kip Macy
2965a45315 On Alan's advice, rather than do a wholesale conversion on a single
architecture from page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under the page queue mutex to the page lock. This changes
pmap_extract_and_hold() on all pmaps.

Supported by: Bitgravity Inc.

Discussed with: alc, jeffr, and kib
2010-04-30 00:46:43 +00:00
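
A hashed array of page locks, as mentioned above, can be sketched along the following lines; the names and sizes (PA_LOCK_COUNT, pa_lockptr(), and so on) are illustrative assumptions rather than the patch itself.

    #include <pthread.h>
    #include <stdint.h>

    #define PAGE_SHIFT      12
    #define PA_LOCK_COUNT   32              /* assumed power of two */

    static pthread_mutex_t pa_locks[PA_LOCK_COUNT];

    static void
    pa_locks_init(void)
    {
            for (int i = 0; i < PA_LOCK_COUNT; i++)
                    pthread_mutex_init(&pa_locks[i], NULL);
    }

    /* Hash a physical address to one slot in the lock array. */
    static inline pthread_mutex_t *
    pa_lockptr(uint64_t pa)
    {
            return (&pa_locks[(pa >> PAGE_SHIFT) & (PA_LOCK_COUNT - 1)]);
    }

    /* Callers bracket access to per-page state (e.g. hold_count) with these. */
    static inline void
    page_lock(uint64_t pa)
    {
            pthread_mutex_lock(pa_lockptr(pa));
    }

    static inline void
    page_unlock(uint64_t pa)
    {
            pthread_mutex_unlock(pa_lockptr(pa));
    }

Two pages that hash to the same slot simply share a lock; correctness is unaffected, only the degree of contention.
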
Alan Cox
82bfb965d1 Simplify the inner loop of vm_pageout_object_deactivate_pages(). Rather
than checking each page for PG_UNMANAGED, check the vm object's type.
Only OBJT_PHYS can have unmanaged pages.  Eliminate a pointless counter.
The vm object is locked, that lock is never released by the inner loop,
and the set of pages contained by the vm object is not changed by the
inner loop.  Therefore, the counter serves no purpose.
2010-04-29 16:18:45 +00:00
Konstantin Belousov
6fb8c0c117 When doing kstack swapin, read as many pages in one run as possible.
Suggested and reviewed by:	alc (previous version)
Tested by:	pho
MFC after:	2 weeks
2010-04-29 09:59:16 +00:00
Konstantin Belousov
e86a87e97e In the swap pager, do not free the non-requested pages from the run if they
are wired.  Kstack pages are wired; this change prepares the swap pager for
handling long runs of kstack pages.

Noted and reviewed by:	alc
Tested by:	pho
MFC after:	2 weeks
2010-04-29 09:57:25 +00:00
Alan Cox
77d6d85393 Setting PG_REFERENCED on a page at the end of vm_fault() is redundant since
the page table entry's accessed bit is either preset by the immediately
preceding call to pmap_enter() or set by hardware (or software) upon return
from vm_fault() when the faulting access is restarted.
2010-04-28 06:34:47 +00:00
Alan Cox
6a2a3d7338 Change vm_object_madvise() so that it checks whether the page is invalid
or unmanaged before acquiring the page queues lock.  Neither of these
tests requires that lock.  Moreover, a better way of testing if the page
is unmanaged is to test the type of vm object.  This avoids a pointless
vm_page_lookup().

MFC after:	3 weeks
2010-04-28 04:57:32 +00:00
Alan Cox
7b85f59183 Resurrect pmap_is_referenced() and use it in mincore(). Essentially,
pmap_ts_referenced() is not always appropriate for checking whether or
not pages have been referenced because it clears any reference bits
that it encounters.  For example, in mincore(), clearing the reference
bits has two negative consequences.  First, it throws off the activity
count calculations performed by the page daemon.  Specifically, a page
on which mincore() has called pmap_ts_referenced() looks less active
to the page daemon than it should.  Consequently, the page could be
deactivated prematurely by the page daemon.  Arguably, this problem
could be fixed by having mincore() duplicate the activity count
calculation on the page.  However, there is a second problem for which
that is not a solution.  In order to clear a reference on a 4KB page,
it may be necessary to demote a 2/4MB page mapping.  Thus, a mincore()
by one process can have the side effect of demoting a superpage
mapping within another process!
2010-04-24 17:32:52 +00:00
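
The distinction between the two pmap queries can be sketched roughly as follows; the PTE layout and helper names are hypothetical, not the real pmap code. mincore() only needs the non-destructive form, which is why it can avoid clearing accessed bits and the superpage demotion that clearing may require.

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_A   0x20                    /* hypothetical "accessed" bit */

    /* Non-destructive: has any mapping of the page been referenced? */
    static bool
    is_referenced(const uint64_t *ptes, int n)
    {
            for (int i = 0; i < n; i++)
                    if (ptes[i] & PTE_A)
                            return (true);
            return (false);
    }

    /*
     * Destructive: count referenced mappings and clear their accessed bits.
     * On real hardware, clearing the bit for a single 4KB page may require
     * demoting a 2/4MB superpage mapping first.
     */
    static int
    ts_referenced(uint64_t *ptes, int n)
    {
            int count = 0;

            for (int i = 0; i < n; i++)
                    if (ptes[i] & PTE_A) {
                            ptes[i] &= ~(uint64_t)PTE_A;
                            count++;
                    }
            return (count);
    }
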
Alan Cox
5d4a7b7945 Eliminate an unnecessary call to pmap_remove_all(). If a page belongs to
an object whose reference count is zero, then that page cannot possibly
be mapped.
2010-04-20 04:16:39 +00:00
Alan Cox
7b7d5b6c58 vm_thread_swapout() can safely dirty the page before rather than after
acquiring the page queues lock.
2010-04-19 00:18:14 +00:00
Juli Mallett
ca596a25f0 o) Add a VM find-space option, VMFS_TLB_ALIGNED_SPACE, which searches the
   address space for an address as aligned by the new pmap_align_tlb()
   function, which is for constraints imposed by the TLB. [1]
o) Add a kmem_alloc_nofault_space() function, which acts like
   kmem_alloc_nofault() but allows the caller to specify which find-space
   option to use. [1]
o) Use kmem_alloc_nofault_space() with VMFS_TLB_ALIGNED_SPACE to allocate the
   kernel stack address on MIPS. [1]
o) Make pmap_align_tlb() on MIPS align addresses so that they do not start on
   an odd boundary within the TLB, so that they are suitable for insertion as
   wired entries and do not have to share a TLB entry with another mapping,
   assuming they are appropriately sized.
o) Eliminate md_realstack now that the kstack will be appropriately aligned on
   MIPS.
o) Increase the number of guard pages to 2 so that we retain the proper
   alignment of the kstack address.

Reviewed by:	[1] alc
X-MFC-after:	Making sure alc has not come up with a better interface.
2010-04-18 22:32:07 +00:00
Alan Cox
b28889a2fc Remove a nonsensical test from vm_pageout_clean(). A page can't be in the
inactive queue and have a non-zero wire count.

Reviewed by:	kib
MFC after:	3 weeks
2010-04-18 21:29:28 +00:00
Alan Cox
4b9dd5d537 There is no justification for vm_object_split() setting PG_REFERENCED on a
page that it is going to sleep on.  Eliminate it.

MFC after:	3 weeks
2010-04-18 17:50:09 +00:00
Alan Cox
b11b56b55b In vm_object_madvise() setting PG_REFERENCED on a page before sleeping on
that page only makes sense if the advice is MADV_WILLNEED.  In that case,
the intention is to activate the page, so discouraging the page daemon
from reclaiming the page makes sense.  In contrast, in the other cases,
MADV_DONTNEED and MADV_FREE, it makes no sense whatsoever to discourage
the page daemon from reclaiming the page by setting PG_REFERENCED.

Wrap a nearby line.

Discussed with:	kib
MFC after:	3 weeks
2010-04-17 21:14:37 +00:00
Alan Cox
aefea7f519 In vm_object_backing_scan(), setting PG_REFERENCED on a page before
sleeping on that page is nonsensical.  Doing so reduces the likelihood
that the page daemon will reclaim the page before the thread waiting in
vm_object_backing_scan() is reawakened.  However, it does not guarantee
that the page is not reclaimed, so vm_object_backing_scan() restarts
after reawakening.  More importantly, this muddles the meaning of
PG_REFERENCED.  There is no reason to believe that the caller of
vm_object_backing_scan() is going to use (i.e., access) the contents of
the page.  There is especially no reason to believe that an access is
more likely because vm_object_backing_scan() had to sleep on the page.

Discussed with:	kib
MFC after:	3 weeks
2010-04-17 18:35:07 +00:00