/*-
 * Copyright (c) 1991 Regents of the University of California.
 * All rights reserved.
 * Copyright (c) 1998 Matthew Dillon.  All Rights Reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * The Mach Operating System project at Carnegie-Mellon University.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	from: @(#)vm_page.c	7.4 (Berkeley) 5/7/91
 */

/*-
 * Copyright (c) 1987, 1990 Carnegie-Mellon University.
 * All rights reserved.
 *
 * Authors: Avadis Tevanian, Jr., Michael Wayne Young
 *
 * Permission to use, copy, modify and distribute this software and
 * its documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
 * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie the
 * rights to redistribute these changes.
 */

/*
 *	GENERAL RULES ON VM_PAGE MANIPULATION
 *
 *	- A page queue lock is required when adding or removing a page from a
 *	  page queue regardless of other locks or the busy state of a page.
 *
 *		* In general, no thread besides the page daemon can acquire or
 *		  hold more than one page queue lock at a time.
 *
 *		* The page daemon can acquire and hold any pair of page queue
 *		  locks in any order.
 *
 *	- The object lock is required when inserting or removing
 *	  pages from an object (vm_page_insert() or vm_page_remove()).
 *
 */

/*
 *	Resident memory management module.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_vm.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/kernel.h>
#include <sys/limits.h>
#include <sys/malloc.h>
#include <sys/mman.h>
#include <sys/msgbuf.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/rwlock.h>
#include <sys/sysctl.h>
#include <sys/vmmeter.h>
#include <sys/vnode.h>

#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_param.h>
#include <vm/vm_kern.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/vm_pageout.h>
#include <vm/vm_pager.h>
#include <vm/vm_phys.h>
#include <vm/vm_radix.h>
#include <vm/vm_reserv.h>
#include <vm/vm_extern.h>
#include <vm/uma.h>
#include <vm/uma_int.h>

#include <machine/md_var.h>

/*
 *	Associated with page of user-allocatable memory is a
 *	page structure.
 */

struct vm_domain vm_dom[MAXMEMDOM];
struct mtx_padalign vm_page_queue_free_mtx;

struct mtx_padalign pa_lock[PA_LOCK_COUNT];

vm_page_t vm_page_array;
long vm_page_array_size;
long first_page;
int vm_page_zero_count;

static int boot_pages = UMA_BOOT_PAGES;
SYSCTL_INT(_vm, OID_AUTO, boot_pages, CTLFLAG_RDTUN, &boot_pages, 0,
	"number of pages allocated for bootstrapping the VM system");
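
/*
 * Illustrative note (assumption, not part of this file): because the sysctl
 * above is CTLFLAG_RDTUN, boot_pages can only be changed as a boot-time
 * tunable, e.g. in /boot/loader.conf:
 *
 *	vm.boot_pages=128
 */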

static int pa_tryrelock_restart;
SYSCTL_INT(_vm, OID_AUTO, tryrelock_restart, CTLFLAG_RD,
    &pa_tryrelock_restart, 0, "Number of tryrelock restarts");

static uma_zone_t fakepg_zone;

static struct vnode *vm_page_alloc_init(vm_page_t m);
static void vm_page_cache_turn_free(vm_page_t m);
static void vm_page_clear_dirty_mask(vm_page_t m, vm_page_bits_t pagebits);
static void vm_page_enqueue(uint8_t queue, vm_page_t m);
static void vm_page_init_fakepg(void *dummy);
static int vm_page_insert_after(vm_page_t m, vm_object_t object,
    vm_pindex_t pindex, vm_page_t mpred);
static void vm_page_insert_radixdone(vm_page_t m, vm_object_t object,
    vm_page_t mpred);

SYSINIT(vm_page, SI_SUB_VM, SI_ORDER_SECOND, vm_page_init_fakepg, NULL);

static void
vm_page_init_fakepg(void *dummy)
{

	fakepg_zone = uma_zcreate("fakepg", sizeof(struct vm_page), NULL, NULL,
	    NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE | UMA_ZONE_VM);
}

/* Make sure that u_long is at least 64 bits when PAGE_SIZE is 32K. */
#if PAGE_SIZE == 32768
#ifdef CTASSERT
CTASSERT(sizeof(u_long) >= 8);
#endif
#endif

/*
 * Try to acquire a physical address lock while a pmap is locked.  If we
 * fail to trylock we unlock and lock the pmap directly and cache the
 * locked pa in *locked.  The caller should then restart their loop in case
 * the virtual to physical mapping has changed.
 */
int
vm_page_pa_tryrelock(pmap_t pmap, vm_paddr_t pa, vm_paddr_t *locked)
{
	vm_paddr_t lockpa;

	lockpa = *locked;
	*locked = pa;
	if (lockpa) {
		PA_LOCK_ASSERT(lockpa, MA_OWNED);
		if (PA_LOCKPTR(pa) == PA_LOCKPTR(lockpa))
			return (0);
		PA_UNLOCK(lockpa);
	}
	if (PA_TRYLOCK(pa))
		return (0);
	PMAP_UNLOCK(pmap);
	atomic_add_int(&pa_tryrelock_restart, 1);
	PA_LOCK(pa);
	PMAP_LOCK(pmap);
	return (EAGAIN);
}
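
/*
 * Illustrative sketch (not part of this file): a typical pmap-layer caller
 * caches the locked physical address in a local variable and restarts its
 * lookup whenever vm_page_pa_tryrelock() reports that the locks were
 * dropped.  The names "pmap" and "m" and the lookup step are hypothetical:
 *
 *	vm_paddr_t pa = 0;
 * retry:
 *	m = <look up the page again>;
 *	if (vm_page_pa_tryrelock(pmap, VM_PAGE_TO_PHYS(m), &pa))
 *		goto retry;
 *	<operate on m with its page lock held>
 *	PA_UNLOCK_COND(pa);
 */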

/*
 *	vm_set_page_size:
 *
 *	Sets the page size, perhaps based upon the memory
 *	size.  Must be called before any use of page-size
 *	dependent functions.
 */
void
vm_set_page_size(void)
{

	if (vm_cnt.v_page_size == 0)
		vm_cnt.v_page_size = PAGE_SIZE;
	if (((vm_cnt.v_page_size - 1) & vm_cnt.v_page_size) != 0)
		panic("vm_set_page_size: page size not a power of two");
}

/*
 *	vm_page_blacklist_lookup:
 *
 *	See if a physical address in this page has been listed
 *	in the blacklist tunable.  Entries in the tunable are
 *	separated by spaces or commas.  If an invalid integer is
 *	encountered then the rest of the string is skipped.
 */
static int
vm_page_blacklist_lookup(char *list, vm_paddr_t pa)
{
	vm_paddr_t bad;
	char *cp, *pos;

	for (pos = list; *pos != '\0'; pos = cp) {
		bad = strtoq(pos, &cp, 0);
		if (*cp != '\0') {
			if (*cp == ' ' || *cp == ',') {
				cp++;
				if (cp == pos)
					continue;
			} else
				break;
		}
		if (pa == trunc_page(bad))
			return (1);
	}
	return (0);
}
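
/*
 * Illustrative note (assumption, not part of this file): the list parsed
 * above typically comes from the vm.blacklist loader tunable, e.g. in
 * /boot/loader.conf:
 *
 *	vm.blacklist="0x7d4b1000,0x7d4b2000 0x7d4b3000"
 *
 * Each entry names the physical address of a page that should never be
 * handed out by the allocator.
 */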

static void
vm_page_domain_init(struct vm_domain *vmd)
{
	struct vm_pagequeue *pq;
	int i;

	*__DECONST(char **, &vmd->vmd_pagequeues[PQ_INACTIVE].pq_name) =
	    "vm inactive pagequeue";
	*__DECONST(int **, &vmd->vmd_pagequeues[PQ_INACTIVE].pq_vcnt) =
	    &vm_cnt.v_inactive_count;
	*__DECONST(char **, &vmd->vmd_pagequeues[PQ_ACTIVE].pq_name) =
	    "vm active pagequeue";
	*__DECONST(int **, &vmd->vmd_pagequeues[PQ_ACTIVE].pq_vcnt) =
	    &vm_cnt.v_active_count;
	vmd->vmd_page_count = 0;
	vmd->vmd_free_count = 0;
	vmd->vmd_segs = 0;
	vmd->vmd_oom = FALSE;
	vmd->vmd_pass = 0;
	for (i = 0; i < PQ_COUNT; i++) {
		pq = &vmd->vmd_pagequeues[i];
		TAILQ_INIT(&pq->pq_pl);
		mtx_init(&pq->pq_mutex, pq->pq_name, "vm pagequeue",
		    MTX_DEF | MTX_DUPOK);
	}
}

/*
 *	vm_page_startup:
 *
 *	Initializes the resident memory module.
 *
 *	Allocates memory for the page cells, and
 *	for the object/offset-to-page hash table headers.
 *	Each page cell is initialized and placed on the free list.
 */
vm_offset_t
vm_page_startup(vm_offset_t vaddr)
{
	vm_offset_t mapped;
	vm_paddr_t page_range;
	vm_paddr_t new_end;
	int i;
	vm_paddr_t pa;
	vm_paddr_t last_pa;
	char *list;
	vm_paddr_t end;
	vm_paddr_t biggestsize;
	vm_paddr_t low_water, high_water;
	int biggestone;

	biggestsize = 0;
	biggestone = 0;
	vaddr = round_page(vaddr);

	for (i = 0; phys_avail[i + 1]; i += 2) {
		phys_avail[i] = round_page(phys_avail[i]);
		phys_avail[i + 1] = trunc_page(phys_avail[i + 1]);
	}

#ifdef XEN
	/*
	 * There is no obvious reason why i386 PV Xen needs vm_page structs
	 * created for these pseudo-physical addresses.  XXX
	 */
	vm_phys_add_seg(0, phys_avail[0]);
#endif

	low_water = phys_avail[0];
	high_water = phys_avail[1];

	for (i = 0; i < vm_phys_nsegs; i++) {
		if (vm_phys_segs[i].start < low_water)
			low_water = vm_phys_segs[i].start;
		if (vm_phys_segs[i].end > high_water)
			high_water = vm_phys_segs[i].end;
	}
	for (i = 0; phys_avail[i + 1]; i += 2) {
		vm_paddr_t size = phys_avail[i + 1] - phys_avail[i];

		if (size > biggestsize) {
			biggestone = i;
			biggestsize = size;
		}
		if (phys_avail[i] < low_water)
			low_water = phys_avail[i];
		if (phys_avail[i + 1] > high_water)
			high_water = phys_avail[i + 1];
	}

	end = phys_avail[biggestone+1];

	/*
	 * Initialize the page and queue locks.
	 */
	mtx_init(&vm_page_queue_free_mtx, "vm page free queue", NULL, MTX_DEF);
	for (i = 0; i < PA_LOCK_COUNT; i++)
		mtx_init(&pa_lock[i], "vm page", NULL, MTX_DEF);
	for (i = 0; i < vm_ndomains; i++)
		vm_page_domain_init(&vm_dom[i]);

	/*
	 * Allocate memory for use when boot strapping the kernel memory
	 * allocator.
	 */
	new_end = end - (boot_pages * UMA_SLAB_SIZE);
	new_end = trunc_page(new_end);
	mapped = pmap_map(&vaddr, new_end, end,
	    VM_PROT_READ | VM_PROT_WRITE);
	bzero((void *)mapped, end - new_end);
	uma_startup((void *)mapped, boot_pages);
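
	/*
	 * Worked example (illustrative figures only): with boot_pages = 64
	 * and a 4 KB UMA_SLAB_SIZE, 64 * 4096 = 256 KB is carved off the top
	 * of the largest physical segment, mapped read/write, zeroed, and
	 * handed to uma_startup() so the slab allocator can run before
	 * vm_page_array exists.
	 */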

#if defined(__amd64__) || defined(__i386__) || defined(__arm__) || \
    defined(__mips__)
	/*
	 * Allocate a bitmap to indicate that a random physical page
	 * needs to be included in a minidump.
	 *
	 * The amd64 port needs this to indicate which direct map pages
	 * need to be dumped, via calls to dump_add_page()/dump_drop_page().
	 *
	 * However, i386 still needs this workspace internally within the
	 * minidump code.  In theory, they are not needed on i386, but are
	 * included should the sf_buf code decide to use them.
	 */
	last_pa = 0;
	for (i = 0; dump_avail[i + 1] != 0; i += 2)
		if (dump_avail[i + 1] > last_pa)
			last_pa = dump_avail[i + 1];
	page_range = last_pa / PAGE_SIZE;
	vm_page_dump_size = round_page(roundup2(page_range, NBBY) / NBBY);
	new_end -= vm_page_dump_size;
	vm_page_dump = (void *)(uintptr_t)pmap_map(&vaddr, new_end,
	    new_end + vm_page_dump_size, VM_PROT_READ | VM_PROT_WRITE);
	bzero((void *)vm_page_dump, vm_page_dump_size);
#endif
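
	/*
	 * Illustrative note (assumption, not part of this file):
	 * conceptually, dump_add_page(pa) sets bit (pa >> PAGE_SHIFT) in the
	 * vm_page_dump bitmap sized above, and dump_drop_page(pa) clears it,
	 * so the minidump code can walk the bitmap and emit only the marked
	 * physical pages.
	 */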
#ifdef __amd64__
	/*
	 * Request that the physical pages underlying the message buffer be
	 * included in a crash dump.  Since the message buffer is accessed
	 * through the direct map, they are not automatically included.
	 */
	pa = DMAP_TO_PHYS((vm_offset_t)msgbufp->msg_ptr);
	last_pa = pa + round_page(msgbufsize);
	while (pa < last_pa) {
		dump_add_page(pa);
		pa += PAGE_SIZE;
	}
#endif
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Compute the number of pages of memory that will be available for
|
|
|
|
* use (taking into account the overhead of a page structure per
|
|
|
|
* page).
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2006-12-08 08:44:47 +00:00
|
|
|
first_page = low_water / PAGE_SIZE;
|
2007-05-05 19:50:28 +00:00
|
|
|
#ifdef VM_PHYSSEG_SPARSE
|
|
|
|
page_range = 0;
|
2014-11-15 23:40:44 +00:00
|
|
|
for (i = 0; i < vm_phys_nsegs; i++) {
|
|
|
|
page_range += atop(vm_phys_segs[i].end -
|
|
|
|
vm_phys_segs[i].start);
|
|
|
|
}
|
2007-05-05 19:50:28 +00:00
|
|
|
for (i = 0; phys_avail[i + 1] != 0; i += 2)
|
|
|
|
page_range += atop(phys_avail[i + 1] - phys_avail[i]);
|
|
|
|
#elif defined(VM_PHYSSEG_DENSE)
|
2006-12-08 08:44:47 +00:00
|
|
|
page_range = high_water / PAGE_SIZE - first_page;
|
2007-05-05 19:50:28 +00:00
|
|
|
#else
|
|
|
|
#error "Either VM_PHYSSEG_DENSE or VM_PHYSSEG_SPARSE must be defined."
|
|
|
|
#endif
|
2001-03-01 19:21:24 +00:00
|
|
|
end = new_end;
|
2001-03-07 05:29:21 +00:00
|
|
|
|
2003-12-22 02:04:08 +00:00
|
|
|
/*
|
|
|
|
* Reserve an unmapped guard page to trap access to vm_page_array[-1].
|
|
|
|
*/
|
|
|
|
vaddr += PAGE_SIZE;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Initialize the mem entry structures now, and put them in the free
|
|
|
|
* queue.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2001-03-01 19:21:24 +00:00
|
|
|
new_end = trunc_page(end - page_range * sizeof(struct vm_page));
|
2001-03-07 05:29:21 +00:00
|
|
|
mapped = pmap_map(&vaddr, new_end, end,
|
2001-03-01 19:21:24 +00:00
|
|
|
VM_PROT_READ | VM_PROT_WRITE);
|
2001-03-07 05:29:21 +00:00
|
|
|
vm_page_array = (vm_page_t) mapped;
|
2007-12-29 19:53:04 +00:00
|
|
|
#if VM_NRESERVLEVEL > 0
|
|
|
|
/*
|
|
|
|
* Allocate memory for the reservation management system's data
|
|
|
|
* structures.
|
|
|
|
*/
|
|
|
|
new_end = vm_reserv_startup(&vaddr, new_end, high_water);
|
|
|
|
#endif
|
2010-12-09 07:39:06 +00:00
|
|
|
#if defined(__amd64__) || defined(__mips__)
|
2006-05-31 22:55:23 +00:00
|
|
|
/*
|
2010-12-03 04:39:48 +00:00
|
|
|
* On amd64 and mips, pmap_map() can return addresses in the direct
|
|
|
|
* map rather than kvm (unlike i386), so these pages must be tracked
|
|
|
|
* explicitly for a crash dump to include this data.  This includes
|
|
|
|
* the vm_page_array and the early UMA bootstrap pages.
|
2006-05-31 22:55:23 +00:00
|
|
|
*/
|
|
|
|
for (pa = new_end; pa < phys_avail[biggestone + 1]; pa += PAGE_SIZE)
|
|
|
|
dump_add_page(pa);
|
|
|
|
#endif
|
2003-03-17 03:16:00 +00:00
|
|
|
phys_avail[biggestone + 1] = new_end;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2014-11-15 23:40:44 +00:00
|
|
|
/*
|
|
|
|
* Add physical memory segments corresponding to the available
|
|
|
|
* physical pages.
|
|
|
|
*/
|
|
|
|
for (i = 0; phys_avail[i + 1] != 0; i += 2)
|
|
|
|
vm_phys_add_seg(phys_avail[i], phys_avail[i + 1]);
|
|
|
|
|
2006-11-08 19:11:54 +00:00
|
|
|
/*
|
|
|
|
* Clear all of the page structures
|
|
|
|
*/
|
|
|
|
bzero((caddr_t) vm_page_array, page_range * sizeof(struct vm_page));
|
Enable the new physical memory allocator.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, with a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
Approved by: re
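A minimal sketch of the buddy arithmetic underlying the allocator described above, assuming a block of order o spans 1 << o pages and starts on a block-size-aligned address; the helper name is hypothetical and only illustrates why coalescing naturally yields the aligned, contiguous runs that superpages and contigmalloc(9) need.
#include <sys/types.h>
#include <sys/param.h>		/* PAGE_SIZE */

/*
 * Sketch: address of the buddy of the 2^order-page block starting at
 * "pa" (assumed block-size aligned).  When a block and its buddy are
 * both free, they coalesce into one free block of order + 1.
 */
static __inline vm_paddr_t
sketch_buddy_addr(vm_paddr_t pa, int order)
{

	return (pa ^ ((vm_paddr_t)PAGE_SIZE << order));
}
The loop below that sets each page's order to VM_NFREEORDER marks the pages as not yet belonging to any free block; vm_phys_add_page() later places them into the buddy queues.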
2007-06-16 04:57:06 +00:00
|
|
|
for (i = 0; i < page_range; i++)
|
|
|
|
vm_page_array[i].order = VM_NFREEORDER;
|
2006-11-08 19:11:54 +00:00
|
|
|
vm_page_array_size = page_range;
|
|
|
|
|
1999-03-19 05:21:03 +00:00
|
|
|
/*
|
2007-06-16 04:57:06 +00:00
|
|
|
* Initialize the physical memory allocator.
|
|
|
|
*/
|
|
|
|
vm_phys_init();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Add every available physical page that is not blacklisted to
|
|
|
|
* the free lists.
|
1999-03-19 05:21:03 +00:00
|
|
|
*/
|
2014-03-22 10:26:09 +00:00
|
|
|
vm_cnt.v_page_count = 0;
|
|
|
|
vm_cnt.v_free_count = 0;
|
2014-10-16 18:04:43 +00:00
|
|
|
list = kern_getenv("vm.blacklist");
|
2006-11-08 18:43:47 +00:00
|
|
|
for (i = 0; phys_avail[i + 1] != 0; i += 2) {
|
2001-03-01 19:21:24 +00:00
|
|
|
pa = phys_avail[i];
|
2003-03-17 03:16:00 +00:00
|
|
|
last_pa = phys_avail[i + 1];
|
2006-11-08 18:43:47 +00:00
|
|
|
while (pa < last_pa) {
|
2006-06-23 16:44:24 +00:00
|
|
|
if (list != NULL &&
|
|
|
|
vm_page_blacklist_lookup(list, pa))
|
|
|
|
printf("Skipping page with pa 0x%jx\n",
|
|
|
|
(uintmax_t)pa);
|
|
|
|
else
|
2007-06-16 04:57:06 +00:00
|
|
|
vm_phys_add_page(pa);
|
1994-05-25 09:21:21 +00:00
|
|
|
pa += PAGE_SIZE;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
}
|
2006-06-23 16:44:24 +00:00
|
|
|
freeenv(list);
|
2007-12-29 19:53:04 +00:00
|
|
|
#if VM_NRESERVLEVEL > 0
|
|
|
|
/*
|
|
|
|
* Initialize the reservation management system.
|
|
|
|
*/
|
|
|
|
vm_reserv_init();
|
|
|
|
#endif
|
2001-03-07 05:29:21 +00:00
|
|
|
return (vaddr);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
2011-09-06 10:30:11 +00:00
|
|
|
void
|
|
|
|
vm_page_reference(vm_page_t m)
|
|
|
|
{
|
|
|
|
|
|
|
|
vm_page_aflag_set(m, PGA_REFERENCED);
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
/*
|
|
|
|
* vm_page_busy_downgrade:
|
|
|
|
*
|
|
|
|
* Downgrade an exclusive busy page into a single shared busy page.
|
|
|
|
*/
|
2001-07-04 20:15:18 +00:00
|
|
|
void
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_busy_downgrade(vm_page_t m)
|
2001-07-04 20:15:18 +00:00
|
|
|
{
|
2013-08-09 11:11:11 +00:00
|
|
|
u_int x;
|
2004-10-24 23:53:47 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_assert_xbusied(m);
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
x = m->busy_lock;
|
|
|
|
x &= VPB_BIT_WAITERS;
|
|
|
|
if (atomic_cmpset_rel_int(&m->busy_lock,
|
|
|
|
VPB_SINGLE_EXCLUSIVER | x, VPB_SHARERS_WORD(1) | x))
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_sbusied:
|
|
|
|
*
|
|
|
|
* Return a positive value if the page is shared busied, 0 otherwise.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
vm_page_sbusied(vm_page_t m)
|
|
|
|
{
|
|
|
|
u_int x;
|
|
|
|
|
|
|
|
x = m->busy_lock;
|
|
|
|
return ((x & VPB_BIT_SHARED) != 0 && x != VPB_UNBUSIED);
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2013-08-09 11:11:11 +00:00
|
|
|
* vm_page_sunbusy:
|
2001-07-04 20:15:18 +00:00
|
|
|
*
|
2013-08-09 11:11:11 +00:00
|
|
|
* Shared unbusy a page.
|
2001-07-04 20:15:18 +00:00
|
|
|
*/
|
|
|
|
void
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_sunbusy(vm_page_t m)
|
2001-07-04 20:15:18 +00:00
|
|
|
{
|
2013-08-09 11:11:11 +00:00
|
|
|
u_int x;
|
2004-10-25 19:52:44 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_assert_sbusied(m);
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
x = m->busy_lock;
|
|
|
|
if (VPB_SHARERS(x) > 1) {
|
|
|
|
if (atomic_cmpset_int(&m->busy_lock, x,
|
|
|
|
x - VPB_ONE_SHARER))
|
|
|
|
break;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if ((x & VPB_BIT_WAITERS) == 0) {
|
|
|
|
KASSERT(x == VPB_SHARERS_WORD(1),
|
|
|
|
("vm_page_sunbusy: invalid lock state"));
|
|
|
|
if (atomic_cmpset_int(&m->busy_lock,
|
|
|
|
VPB_SHARERS_WORD(1), VPB_UNBUSIED))
|
|
|
|
break;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
KASSERT(x == (VPB_SHARERS_WORD(1) | VPB_BIT_WAITERS),
|
|
|
|
("vm_page_sunbusy: invalid lock state for waiters"));
|
|
|
|
|
|
|
|
vm_page_lock(m);
|
|
|
|
if (!atomic_cmpset_int(&m->busy_lock, x, VPB_UNBUSIED)) {
|
|
|
|
vm_page_unlock(m);
|
|
|
|
continue;
|
|
|
|
}
|
2001-07-04 20:15:18 +00:00
|
|
|
wakeup(m);
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_unlock(m);
|
|
|
|
break;
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2013-08-09 11:11:11 +00:00
|
|
|
* vm_page_busy_sleep:
|
2001-07-04 20:15:18 +00:00
|
|
|
*
|
2013-08-09 11:11:11 +00:00
|
|
|
* Sleep and release the page lock, using the page pointer as wchan.
|
|
|
|
* This is used to implement the hard path of the busying mechanism.
|
2001-07-04 20:15:18 +00:00
|
|
|
*
|
2013-08-09 11:11:11 +00:00
|
|
|
* The given page must be locked.
|
2001-07-04 20:15:18 +00:00
|
|
|
*/
|
|
|
|
void
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_busy_sleep(vm_page_t m, const char *wmesg)
|
2001-07-04 20:15:18 +00:00
|
|
|
{
|
2013-08-09 11:11:11 +00:00
|
|
|
u_int x;
|
2004-10-24 23:53:47 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_lock_assert(m, MA_OWNED);
|
|
|
|
|
|
|
|
x = m->busy_lock;
|
|
|
|
if (x == VPB_UNBUSIED) {
|
|
|
|
vm_page_unlock(m);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
if ((x & VPB_BIT_WAITERS) == 0 &&
|
|
|
|
!atomic_cmpset_int(&m->busy_lock, x, x | VPB_BIT_WAITERS)) {
|
|
|
|
vm_page_unlock(m);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
msleep(m, vm_page_lockptr(m), PVM | PDROP, wmesg, 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_trysbusy:
|
|
|
|
*
|
|
|
|
* Try to shared busy a page.
|
|
|
|
* If the operation succeeds, 1 is returned, otherwise 0.
|
|
|
|
* The operation never sleeps.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
vm_page_trysbusy(vm_page_t m)
|
|
|
|
{
|
|
|
|
u_int x;
|
|
|
|
|
2013-09-05 12:54:40 +00:00
|
|
|
for (;;) {
|
|
|
|
x = m->busy_lock;
|
|
|
|
if ((x & VPB_BIT_SHARED) == 0)
|
|
|
|
return (0);
|
|
|
|
if (atomic_cmpset_acq_int(&m->busy_lock, x, x + VPB_ONE_SHARER))
|
|
|
|
return (1);
|
|
|
|
}
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
/*
|
|
|
|
* vm_page_xunbusy_hard:
|
|
|
|
*
|
|
|
|
* Called after the first attempt to exclusively unbusy a page has failed.
|
|
|
|
* It is assumed that the waiters bit is on.
|
|
|
|
*/
|
2001-07-04 20:15:18 +00:00
|
|
|
void
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_xunbusy_hard(vm_page_t m)
|
2001-07-04 20:15:18 +00:00
|
|
|
{
|
2002-07-31 07:27:08 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_assert_xbusied(m);
|
|
|
|
|
|
|
|
vm_page_lock(m);
|
|
|
|
atomic_store_rel_int(&m->busy_lock, VPB_UNBUSIED);
|
|
|
|
wakeup(m);
|
|
|
|
vm_page_unlock(m);
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
/*
|
|
|
|
* vm_page_flash:
|
|
|
|
*
|
|
|
|
* Wakeup anyone waiting for the page.
|
|
|
|
* The ownership bits do not change.
|
|
|
|
*
|
|
|
|
* The given page must be locked.
|
|
|
|
*/
|
2001-07-04 20:15:18 +00:00
|
|
|
void
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_flash(vm_page_t m)
|
2001-07-04 20:15:18 +00:00
|
|
|
{
|
2013-08-09 11:11:11 +00:00
|
|
|
u_int x;
|
2002-08-01 17:57:42 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_lock_assert(m, MA_OWNED);
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
x = m->busy_lock;
|
|
|
|
if ((x & VPB_BIT_WAITERS) == 0)
|
|
|
|
return;
|
|
|
|
if (atomic_cmpset_int(&m->busy_lock, x,
|
|
|
|
x & (~VPB_BIT_WAITERS)))
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
wakeup(m);
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Keep the page from being freed by the page daemon.  This has much the
|
|
|
|
* same effect as wiring, except with much lower overhead, and it should
|
|
|
|
* be used only for *very* temporary holding ("wiring").
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_hold(vm_page_t mem)
|
|
|
|
{
|
2003-01-20 09:24:03 +00:00
|
|
|
|
2010-04-30 00:46:43 +00:00
|
|
|
vm_page_lock_assert(mem, MA_OWNED);
|
2001-07-04 20:15:18 +00:00
|
|
|
mem->hold_count++;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
vm_page_unhold(vm_page_t mem)
|
|
|
|
{
|
2002-12-15 00:06:02 +00:00
|
|
|
|
2010-04-30 00:46:43 +00:00
|
|
|
vm_page_lock_assert(mem, MA_OWNED);
|
2013-09-16 06:25:54 +00:00
|
|
|
KASSERT(mem->hold_count >= 1, ("vm_page_unhold: hold count < 0!!!"));
|
2001-07-04 20:15:18 +00:00
|
|
|
--mem->hold_count;
|
Replace the page hold queue, PQ_HOLD, by a new page flag, PG_UNHOLDFREE,
because the queue itself serves no purpose. When a held page is freed,
inserting the page into the hold queue has the side effect of setting the
page's "queue" field to PQ_HOLD. Later, when the page is unheld, it will
be freed because the "queue" field is PQ_HOLD. In other words, PQ_HOLD is
used as a flag, not a queue. So, this change replaces it with a flag.
To accommodate the new page flag, make the page's "flags" field wider and
"oflags" field narrower.
Reviewed by: kib
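A minimal sketch of the free side of this protocol, assuming the page lock is held; the helper name is hypothetical, the real logic lives in vm_page_free_toq(), and the unhold side is visible in vm_page_unhold() just below.
/*
 * Sketch: if the page is still held, flag it PG_UNHOLDFREE so the final
 * vm_page_unhold() completes the free; otherwise the caller may free it
 * immediately.  The page lock is assumed to be held.
 */
static int
sketch_defer_free_if_held(vm_page_t m)
{

	if (m->hold_count != 0) {
		m->flags |= PG_UNHOLDFREE;
		return (1);		/* free deferred to the last unhold */
	}
	return (0);			/* safe to free now */
}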
2012-10-29 06:15:04 +00:00
|
|
|
if (mem->hold_count == 0 && (mem->flags & PG_UNHOLDFREE) != 0)
|
2002-02-19 23:19:30 +00:00
|
|
|
vm_page_free_toq(mem);
|
2001-07-04 20:15:18 +00:00
|
|
|
}
|
|
|
|
|
2010-12-17 22:41:22 +00:00
|
|
|
/*
|
|
|
|
* vm_page_unhold_pages:
|
|
|
|
*
|
|
|
|
* Unhold each of the pages that are referenced by the given array.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_unhold_pages(vm_page_t *ma, int count)
|
|
|
|
{
|
|
|
|
struct mtx *mtx, *new_mtx;
|
|
|
|
|
|
|
|
mtx = NULL;
|
|
|
|
for (; count != 0; count--) {
|
|
|
|
/*
|
|
|
|
* Avoid releasing and reacquiring the same page lock.
|
|
|
|
*/
|
|
|
|
new_mtx = vm_page_lockptr(*ma);
|
|
|
|
if (mtx != new_mtx) {
|
|
|
|
if (mtx != NULL)
|
|
|
|
mtx_unlock(mtx);
|
|
|
|
mtx = new_mtx;
|
|
|
|
mtx_lock(mtx);
|
|
|
|
}
|
|
|
|
vm_page_unhold(*ma);
|
|
|
|
ma++;
|
|
|
|
}
|
|
|
|
if (mtx != NULL)
|
|
|
|
mtx_unlock(mtx);
|
|
|
|
}
|
|
|
|
|
2012-05-12 20:42:56 +00:00
|
|
|
vm_page_t
|
|
|
|
PHYS_TO_VM_PAGE(vm_paddr_t pa)
|
|
|
|
{
|
|
|
|
vm_page_t m;
|
|
|
|
|
|
|
|
#ifdef VM_PHYSSEG_SPARSE
|
|
|
|
m = vm_phys_paddr_to_vm_page(pa);
|
|
|
|
if (m == NULL)
|
|
|
|
m = vm_phys_fictitious_to_vm_page(pa);
|
|
|
|
return (m);
|
|
|
|
#elif defined(VM_PHYSSEG_DENSE)
|
|
|
|
long pi;
|
|
|
|
|
|
|
|
pi = atop(pa);
|
2012-05-22 07:04:23 +00:00
|
|
|
if (pi >= first_page && (pi - first_page) < vm_page_array_size) {
|
2012-05-12 20:42:56 +00:00
|
|
|
m = &vm_page_array[pi - first_page];
|
|
|
|
return (m);
|
|
|
|
}
|
|
|
|
return (vm_phys_fictitious_to_vm_page(pa));
|
|
|
|
#else
|
|
|
|
#error "Either VM_PHYSSEG_DENSE or VM_PHYSSEG_SPARSE must be defined."
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2011-03-11 07:07:48 +00:00
|
|
|
/*
|
|
|
|
* vm_page_getfake:
|
|
|
|
*
|
|
|
|
* Create a fictitious page with the specified physical address and
|
|
|
|
* memory attribute. The memory attribute is the only machine-
|
|
|
|
* dependent aspect of a fictitious page that must be initialized.
|
|
|
|
*/
|
|
|
|
vm_page_t
|
|
|
|
vm_page_getfake(vm_paddr_t paddr, vm_memattr_t memattr)
|
|
|
|
{
|
|
|
|
vm_page_t m;
|
|
|
|
|
|
|
|
m = uma_zalloc(fakepg_zone, M_WAITOK | M_ZERO);
|
2012-05-12 20:34:22 +00:00
|
|
|
vm_page_initfake(m, paddr, memattr);
|
|
|
|
return (m);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
vm_page_initfake(vm_page_t m, vm_paddr_t paddr, vm_memattr_t memattr)
|
|
|
|
{
|
|
|
|
|
|
|
|
if ((m->flags & PG_FICTITIOUS) != 0) {
|
|
|
|
/*
|
|
|
|
* The page's memattr might have changed since the
|
|
|
|
* previous initialization. Update the pmap to the
|
|
|
|
* new memattr.
|
|
|
|
*/
|
|
|
|
goto memattr;
|
|
|
|
}
|
2011-03-11 07:07:48 +00:00
|
|
|
m->phys_addr = paddr;
|
|
|
|
m->queue = PQ_NONE;
|
|
|
|
/* Fictitious pages don't use "segind". */
|
|
|
|
m->flags = PG_FICTITIOUS;
|
|
|
|
/* Fictitious pages don't use "order" or "pool". */
|
2013-08-09 11:11:11 +00:00
|
|
|
m->oflags = VPO_UNMANAGED;
|
|
|
|
m->busy_lock = VPB_SINGLE_EXCLUSIVER;
|
2011-03-11 07:07:48 +00:00
|
|
|
m->wire_count = 1;
|
2013-07-03 23:38:37 +00:00
|
|
|
pmap_page_init(m);
|
2012-05-12 20:34:22 +00:00
|
|
|
memattr:
|
2011-03-11 07:07:48 +00:00
|
|
|
pmap_page_set_memattr(m, memattr);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_putfake:
|
|
|
|
*
|
|
|
|
* Release a fictitious page.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_putfake(vm_page_t m)
|
|
|
|
{
|
|
|
|
|
2012-05-12 20:27:51 +00:00
|
|
|
KASSERT((m->oflags & VPO_UNMANAGED) != 0, ("managed %p", m));
|
2011-03-11 07:07:48 +00:00
|
|
|
KASSERT((m->flags & PG_FICTITIOUS) != 0,
|
|
|
|
("vm_page_putfake: bad page %p", m));
|
|
|
|
uma_zfree(fakepg_zone, m);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_updatefake:
|
|
|
|
*
|
|
|
|
* Update the given fictitious page to the specified physical address and
|
|
|
|
* memory attribute.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_updatefake(vm_page_t m, vm_paddr_t paddr, vm_memattr_t memattr)
|
|
|
|
{
|
|
|
|
|
|
|
|
KASSERT((m->flags & PG_FICTITIOUS) != 0,
|
|
|
|
("vm_page_updatefake: bad page %p", m));
|
|
|
|
m->phys_addr = paddr;
|
|
|
|
pmap_page_set_memattr(m, memattr);
|
|
|
|
}
|
|
|
|
|
2001-07-04 20:15:18 +00:00
|
|
|
/*
|
|
|
|
* vm_page_free:
|
|
|
|
*
|
2007-02-17 19:37:00 +00:00
|
|
|
* Free a page.
|
2001-07-04 20:15:18 +00:00
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_free(vm_page_t m)
|
|
|
|
{
|
2007-02-18 05:54:42 +00:00
|
|
|
|
|
|
|
m->flags &= ~PG_ZERO;
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_free_toq(m);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_free_zero:
|
|
|
|
*
|
|
|
|
* Free a page to the zeroed-pages queue.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_free_zero(vm_page_t m)
|
|
|
|
{
|
2007-02-18 05:54:42 +00:00
|
|
|
|
|
|
|
m->flags |= PG_ZERO;
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_free_toq(m);
|
|
|
|
}
|
|
|
|
|
2012-08-04 18:16:43 +00:00
|
|
|
/*
|
|
|
|
* Unbusy and handle the page queueing for a page from the VOP_GETPAGES()
|
|
|
|
* array which is not the requested page.
|
|
|
|
*/
|
|
|
|
void
|
2012-08-14 11:45:47 +00:00
|
|
|
vm_page_readahead_finish(vm_page_t m)
|
2012-08-04 18:16:43 +00:00
|
|
|
{
|
|
|
|
|
2012-08-14 11:45:47 +00:00
|
|
|
if (m->valid != 0) {
|
2012-08-04 18:16:43 +00:00
|
|
|
/*
|
|
|
|
* Since the page is not the requested page, whether
|
|
|
|
* it should be activated or deactivated is not
|
|
|
|
* obvious. Empirical results have shown that
|
|
|
|
* deactivating the page is usually the best choice,
|
|
|
|
* unless the page is wanted by another thread.
|
|
|
|
*/
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_lock(m);
|
|
|
|
if ((m->busy_lock & VPB_BIT_WAITERS) != 0)
|
2012-08-04 18:16:43 +00:00
|
|
|
vm_page_activate(m);
|
2013-08-09 11:11:11 +00:00
|
|
|
else
|
2012-08-04 18:16:43 +00:00
|
|
|
vm_page_deactivate(m);
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_unlock(m);
|
|
|
|
vm_page_xunbusy(m);
|
2012-08-04 18:16:43 +00:00
|
|
|
} else {
|
2012-08-14 11:45:47 +00:00
|
|
|
/*
|
|
|
|
* Free the completely invalid page.  This page state occurs when a
|
|
|
|
* short read did not cover the page at all, or when a read error
|
|
|
|
* happened.
|
|
|
|
*/
|
2012-08-04 18:16:43 +00:00
|
|
|
vm_page_lock(m);
|
|
|
|
vm_page_free(m);
|
|
|
|
vm_page_unlock(m);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2002-07-29 19:41:22 +00:00
|
|
|
/*
|
2013-08-09 11:11:11 +00:00
|
|
|
* vm_page_sleep_if_busy:
|
2002-07-29 19:41:22 +00:00
|
|
|
*
|
2013-08-09 11:11:11 +00:00
|
|
|
* Sleep and release the page queues lock if the page is busied.
|
|
|
|
* Returns TRUE if the thread slept.
|
2006-08-27 19:50:13 +00:00
|
|
|
*
|
2013-08-09 11:11:11 +00:00
|
|
|
* The given page must be unlocked and the object containing it must
|
|
|
|
* be locked.
|
2002-07-29 19:41:22 +00:00
|
|
|
*/
|
2013-08-09 11:11:11 +00:00
|
|
|
int
|
|
|
|
vm_page_sleep_if_busy(vm_page_t m, const char *msg)
|
2002-07-29 19:41:22 +00:00
|
|
|
{
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_object_t obj;
|
2002-07-29 19:41:22 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_lock_assert(m, MA_NOTOWNED);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2006-08-27 19:50:13 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
if (vm_page_busied(m)) {
|
|
|
|
/*
|
|
|
|
* The page's object must be cached locally because the page's
|
|
|
|
* identity can change during the sleep, which would otherwise
|
|
|
|
* cause a different object to be re-locked.  It is assumed
|
|
|
|
* that the callers already hold a reference to the object.
|
|
|
|
*/
|
|
|
|
obj = m->object;
|
|
|
|
vm_page_lock(m);
|
|
|
|
VM_OBJECT_WUNLOCK(obj);
|
|
|
|
vm_page_busy_sleep(m, msg);
|
|
|
|
VM_OBJECT_WLOCK(obj);
|
|
|
|
return (TRUE);
|
|
|
|
}
|
|
|
|
return (FALSE);
|
2002-07-29 19:41:22 +00:00
|
|
|
}
|
|
|
|
|
2001-07-04 20:15:18 +00:00
|
|
|
/*
|
2012-06-20 23:25:47 +00:00
|
|
|
* vm_page_dirty_KBI: [ internal use only ]
|
2001-07-04 20:15:18 +00:00
|
|
|
*
|
2011-06-19 19:13:24 +00:00
|
|
|
* Set all bits in the page's dirty field.
|
|
|
|
*
|
2011-09-28 14:57:50 +00:00
|
|
|
* The object containing the specified page must be locked if the
|
|
|
|
* call is made from the machine-independent layer.
|
|
|
|
*
|
2011-06-19 19:13:24 +00:00
|
|
|
* See vm_page_clear_dirty_mask().
|
2012-06-20 23:25:47 +00:00
|
|
|
*
|
|
|
|
* This function should only be called by vm_page_dirty().
|
2001-07-04 20:15:18 +00:00
|
|
|
*/
|
|
|
|
void
|
2012-06-20 23:25:47 +00:00
|
|
|
vm_page_dirty_KBI(vm_page_t m)
|
2001-07-04 20:15:18 +00:00
|
|
|
{
|
2009-05-30 22:06:58 +00:00
|
|
|
|
2012-06-20 23:25:47 +00:00
|
|
|
/* These assertions refer to this operation by its public name. */
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
KASSERT((m->flags & PG_CACHED) == 0,
|
2001-07-04 20:15:18 +00:00
|
|
|
("vm_page_dirty: page in cache!"));
|
2009-05-30 22:06:58 +00:00
|
|
|
KASSERT(m->valid == VM_PAGE_BITS_ALL,
|
|
|
|
("vm_page_dirty: page is invalid!"));
|
2001-07-04 20:15:18 +00:00
|
|
|
m->dirty = VM_PAGE_BITS_ALL;
|
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_page_insert: [ internal use only ]
|
|
|
|
*
|
1998-12-23 01:52:47 +00:00
|
|
|
* Inserts the given mem entry into the object and object list.
|
|
|
|
*
|
2012-10-03 05:06:45 +00:00
|
|
|
* The object must be locked.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2013-08-09 11:28:55 +00:00
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_insert(vm_page_t m, vm_object_t object, vm_pindex_t pindex)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2013-05-12 16:50:18 +00:00
|
|
|
vm_page_t mpred;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2013-05-12 16:50:18 +00:00
|
|
|
mpred = vm_radix_lookup_le(&object->rtree, pindex);
|
2013-08-09 11:28:55 +00:00
|
|
|
return (vm_page_insert_after(m, object, pindex, mpred));
|
2013-05-12 16:50:18 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_insert_after:
|
|
|
|
*
|
|
|
|
* Inserts the page "m" into the specified object at offset "pindex".
|
|
|
|
*
|
|
|
|
* The page "mpred" must immediately precede the offset "pindex" within
|
|
|
|
* the specified object.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
2013-08-09 11:28:55 +00:00
|
|
|
static int
|
2013-05-12 16:50:18 +00:00
|
|
|
vm_page_insert_after(vm_page_t m, vm_object_t object, vm_pindex_t pindex,
|
|
|
|
vm_page_t mpred)
|
|
|
|
{
|
2013-08-09 11:28:55 +00:00
|
|
|
vm_pindex_t sidx;
|
|
|
|
vm_object_t sobj;
|
2013-05-12 16:50:18 +00:00
|
|
|
vm_page_t msucc;
|
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
|
|
|
KASSERT(m->object == NULL,
|
|
|
|
("vm_page_insert_after: page already inserted"));
|
|
|
|
if (mpred != NULL) {
|
2013-09-17 07:35:26 +00:00
|
|
|
KASSERT(mpred->object == object,
|
2013-05-12 16:50:18 +00:00
|
|
|
("vm_page_insert_after: object doesn't contain mpred"));
|
|
|
|
KASSERT(mpred->pindex < pindex,
|
|
|
|
("vm_page_insert_after: mpred doesn't precede pindex"));
|
|
|
|
msucc = TAILQ_NEXT(mpred, listq);
|
|
|
|
} else
|
|
|
|
msucc = TAILQ_FIRST(&object->memq);
|
|
|
|
if (msucc != NULL)
|
|
|
|
KASSERT(msucc->pindex > pindex,
|
|
|
|
("vm_page_insert_after: msucc doesn't succeed pindex"));
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Record the object/offset pair in this page
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2013-08-09 11:28:55 +00:00
|
|
|
sobj = m->object;
|
|
|
|
sidx = m->pindex;
|
1996-01-19 04:00:31 +00:00
|
|
|
m->object = object;
|
|
|
|
m->pindex = pindex;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
2002-10-18 17:24:30 +00:00
|
|
|
* Now link into the object's ordered list of backed pages.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2013-08-09 11:28:55 +00:00
|
|
|
if (vm_radix_insert(&object->rtree, m)) {
|
|
|
|
m->object = sobj;
|
|
|
|
m->pindex = sidx;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
vm_page_insert_radixdone(m, object, mpred);
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_insert_radixdone:
|
|
|
|
*
|
|
|
|
* Complete page "m" insertion into the specified object after the
|
|
|
|
* radix trie hooking.
|
|
|
|
*
|
|
|
|
* The page "mpred" must precede the offset "m->pindex" within the
|
|
|
|
* specified object.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
vm_page_insert_radixdone(vm_page_t m, vm_object_t object, vm_page_t mpred)
|
|
|
|
{
|
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
|
|
|
KASSERT(object != NULL && m->object == object,
|
|
|
|
("vm_page_insert_radixdone: page %p has inconsistent object", m));
|
|
|
|
if (mpred != NULL) {
|
2013-09-17 07:35:26 +00:00
|
|
|
KASSERT(mpred->object == object,
|
2013-08-09 11:28:55 +00:00
|
|
|
("vm_page_insert_after: object doesn't contain mpred"));
|
|
|
|
KASSERT(mpred->pindex < m->pindex,
|
|
|
|
("vm_page_insert_after: mpred doesn't precede pindex"));
|
|
|
|
}
|
|
|
|
|
2013-05-12 16:50:18 +00:00
|
|
|
if (mpred != NULL)
|
|
|
|
TAILQ_INSERT_AFTER(&object->memq, mpred, m, listq);
|
|
|
|
else
|
|
|
|
TAILQ_INSERT_HEAD(&object->memq, m, listq);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
2012-10-03 05:06:45 +00:00
|
|
|
* Show that the object has one more resident page.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
object->resident_page_count++;
|
2012-10-03 05:06:45 +00:00
|
|
|
|
2005-03-15 14:14:09 +00:00
|
|
|
/*
|
|
|
|
* Hold the vnode until the last page is released.
|
|
|
|
*/
|
|
|
|
if (object->resident_page_count == 1 && object->type == OBJT_VNODE)
|
2012-10-03 05:06:45 +00:00
|
|
|
vhold(object->handle);
|
1999-02-24 21:26:26 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Since we are inserting a new and possibly dirty page,
|
2006-07-21 06:40:29 +00:00
|
|
|
* update the object's OBJ_MIGHTBEDIRTY flag.
|
1999-02-24 21:26:26 +00:00
|
|
|
*/
|
2012-06-16 18:56:19 +00:00
|
|
|
if (pmap_page_is_write_mapped(m))
|
2001-10-26 00:08:05 +00:00
|
|
|
vm_object_set_writeable_dirty(object);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1999-01-21 08:29:12 +00:00
|
|
|
* vm_page_remove:
|
1994-05-24 10:09:53 +00:00
|
|
|
*
|
|
|
|
* Removes the given mem entry from the object/offset-page
|
1999-01-21 08:29:12 +00:00
|
|
|
* table and the object page list, but does not invalidate/terminate
|
|
|
|
* the backing store.
|
1994-05-24 10:09:53 +00:00
|
|
|
*
|
2012-10-03 05:06:45 +00:00
|
|
|
* The object must be locked. The page must be locked if it is managed.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1999-02-15 06:52:14 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_remove(vm_page_t m)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-02-05 03:32:49 +00:00
|
|
|
vm_object_t object;
|
2013-08-09 11:11:11 +00:00
|
|
|
boolean_t lockacq;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2011-08-09 21:01:36 +00:00
|
|
|
if ((m->oflags & VPO_UNMANAGED) == 0)
|
2010-05-05 18:16:06 +00:00
|
|
|
vm_page_lock_assert(m, MA_OWNED);
|
2004-11-03 20:17:31 +00:00
|
|
|
if ((object = m->object) == NULL)
|
1999-02-15 06:52:14 +00:00
|
|
|
return;
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2013-08-09 11:11:11 +00:00
|
|
|
if (vm_page_xbusied(m)) {
|
|
|
|
lockacq = FALSE;
|
|
|
|
if ((m->oflags & VPO_UNMANAGED) != 0 &&
|
|
|
|
!mtx_owned(vm_page_lockptr(m))) {
|
|
|
|
lockacq = TRUE;
|
|
|
|
vm_page_lock(m);
|
|
|
|
}
|
2004-11-03 20:17:31 +00:00
|
|
|
vm_page_flash(m);
|
2013-08-09 11:11:11 +00:00
|
|
|
atomic_store_rel_int(&m->busy_lock, VPB_UNBUSIED);
|
|
|
|
if (lockacq)
|
|
|
|
vm_page_unlock(m);
|
1998-01-31 11:56:53 +00:00
|
|
|
}
|
1999-01-21 08:29:12 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Now remove from the object's list of backed pages.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
Sync back vmcontention branch into HEAD:
Replace the per-object resident and cached pages splay tree with a
path-compressed multi-digit radix trie.
Along with this, switch the x86-specific handling of idle page
tables to using the radix trie as well.
This change is intended to do the following:
- Allow read locking to be used for lookup operations on the
resident/cached page collections, as the per-vm_page_t splay iterators
are now removed.
- Increase the scalability of operations on the page collections.
The radix trie relies on its consumers' locking to ensure atomicity of
its operations. In order to avoid deadlocks, the bisection nodes are
pre-allocated in the UMA zone. This can be done safely because the
algorithm needs at most one new node per insert, so the maximum number
of nodes needed is bounded by the number of available physical frames.
In practice, a new bisection node is not needed on every insert.
The radix trie implements path compression because UFS indirect blocks
can lead to objects with very sparse tries, increasing the number of
levels that must usually be scanned. Path compression also helps node
prefetching by introducing the single-node-per-insert property.
This code is not generalized (yet) because making many of the sizes in
play configurable could cost performance. However, efforts to make this
code more general, and thus reusable by other consumers, may follow.
The only KPI change is the removal of the function vm_page_splay(),
which is now reaped.
The only KBI change is the removal of the left/right iterators from
struct vm_page, which are now reaped.
Further technical notes, broken into smaller pieces, can be retrieved
from the svn branch:
http://svn.freebsd.org/base/user/attilio/vmcontention/
Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
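A minimal sketch of the read-locked lookups this enables, using only calls that already appear in this file (vm_radix_lookup(), vm_radix_lookup_le(), VM_OBJECT_ASSERT_LOCKED()); the helper itself is hypothetical.
/*
 * Sketch: return the resident page at "pindex", or the closest resident
 * predecessor if there is none.  A read lock on the object is assumed
 * to be sufficient for trie lookups.
 */
static vm_page_t
sketch_lookup_or_pred(vm_object_t object, vm_pindex_t pindex)
{
	vm_page_t m;

	VM_OBJECT_ASSERT_LOCKED(object);
	m = vm_radix_lookup(&object->rtree, pindex);
	if (m == NULL)
		m = vm_radix_lookup_le(&object->rtree, pindex);
	return (m);
}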
2013-03-18 00:25:02 +00:00
|
|
|
vm_radix_remove(&object->rtree, m->pindex);
|
1998-02-05 03:32:49 +00:00
|
|
|
TAILQ_REMOVE(&object->memq, m, listq);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* And show that the object has one fewer resident page.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1998-02-05 03:32:49 +00:00
|
|
|
object->resident_page_count--;
|
2012-10-03 05:06:45 +00:00
|
|
|
|
2005-03-15 14:14:09 +00:00
|
|
|
/*
|
|
|
|
* The vnode may now be recycled.
|
|
|
|
*/
|
|
|
|
if (object->resident_page_count == 0 && object->type == OBJT_VNODE)
|
2012-10-03 05:06:45 +00:00
|
|
|
vdrop(object->handle);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1998-10-21 14:46:42 +00:00
|
|
|
m->object = NULL;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_lookup:
|
|
|
|
*
|
|
|
|
* Returns the page associated with the object/offset
|
|
|
|
* pair specified; if none is found, NULL is returned.
|
|
|
|
*
|
2002-10-18 17:24:30 +00:00
|
|
|
* The object must be locked.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1995-05-30 08:16:23 +00:00
|
|
|
vm_page_t
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_lookup(vm_object_t object, vm_pindex_t pindex)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
|
2013-05-17 18:49:43 +00:00
|
|
|
VM_OBJECT_ASSERT_LOCKED(object);
|
2013-03-18 00:25:02 +00:00
|
|
|
return (vm_radix_lookup(&object->rtree, pindex));
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
2010-07-04 11:13:33 +00:00
|
|
|
/*
|
|
|
|
* vm_page_find_least:
|
|
|
|
*
|
|
|
|
* Returns the page associated with the object with least pindex
|
|
|
|
* greater than or equal to the parameter pindex, or NULL.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
|
|
|
vm_page_t
|
|
|
|
vm_page_find_least(vm_object_t object, vm_pindex_t pindex)
|
|
|
|
{
|
|
|
|
vm_page_t m;
|
|
|
|
|
2013-05-21 20:38:19 +00:00
|
|
|
VM_OBJECT_ASSERT_LOCKED(object);
|
2013-03-18 00:25:02 +00:00
|
|
|
if ((m = TAILQ_FIRST(&object->memq)) != NULL && m->pindex < pindex)
|
|
|
|
m = vm_radix_lookup_ge(&object->rtree, pindex);
|
2010-07-04 11:13:33 +00:00
|
|
|
return (m);
|
|
|
|
}
|
|
|
|
|
2010-06-21 23:27:24 +00:00
|
|
|
/*
|
|
|
|
* Returns the given page's successor (by pindex) within the object if it is
|
|
|
|
* resident; if none is found, NULL is returned.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
|
|
|
vm_page_t
|
|
|
|
vm_page_next(vm_page_t m)
|
|
|
|
{
|
|
|
|
vm_page_t next;
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2010-06-21 23:27:24 +00:00
|
|
|
if ((next = TAILQ_NEXT(m, listq)) != NULL &&
|
|
|
|
next->pindex != m->pindex + 1)
|
|
|
|
next = NULL;
|
|
|
|
return (next);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Returns the given page's predecessor (by pindex) within the object if it is
|
|
|
|
* resident; if none is found, NULL is returned.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
|
|
|
vm_page_t
|
|
|
|
vm_page_prev(vm_page_t m)
|
|
|
|
{
|
|
|
|
vm_page_t prev;
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2010-06-21 23:27:24 +00:00
|
|
|
if ((prev = TAILQ_PREV(m, pglist, listq)) != NULL &&
|
|
|
|
prev->pindex != m->pindex - 1)
|
|
|
|
prev = NULL;
|
|
|
|
return (prev);
|
|
|
|
}
|
|
|
|
|
2013-08-09 11:28:55 +00:00
|
|
|
/*
|
|
|
|
* Uses the page mnew as a replacement for an existing page at index
|
|
|
|
* pindex, which must already be present in the object.
|
2013-08-09 21:14:55 +00:00
|
|
|
*
|
|
|
|
* The existing page must not be on a paging queue.
|
2013-08-09 11:28:55 +00:00
|
|
|
*/
|
|
|
|
vm_page_t
|
|
|
|
vm_page_replace(vm_page_t mnew, vm_object_t object, vm_pindex_t pindex)
|
|
|
|
{
|
|
|
|
vm_page_t mold, mpred;
|
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This function mostly follows vm_page_insert() and
|
|
|
|
* vm_page_remove() without the radix, object count and vnode
|
|
|
|
* dance. Double check such functions for more comments.
|
|
|
|
*/
|
|
|
|
mpred = vm_radix_lookup(&object->rtree, pindex);
|
|
|
|
KASSERT(mpred != NULL,
|
|
|
|
("vm_page_replace: replacing page not present with pindex"));
|
|
|
|
mpred = TAILQ_PREV(mpred, respgs, listq);
|
|
|
|
if (mpred != NULL)
|
|
|
|
KASSERT(mpred->pindex < pindex,
|
|
|
|
("vm_page_insert_after: mpred doesn't precede pindex"));
|
|
|
|
|
|
|
|
mnew->object = object;
|
|
|
|
mnew->pindex = pindex;
|
2013-12-08 20:07:02 +00:00
|
|
|
mold = vm_radix_replace(&object->rtree, mnew);
|
2013-08-09 21:14:55 +00:00
|
|
|
KASSERT(mold->queue == PQ_NONE,
|
|
|
|
("vm_page_replace: mold is on a paging queue"));
|
2013-08-09 11:28:55 +00:00
|
|
|
|
|
|
|
/* Detach the old page from the resident tailq. */
|
|
|
|
TAILQ_REMOVE(&object->memq, mold, listq);
|
2013-08-09 21:14:55 +00:00
|
|
|
|
2013-08-09 11:28:55 +00:00
|
|
|
mold->object = NULL;
|
2013-08-09 21:14:55 +00:00
|
|
|
vm_page_xunbusy(mold);
|
2013-08-09 11:28:55 +00:00
|
|
|
|
|
|
|
/* Insert the new page in the resident tailq. */
|
|
|
|
if (mpred != NULL)
|
|
|
|
TAILQ_INSERT_AFTER(&object->memq, mpred, mnew, listq);
|
|
|
|
else
|
|
|
|
TAILQ_INSERT_HEAD(&object->memq, mnew, listq);
|
|
|
|
if (pmap_page_is_write_mapped(mnew))
|
|
|
|
vm_object_set_writeable_dirty(object);
|
|
|
|
return (mold);
|
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_page_rename:
|
|
|
|
*
|
|
|
|
* Move the given memory entry from its
|
|
|
|
* current object to the specified target object/offset.
|
|
|
|
*
|
1999-01-21 08:29:12 +00:00
|
|
|
* Note: swap associated with the page must be invalidated by the move. We
|
|
|
|
* have to do this for several reasons: (1) we aren't freeing the
|
|
|
|
* page, (2) we are dirtying the page, (3) the VM system is probably
|
|
|
|
* moving the page from object A to B, and will then later move
|
|
|
|
* the backing store from A to B and we can't have a conflict.
|
|
|
|
*
|
|
|
|
* Note: we *always* dirty the page. It is necessary both for the
|
|
|
|
* fact that we moved it, and because we may be invalidating
|
1999-01-24 06:00:31 +00:00
|
|
|
* swap. If the page is on the cache, we have to deactivate it
|
|
|
|
* or vm_page_dirty() will panic. Dirty pages are not allowed
|
|
|
|
* on the cache.
|
2012-10-03 05:06:45 +00:00
|
|
|
*
|
2013-08-09 11:28:55 +00:00
|
|
|
* The objects must be locked.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2013-08-09 11:28:55 +00:00
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_rename(vm_page_t m, vm_object_t new_object, vm_pindex_t new_pindex)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2013-08-09 11:28:55 +00:00
|
|
|
vm_page_t mpred;
|
|
|
|
vm_pindex_t opidx;
|
1995-01-09 16:06:02 +00:00
|
|
|
|
2013-08-09 11:28:55 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(new_object);
|
|
|
|
|
|
|
|
mpred = vm_radix_lookup_le(&new_object->rtree, new_pindex);
|
|
|
|
KASSERT(mpred == NULL || mpred->pindex != new_pindex,
|
|
|
|
("vm_page_rename: pindex already renamed"));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Create a custom version of vm_page_insert() which does not depend
|
|
|
|
* by m_prev and can cheat on the implementation aspects of the
|
|
|
|
* function.
|
|
|
|
*/
|
|
|
|
opidx = m->pindex;
|
|
|
|
m->pindex = new_pindex;
|
|
|
|
if (vm_radix_insert(&new_object->rtree, m)) {
|
|
|
|
m->pindex = opidx;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The operation cannot fail anymore. The removal must happen before
|
|
|
|
* the listq iterator is tainted.
|
|
|
|
*/
|
|
|
|
m->pindex = opidx;
|
|
|
|
vm_page_lock(m);
|
1996-01-19 04:00:31 +00:00
|
|
|
vm_page_remove(m);
|
2013-08-09 11:28:55 +00:00
|
|
|
|
|
|
|
/* Return back to the new pindex to complete vm_page_insert(). */
|
|
|
|
m->pindex = new_pindex;
|
|
|
|
m->object = new_object;
|
|
|
|
vm_page_unlock(m);
|
|
|
|
vm_page_insert_radixdone(m, new_object, mpred);
|
1999-01-24 06:00:31 +00:00
|
|
|
vm_page_dirty(m);
|
2013-08-09 11:28:55 +00:00
|
|
|
return (0);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
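
/*
 * Illustrative usage sketch (editor's addition, guarded out of the build):
 * a caller moving a page between two write-locked objects must be prepared
 * for vm_page_rename() to fail when a radix trie node cannot be allocated.
 * The back-off shown here (drop the locks, wait for memory, retry) is only a
 * sketch; after dropping the locks a real caller would also have to
 * revalidate "m".  The helper itself is hypothetical.
 */
#if 0
static void
example_move_page(vm_object_t old_object, vm_object_t new_object, vm_page_t m,
    vm_pindex_t new_pindex)
{

        VM_OBJECT_ASSERT_WLOCKED(old_object);
        VM_OBJECT_ASSERT_WLOCKED(new_object);
        while (vm_page_rename(m, new_object, new_pindex) != 0) {
                /* Radix node allocation failed; back off and retry. */
                VM_OBJECT_WUNLOCK(new_object);
                VM_OBJECT_WUNLOCK(old_object);
                VM_WAIT;
                VM_OBJECT_WLOCK(old_object);
                VM_OBJECT_WLOCK(new_object);
        }
}
#endif
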
/*
 *	Convert all of the given object's cached pages that have a
 *	pindex within the given range into free pages.  If the value
 *	zero is given for "end", then the range's upper bound is
 *	infinity.  If the given object is backed by a vnode and it
 *	transitions from having one or more cached pages to none, the
 *	vnode's hold count is reduced.
 */
void
vm_page_cache_free(vm_object_t object, vm_pindex_t start, vm_pindex_t end)
{
        vm_page_t m;
        boolean_t empty;

        mtx_lock(&vm_page_queue_free_mtx);
        if (__predict_false(vm_radix_is_empty(&object->cache))) {
                mtx_unlock(&vm_page_queue_free_mtx);
                return;
        }
        while ((m = vm_radix_lookup_ge(&object->cache, start)) != NULL) {
                if (end != 0 && m->pindex >= end)
                        break;
                vm_radix_remove(&object->cache, m->pindex);
                vm_page_cache_turn_free(m);
        }
        empty = vm_radix_is_empty(&object->cache);
        mtx_unlock(&vm_page_queue_free_mtx);
        if (object->type == OBJT_VNODE && empty)
                vdrop(object->handle);
}
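
/*
 * Illustrative usage sketch (editor's addition, guarded out of the build):
 * releasing every cached page that backs an object, for example while the
 * object is being torn down.  Passing 0 for "end" makes the range unbounded,
 * as described above.  Whether the object lock must be held at this call
 * site is an assumption of the example; the in-tree callers define the
 * authoritative locking protocol.
 */
#if 0
static void
example_drop_cached_pages(vm_object_t object)
{

        VM_OBJECT_WLOCK(object);
        vm_page_cache_free(object, 0, 0);       /* the whole pindex range */
        VM_OBJECT_WUNLOCK(object);
}
#endif
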
/*
 *	Returns the cached page that is associated with the given
 *	object and offset.  If, however, none exists, returns NULL.
 *
 *	The free page queue must be locked.
 */
static inline vm_page_t
vm_page_cache_lookup(vm_object_t object, vm_pindex_t pindex)
{

        mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
        return (vm_radix_lookup(&object->cache, pindex));
}
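
/*
 * Illustrative usage sketch (editor's addition, guarded out of the build):
 * vm_page_cache_lookup() is only valid with the free page queue mutex held,
 * because that lock, not the object lock, synchronizes the per-object cache
 * trie.  The helper below is hypothetical and exists only to show the
 * locking discipline.
 */
#if 0
static boolean_t
example_is_pindex_cached(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        mtx_lock(&vm_page_queue_free_mtx);
        m = vm_page_cache_lookup(object, pindex);
        mtx_unlock(&vm_page_queue_free_mtx);
        return (m != NULL);
}
#endif
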
/*
 *	Remove the given cached page from its containing object's
 *	collection of cached pages.
 *
 *	The free page queue must be locked.
 */
static void
vm_page_cache_remove(vm_page_t m)
{

        mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
        KASSERT((m->flags & PG_CACHED) != 0,
            ("vm_page_cache_remove: page %p is not cached", m));
        vm_radix_remove(&m->object->cache, m->pindex);
        m->object = NULL;
        vm_cnt.v_cache_count--;
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Transfer all of the cached pages with offset greater than or
|
|
|
|
* equal to 'offidxstart' from the original object's cache to the
|
2007-10-27 00:09:30 +00:00
|
|
|
* new object's cache. However, any cached pages with offset
|
|
|
|
* greater than or equal to the new object's size are kept in the
|
|
|
|
* original object. Initially, the new object's cache must be
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
* empty. Offset 'offidxstart' in the original object must
|
|
|
|
* correspond to offset zero in the new object.
|
|
|
|
*
|
|
|
|
* The new object must be locked.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_cache_transfer(vm_object_t orig_object, vm_pindex_t offidxstart,
|
|
|
|
vm_object_t new_object)
|
|
|
|
{
|
Sync back vmcontention branch into HEAD:
Replace the per-object resident and cached pages splay tree with a
path-compressed multi-digit radix trie.
Along with this, switch also the x86-specific handling of idle page
tables to using the radix trie.
This change is supposed to do the following:
- Allowing the acquisition of read locking for lookup operations of the
resident/cached pages collections as the per-vm_page_t splay iterators
are now removed.
- Increase the scalability of the operations on the page collections.
The radix trie does rely on the consumers locking to ensure atomicity of
its operations. In order to avoid deadlocks the bisection nodes are
pre-allocated in the UMA zone. This can be done safely because the
algorithm needs at maximum one new node per insert which means the
maximum number of the desired nodes is the number of available physical
frames themselves. However, not all the times a new bisection node is
really needed.
The radix trie implements path-compression because UFS indirect blocks
can lead to several objects with a very sparse trie, increasing the number
of levels to usually scan. It also helps in the nodes pre-fetching by
introducing the single node per-insert property.
This code is not generalized (yet) because of the possible loss of
performance by having much of the sizes in play configurable.
However, efforts to make this code more general and then reusable in
further different consumers might be really done.
The only KPI change is the removal of the function vm_page_splay() which
is now reaped.
The only KBI change, instead, is the removal of the left/right iterators
from struct vm_page, which are now reaped.
Further technical notes broken into mealpieces can be retrieved from the
svn branch:
http://svn.freebsd.org/base/user/attilio/vmcontention/
Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
2013-03-18 00:25:02 +00:00
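One practical consequence of removing the per-page splay iterators is that a lookup no longer modifies the collection, so it can run under the object's read lock.  A minimal sketch, assuming the standard VM object lock macros:

        VM_OBJECT_RLOCK(object);
        m = vm_radix_lookup(&object->rtree, pindex);
        VM_OBJECT_RUNLOCK(object);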
        vm_page_t m;

        /*
         * Insertion into an object's collection of cached pages
         * requires the object to be locked.  In contrast, removal does
         * not.
         */
        VM_OBJECT_ASSERT_WLOCKED(new_object);
        KASSERT(vm_radix_is_empty(&new_object->cache),
            ("vm_page_cache_transfer: object %p has cached pages",
            new_object));
        mtx_lock(&vm_page_queue_free_mtx);
        while ((m = vm_radix_lookup_ge(&orig_object->cache,
            offidxstart)) != NULL) {
                /*
                 * Transfer all of the pages with offset greater than or
                 * equal to 'offidxstart' from the original object's
                 * cache to the new object's cache.
                 */
                if ((m->pindex - offidxstart) >= new_object->size)
                        break;
                vm_radix_remove(&orig_object->cache, m->pindex);
                /* Update the page's object and offset. */
                m->object = new_object;
                m->pindex -= offidxstart;
                if (vm_radix_insert(&new_object->cache, m))
                        vm_page_cache_turn_free(m);
This mega-commit is meant to fix numerous interrelated problems. There
has been some bitrot and incorrect assumptions in the vfs_bio code. These
problems have manifest themselves worse on NFS type filesystems, but can
still affect local filesystems under certain circumstances. Most of
the problems have involved mmap consistancy, and as a side-effect broke
the vfs.ioopt code. This code might have been committed seperately, but
almost everything is interrelated.
1) Allow (pmap_object_init_pt) prefaulting of buffer-busy pages that
are fully valid.
2) Rather than deactivating erroneously read initial (header) pages in
kern_exec, we now free them.
3) Fix the rundown of non-VMIO buffers that are in an inconsistent
(missing vp) state.
4) Fix the disassociation of pages from buffers in brelse. The previous
code had rotted and was faulty in a couple of important circumstances.
5) Remove a gratuitious buffer wakeup in vfs_vmio_release.
6) Remove a crufty and currently unused cluster mechanism for VBLK
files in vfs_bio_awrite. When the code is functional, I'll add back
a cleaner version.
7) The page busy count wakeups assocated with the buffer cache usage were
incorrectly cleaned up in a previous commit by me. Revert to the
original, correct version, but with a cleaner implementation.
8) The cluster read code now tries to keep data associated with buffers
more aggressively (without breaking the heuristics) when it is presumed
that the read data (buffers) will be soon needed.
9) Change to filesystem lockmgr locks so that they use LK_NOPAUSE. The
delay loop waiting is not useful for filesystem locks, due to the
length of the time intervals.
10) Correct and clean-up spec_getpages.
11) Implement a fully functional nfs_getpages, nfs_putpages.
12) Fix nfs_write so that modifications are coherent with the NFS data on
the server disk (at least as well as NFS seems to allow.)
13) Properly support MS_INVALIDATE on NFS.
14) Properly pass down MS_INVALIDATE to lower levels of the VM code from
vm_map_clean.
15) Better support the notion of pages being busy but valid, so that
fewer in-transit waits occur. (use p->busy more for pageouts instead
of PG_BUSY.) Since the page is fully valid, it is still usable for
reads.
16) It is possible (in error) for cached pages to be busy. Make the
page allocation code handle that case correctly. (It should probably
be a printf or panic, but I want the system to handle coding errors
robustly. I'll probably add a printf.)
17) Correct the design and usage of vm_page_sleep. It didn't handle
consistancy problems very well, so make the design a little less
lofty. After vm_page_sleep, if it ever blocked, it is still important
to relookup the page (if the object generation count changed), and
verify it's status (always.)
18) In vm_pageout.c, vm_pageout_clean had rotted, so clean that up.
19) Push the page busy for writes and VM_PROT_READ into vm_pageout_flush.
20) Fix vm_pager_put_pages and it's descendents to support an int flag
instead of a boolean, so that we can pass down the invalidate bit.
1998-03-07 21:37:31 +00:00
        }
        mtx_unlock(&vm_page_queue_free_mtx);
}
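For context, one expected caller of vm_page_cache_transfer() is the object-splitting code in vm_object.c; a simplified sketch of such a call site, guarded by the cheap emptiness check so the free page queues lock is only taken when there is something to move:

        if (!vm_object_cache_is_empty(orig_object))
                vm_page_cache_transfer(orig_object, offidxstart, new_object);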

/*
 * Returns TRUE if a cached page is associated with the given object and
 * offset, and FALSE otherwise.
 *
 * The object must be locked.
 */
boolean_t
vm_page_is_cached(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        /*
         * Insertion into an object's collection of cached pages requires the
         * object to be locked.  Therefore, if the object is locked and the
         * object's collection is empty, there is no need to acquire the free
         * page queues lock in order to prove that the specified page doesn't
         * exist.
         */
        VM_OBJECT_ASSERT_WLOCKED(object);
        if (__predict_true(vm_object_cache_is_empty(object)))
                return (FALSE);
        mtx_lock(&vm_page_queue_free_mtx);
        m = vm_page_cache_lookup(object, pindex);
        mtx_unlock(&vm_page_queue_free_mtx);
        return (m != NULL);
}
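A minimal caller sketch for vm_page_is_cached(): the object lock must be held across the check, for example to decide whether a read can be satisfied from the still-cached page; the local variable is hypothetical:

        boolean_t cached;

        VM_OBJECT_WLOCK(object);
        cached = vm_page_is_cached(object, pindex);
        VM_OBJECT_WUNLOCK(object);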

/*
 *	vm_page_alloc:
 *
 *	Allocate and return a page that is associated with the specified
 *	object and offset pair.  By default, this page is exclusive busied.
 *
 *	The caller must always specify an allocation class.
 *
 *	allocation classes:
 *	VM_ALLOC_NORMAL		normal process request
 *	VM_ALLOC_SYSTEM		system *really* needs a page
 *	VM_ALLOC_INTERRUPT	interrupt time request
 *
 *	optional allocation flags:
 *	VM_ALLOC_COUNT(number)	the number of additional pages that the caller
 *				intends to allocate
 *	VM_ALLOC_IFCACHED	return page only if it is cached
 *	VM_ALLOC_IFNOTCACHED	return NULL, do not reactivate if the page
 *				is cached
 *	VM_ALLOC_NOBUSY		do not exclusive busy the page
 *	VM_ALLOC_NODUMP		do not include the page in a kernel core dump
 *	VM_ALLOC_NOOBJ		page is not associated with an object and
 *				should not be exclusive busy
 *	VM_ALLOC_SBUSY		shared busy the allocated page
 *	VM_ALLOC_WIRED		wire the allocated page
 *	VM_ALLOC_ZERO		prefer a zeroed page
 *
 *	This routine may not sleep.
 */
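A minimal sketch of the usual calling pattern, assuming the caller holds the object's write lock and may sleep; the helper name is hypothetical, and since VM_ALLOC_ZERO only expresses a preference the page is zeroed explicitly when needed:

static vm_page_t
vm_page_alloc_retry_sketch(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        VM_OBJECT_ASSERT_WLOCKED(object);
        while ((m = vm_page_alloc(object, pindex,
            VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) {
                /* Drop the object lock while waiting for free pages. */
                VM_OBJECT_WUNLOCK(object);
                VM_WAIT;
                VM_OBJECT_WLOCK(object);
        }
        if ((m->flags & PG_ZERO) == 0)
                pmap_zero_page(m);
        return (m);
}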
vm_page_t
vm_page_alloc(vm_object_t object, vm_pindex_t pindex, int req)
{
        struct vnode *vp = NULL;
        vm_object_t m_object;
        vm_page_t m, mpred;
        int flags, req_class;

        mpred = 0;	/* XXX: pacify gcc */
        KASSERT((object != NULL) == ((req & VM_ALLOC_NOOBJ) == 0) &&
            (object != NULL || (req & VM_ALLOC_SBUSY) == 0) &&
            ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) !=
            (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)),
            ("vm_page_alloc: inconsistent object(%p)/req(%x)", (void *)object,
            req));
        if (object != NULL)
                VM_OBJECT_ASSERT_WLOCKED(object);

        req_class = req & VM_ALLOC_CLASS_MASK;

        /*
         * The page daemon is allowed to dig deeper into the free page list.
         */
        if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT)
                req_class = VM_ALLOC_SYSTEM;

        if (object != NULL) {
                mpred = vm_radix_lookup_le(&object->rtree, pindex);
                KASSERT(mpred == NULL || mpred->pindex != pindex,
                    ("vm_page_alloc: pindex already allocated"));
        }

        /*
         * The page allocation request can come from consumers which already
         * hold the free page queue mutex, like vm_page_insert() in
         * vm_page_cache().
         */
        mtx_lock_flags(&vm_page_queue_free_mtx, MTX_RECURSE);
        if (vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_free_reserved ||
            (req_class == VM_ALLOC_SYSTEM &&
            vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_interrupt_free_min) ||
            (req_class == VM_ALLOC_INTERRUPT &&
            vm_cnt.v_free_count + vm_cnt.v_cache_count > 0)) {
                /*
                 * Allocate from the free queue if the number of free pages
                 * exceeds the minimum for the request class.
                 */
                if (object != NULL &&
                    (m = vm_page_cache_lookup(object, pindex)) != NULL) {
                        if ((req & VM_ALLOC_IFNOTCACHED) != 0) {
                                mtx_unlock(&vm_page_queue_free_mtx);
                                return (NULL);
                        }
                        if (vm_phys_unfree_page(m))
                                vm_phys_set_pool(VM_FREEPOOL_DEFAULT, m, 0);
#if VM_NRESERVLEVEL > 0
                        else if (!vm_reserv_reactivate_page(m))
#else
                        else
#endif
                                panic("vm_page_alloc: cache page %p is missing"
                                    " from the free queue", m);
                } else if ((req & VM_ALLOC_IFCACHED) != 0) {
                        mtx_unlock(&vm_page_queue_free_mtx);
                        return (NULL);
#if VM_NRESERVLEVEL > 0
In the past four years, we've added two new vm object types. Each time,
similar changes had to be made in various places throughout the machine-
independent virtual memory layer to support the new vm object type.
However, in most of these places, it's actually not the type of the vm
object that matters to us but instead certain attributes of its pages.
For example, OBJT_DEVICE, OBJT_MGTDEVICE, and OBJT_SG objects contain
fictitious pages. In other words, in most of these places, we were
testing the vm object's type to determine if it contained fictitious (or
unmanaged) pages.
To both simplify the code in these places and make the addition of future
vm object types easier, this change introduces two new vm object flags
that describe attributes of the vm object's pages, specifically, whether
they are fictitious or unmanaged.
Reviewed and tested by: kib
2012-12-09 00:32:38 +00:00
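The attribute test introduced above replaces per-type checks; a sketch of the two forms, with 'fictitious' as a hypothetical local:

        /* Before: enumerate every object type backed by fictitious pages. */
        fictitious = (object->type == OBJT_DEVICE ||
            object->type == OBJT_SG || object->type == OBJT_MGTDEVICE);
        /* After: test the page attribute directly. */
        fictitious = (object->flags & OBJ_FICTITIOUS) != 0;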
                } else if (object == NULL || (object->flags & (OBJ_COLORED |
                    OBJ_FICTITIOUS)) != OBJ_COLORED || (m =
                    vm_reserv_alloc_page(object, pindex, mpred)) == NULL) {
#else
                } else {
#endif
Enable the new physical memory allocator.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
Approved by: re
2007-06-16 04:57:06 +00:00
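As a concrete illustration of the buddy scheme described above, the buddy of a 2^order-page block differs from it in exactly one bit of the physical address; a sketch with hypothetical locals pa, order, and buddy_pa:

        buddy_pa = pa ^ ((vm_paddr_t)PAGE_SIZE << order);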
                        m = vm_phys_alloc_pages(object != NULL ?
                            VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT, 0);
#if VM_NRESERVLEVEL > 0
                        if (m == NULL && vm_reserv_reclaim_inactive()) {
                                m = vm_phys_alloc_pages(object != NULL ?
                                    VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT,
                                    0);
                        }
#endif
                }
        } else {
                /*
                 * Not allocatable, give up.
                 */
                mtx_unlock(&vm_page_queue_free_mtx);
                atomic_add_int(&vm_pageout_deficit,
                    max((u_int)req >> VM_ALLOC_COUNT_SHIFT, 1));
                pagedaemon_wakeup();
                return (NULL);
        }

        /*
         * At this point we had better have found a good page.
         */
        KASSERT(m != NULL, ("vm_page_alloc: missing page"));
        KASSERT(m->queue == PQ_NONE,
            ("vm_page_alloc: page %p has unexpected queue %d", m, m->queue));
        KASSERT(m->wire_count == 0, ("vm_page_alloc: page %p is wired", m));
        KASSERT(m->hold_count == 0, ("vm_page_alloc: page %p is held", m));
        KASSERT(!vm_page_sbusied(m),
            ("vm_page_alloc: page %p is busy", m));
        KASSERT(m->dirty == 0, ("vm_page_alloc: page %p is dirty", m));
        KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
            ("vm_page_alloc: page %p has unexpected memattr %d", m,
            pmap_page_get_memattr(m)));
        if ((m->flags & PG_CACHED) != 0) {
                KASSERT((m->flags & PG_ZERO) == 0,
                    ("vm_page_alloc: cached page %p is PG_ZERO", m));
                KASSERT(m->valid != 0,
                    ("vm_page_alloc: cached page %p is invalid", m));
                if (m->object == object && m->pindex == pindex)
                        vm_cnt.v_reactivated++;
                else
                        m->valid = 0;
                m_object = m->object;
                vm_page_cache_remove(m);
                if (m_object->type == OBJT_VNODE &&
                    vm_object_cache_is_empty(m_object))
2007-09-25 06:25:06 +00:00
|
|
|
vp = m_object->handle;
|
|
|
|
} else {
|
|
|
|
KASSERT(m->valid == 0,
|
|
|
|
("vm_page_alloc: free page %p is valid", m));
|
Split the pagequeues per NUMA domain, and split the pagedaemon process
into threads, each processing the queue of a single domain. The structure
of the pagedaemons and queues is kept intact; most of the changes come
from the need for code to find the owning page queue of a given page,
calculated from the segment containing the page.
The tie between NUMA domain and pagedaemon thread/pagequeue split is
rather arbitrary; the multithreaded daemon could be allowed on
single-domain machines, or one domain might be split into several page
domains to further increase concurrency.
Right now, each pagedaemon thread tries to reach the global target,
precalculated at the start of the pass. This is not optimal, since it
could cause excessive page deactivation and freeing. The code should
be changed to re-check the global page deficit state in the loop after
some number of iterations.
The pagedaemons reach a quorum before starting the OOM, since one
thread's inability to meet the target is normal for split queues. Only
when all pagedaemons fail to produce enough reusable pages is the OOM
started by a single selected thread.
Laundering is modified to take into account the segment layout with
regard to the region for which cleaning is performed.
Based on the preliminary patch by jeff, sponsored by EMC / Isilon
Storage Division.
Reviewed by: alc
Tested by: pho
Sponsored by: The FreeBSD Foundation
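As a minimal, hypothetical sketch of the lookup described in the message above (the sketch below is not the committed code): the owning page queue of a page is derived from the physical segment containing the page, and each segment records its NUMA domain. The names example_seg, example_pagequeue, and example_owning_queue, as well as the array sizes, are invented for illustration.

#define EXAMPLE_NSEGS           8
#define EXAMPLE_NDOMAINS        2

struct example_seg {
        unsigned long   start;          /* first physical address in segment */
        unsigned long   end;            /* last physical address plus one */
        int             domain;         /* owning NUMA domain */
};

struct example_pagequeue {
        int             cnt;            /* locks and lists elided */
};

static struct example_seg example_segs[EXAMPLE_NSEGS];
static struct example_pagequeue example_queues[EXAMPLE_NDOMAINS];

/*
 * Map a physical address to the page queue of the domain that owns the
 * segment containing it.
 */
static struct example_pagequeue *
example_owning_queue(unsigned long pa)
{
        int i;

        for (i = 0; i < EXAMPLE_NSEGS; i++)
                if (pa >= example_segs[i].start && pa < example_segs[i].end)
                        return (&example_queues[example_segs[i].domain]);
        return (NULL);          /* address not covered by any segment */
}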
2013-08-07 16:36:38 +00:00
|
|
|
vm_phys_freecnt_adj(m, -1);
|
2013-12-31 18:25:15 +00:00
|
|
|
if ((m->flags & PG_ZERO) != 0)
|
|
|
|
vm_page_zero_count--;
|
2007-09-25 06:25:06 +00:00
|
|
|
}
|
2013-12-31 18:25:15 +00:00
|
|
|
mtx_unlock(&vm_page_queue_free_mtx);
|
1999-01-21 08:29:12 +00:00
|
|
|
|
1999-02-15 06:52:14 +00:00
|
|
|
/*
|
2013-12-31 18:25:15 +00:00
|
|
|
* Initialize the page. Only the PG_ZERO flag is inherited.
|
1999-02-15 06:52:14 +00:00
|
|
|
*/
|
2006-10-22 04:28:14 +00:00
|
|
|
flags = 0;
|
2013-12-31 18:25:15 +00:00
|
|
|
if ((req & VM_ALLOC_ZERO) != 0)
|
|
|
|
flags = PG_ZERO;
|
|
|
|
flags &= m->flags;
|
|
|
|
if ((req & VM_ALLOC_NODUMP) != 0)
|
2012-11-21 06:26:18 +00:00
|
|
|
flags |= PG_NODUMP;
|
2003-01-12 23:32:46 +00:00
|
|
|
m->flags = flags;
|
2011-09-06 10:30:11 +00:00
|
|
|
m->aflags = 0;
|
In the past four years, we've added two new vm object types. Each time,
similar changes had to be made in various places throughout the machine-
independent virtual memory layer to support the new vm object type.
However, in most of these places, it's actually not the type of the vm
object that matters to us but instead certain attributes of its pages.
For example, OBJT_DEVICE, OBJT_MGTDEVICE, and OBJT_SG objects contain
fictitious pages. In other words, in most of these places, we were
testing the vm object's type to determine if it contained fictitious (or
unmanaged) pages.
To both simplify the code in these places and make the addition of future
vm object types easier, this change introduces two new vm object flags
that describe attributes of the vm object's pages, specifically, whether
they are fictitious or unmanaged.
Reviewed and tested by: kib
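A minimal sketch of the pattern this message describes, contrasting the old type test with the new flag test. It relies on the vm object definitions already included by this file; the two helper functions themselves are hypothetical and exist only for illustration.

/*
 * Before: the object's type was compared against every type known to
 * contain fictitious pages, so each new object type required another
 * edit at every such test.
 */
static int
example_has_fictitious_pages_old(vm_object_t object)
{

        return (object->type == OBJT_DEVICE || object->type == OBJT_SG ||
            object->type == OBJT_MGTDEVICE);
}

/*
 * After: the attribute is recorded as a flag on the object, so the test
 * no longer needs to enumerate object types.
 */
static int
example_has_fictitious_pages_new(vm_object_t object)
{

        return ((object->flags & OBJ_FICTITIOUS) != 0);
}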
2012-12-09 00:32:38 +00:00
|
|
|
m->oflags = object == NULL || (object->flags & OBJ_UNMANAGED) != 0 ?
|
|
|
|
VPO_UNMANAGED : 0;
|
2013-08-09 11:11:11 +00:00
|
|
|
m->busy_lock = VPB_UNBUSIED;
|
|
|
|
if ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_NOOBJ | VM_ALLOC_SBUSY)) == 0)
|
|
|
|
m->busy_lock = VPB_SINGLE_EXCLUSIVER;
|
|
|
|
if ((req & VM_ALLOC_SBUSY) != 0)
|
|
|
|
m->busy_lock = VPB_SHARERS_WORD(1);
|
2002-07-18 04:08:10 +00:00
|
|
|
if (req & VM_ALLOC_WIRED) {
|
2011-01-30 23:55:48 +00:00
|
|
|
/*
|
|
|
|
* The page lock is not required for wiring a page until that
|
|
|
|
* page is inserted into the object.
|
|
|
|
*/
|
2014-03-22 10:26:09 +00:00
|
|
|
atomic_add_int(&vm_cnt.v_wire_count, 1);
|
2002-07-18 04:08:10 +00:00
|
|
|
m->wire_count = 1;
|
2009-06-21 00:21:33 +00:00
|
|
|
}
|
2011-01-16 18:01:39 +00:00
|
|
|
m->act_count = 0;
|
1995-01-10 09:19:52 +00:00
|
|
|
|
2009-07-12 23:31:20 +00:00
|
|
|
if (object != NULL) {
|
2013-08-09 11:28:55 +00:00
|
|
|
if (vm_page_insert_after(m, object, pindex, mpred)) {
|
|
|
|
/* See the comment below about hold count. */
|
|
|
|
if (vp != NULL)
|
|
|
|
vdrop(vp);
|
|
|
|
pagedaemon_wakeup();
|
2013-08-15 11:01:25 +00:00
|
|
|
if (req & VM_ALLOC_WIRED) {
|
2014-03-22 10:26:09 +00:00
|
|
|
atomic_subtract_int(&vm_cnt.v_wire_count, 1);
|
2013-08-15 11:01:25 +00:00
|
|
|
m->wire_count = 0;
|
|
|
|
}
|
2013-08-09 11:28:55 +00:00
|
|
|
m->object = NULL;
|
|
|
|
vm_page_free(m);
|
|
|
|
return (NULL);
|
|
|
|
}
|
|
|
|
|
2009-07-18 01:50:05 +00:00
|
|
|
/* Ignore device objects; the pager sets "memattr" for them. */
|
|
|
|
if (object->memattr != VM_MEMATTR_DEFAULT &&
|
2012-12-09 00:32:38 +00:00
|
|
|
(object->flags & OBJ_FICTITIOUS) == 0)
|
2009-07-12 23:31:20 +00:00
|
|
|
pmap_page_set_memattr(m, object->memattr);
|
|
|
|
} else
|
2003-09-22 00:56:13 +00:00
|
|
|
m->pindex = pindex;
|
1995-01-10 09:19:52 +00:00
|
|
|
|
2007-09-25 06:25:06 +00:00
|
|
|
/*
|
|
|
|
* The following call to vdrop() must come after the above call
|
|
|
|
* to vm_page_insert() in case both affect the same object and
|
|
|
|
* vnode. Otherwise, the affected vnode's hold count could
|
|
|
|
* temporarily become zero.
|
|
|
|
*/
|
|
|
|
if (vp != NULL)
|
|
|
|
vdrop(vp);
|
|
|
|
|
1995-03-01 23:30:04 +00:00
|
|
|
/*
|
|
|
|
* Don't wake up too often - wake up the pageout daemon when
|
|
|
|
* we would be nearly out of memory.
|
|
|
|
*/
|
Implement a low-memory deadlock solution.
Removed most of the hacks that were trying to deal with low-memory
situations prior to now.
The new code is based on the concept that I/O must be able to function in
a low memory situation. All major modules related to I/O (except
networking) have been adjusted to allow allocation out of the system
reserve memory pool. These modules now detect a low-memory situation, but
rather than block, they continue to operate and then return resources
to the memory pool instead of caching them or leaving them wired.
Code has been added to stall in a low-memory situation prior to a vnode
being locked.
Thus situations where a process blocks in a low-memory condition while
holding a locked vnode have been reduced to near nothing. Not only will
I/O continue to operate, but many prior deadlock conditions simply no
longer exist.
Implement a number of VFS/BIO fixes
(found by Ian): in the biodone() bogus-page replacement code, the loop
was not properly incrementing loop variables prior to a continue
statement. We do not believe this code can be hit anyway but we
aren't taking any chances. We'll turn the whole section into a
panic (as it already is in brelse()) after the release is rolled.
In biodone(), the foff calculation was incorrectly
clamped to the iosize, causing the wrong foff to be calculated
for pages in the case of an I/O error or biodone() called without
initiating I/O. The problem always caused a panic before. Now it
doesn't. The problem is mainly an issue with NFS.
Fixed casts for ~PAGE_MASK. This code worked properly before only
because the calculations use signed arithmetic. It is better to properly
extend PAGE_MASK first before inverting it for the 64-bit masking
op (see the sketch after this message).
In brelse(), the bogus_page fixup code was improperly throwing
away the original contents of 'm' when it did the j-loop to
fix the bogus pages. The result was that it would potentially
invalidate parts of the *WRONG* page(!), leading to corruption.
There may still be cases where a background bitmap write is
being duplicated, causing potential corruption. We have identified
a potentially serious bug related to this but the fix is still TBD.
So instead this patch contains a KASSERT to detect the problem
and panic the machine rather than continue to corrupt the filesystem.
The problem does not occur very often.. it is very hard to
reproduce, and it may or may not be the cause of the corruption
people have reported.
Review by: (VFS/BIO: mckusick, Ian Dowse <iedowse@maths.tcd.ie>)
Testing by: (VM/Deadlock) Paul Saab <ps@yahoo-inc.com>
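The ~PAGE_MASK remark above is easiest to see in a small standalone program. The sketch below is a model under stated assumptions (4 KB pages, 64-bit file offsets, an unsigned 32-bit mask), not the vfs_bio.c code itself: with a narrow unsigned mask the inversion happens in 32 bits and the upper half of the offset is lost, whereas widening the mask before inverting it preserves the offset. The message's point about signed arithmetic is that the old code survived only because sign extension happened to reproduce the wide mask.

#include <stdint.h>
#include <stdio.h>

#define EX_PAGE_SIZE    4096u                   /* assumed page size */
#define EX_PAGE_MASK    (EX_PAGE_SIZE - 1)      /* 0xfff as a 32-bit value */

int
main(void)
{
        int64_t foff = 0x123456789abcULL;       /* a large file offset */

        /*
         * Wrong: ~EX_PAGE_MASK is computed in 32 bits and then widened,
         * so the upper 32 bits of the offset are cleared by the AND.
         */
        int64_t bad = foff & ~EX_PAGE_MASK;

        /*
         * Right: widen the mask to 64 bits first, then invert it, so the
         * high bits of the offset survive the masking operation.
         */
        int64_t good = foff & ~(int64_t)EX_PAGE_MASK;

        printf("bad  = 0x%jx\ngood = 0x%jx\n", (intmax_t)bad, (intmax_t)good);
        return (0);
}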
2000-11-18 23:06:26 +00:00
|
|
|
if (vm_paging_needed())
|
1995-03-01 23:30:04 +00:00
|
|
|
pagedaemon_wakeup();
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1996-01-19 04:00:31 +00:00
|
|
|
return (m);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
2013-08-10 17:36:42 +00:00
|
|
|
static void
|
|
|
|
vm_page_alloc_contig_vdrop(struct spglist *lst)
|
|
|
|
{
|
|
|
|
|
|
|
|
while (!SLIST_EMPTY(lst)) {
|
|
|
|
vdrop((struct vnode *)SLIST_FIRST(lst)->plinks.s.pv);
|
|
|
|
SLIST_REMOVE_HEAD(lst, plinks.s.ss);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Refactor the code that performs physically contiguous memory allocation,
yielding a new public interface, vm_page_alloc_contig(). This new function
addresses some of the limitations of the current interfaces, contigmalloc()
and kmem_alloc_contig(). For example, the physically contiguous memory that
is allocated with those interfaces can only be allocated to the kernel vm
object and must be mapped into the kernel virtual address space. It also
provides functionality that vm_phys_alloc_contig() doesn't, such as wiring
the returned pages. Moreover, unlike that function, it respects the low
water marks on the paging queues and wakes up the page daemon when
necessary. That said, at present, this new function can't be applied to all
types of vm objects. However, that restriction will be eliminated in the
coming weeks.
From a design standpoint, this change also addresses an inconsistency
between vm_phys_alloc_contig() and the other vm_phys_alloc*() functions.
Specifically, vm_phys_alloc_contig() manipulated vm_page fields that other
functions in vm/vm_phys.c didn't. Moreover, vm_phys_alloc_contig() knew
about vnodes and reservations. Now, vm_page_alloc_contig() is responsible
for these things.
Reviewed by: kib
Discussed with: jhb
2011-11-16 16:46:09 +00:00
|
|
|
/*
|
|
|
|
* vm_page_alloc_contig:
|
|
|
|
*
|
|
|
|
* Allocate a contiguous set of physical pages of the given size "npages"
|
|
|
|
* from the free lists. All of the physical pages must be at or above
|
|
|
|
* the given physical address "low" and below the given physical address
|
|
|
|
* "high". The given value "alignment" determines the alignment of the
|
|
|
|
* first physical page in the set. If the given value "boundary" is
|
|
|
|
* non-zero, then the set of physical pages cannot cross any physical
|
|
|
|
* address boundary that is a multiple of that value. Both "alignment"
|
|
|
|
* and "boundary" must be a power of two.
|
|
|
|
*
|
|
|
|
* If the specified memory attribute, "memattr", is VM_MEMATTR_DEFAULT,
|
|
|
|
* then the memory attribute setting for the physical pages is configured
|
|
|
|
* to the object's memory attribute setting. Otherwise, the memory
|
|
|
|
* attribute setting for the physical pages is configured to "memattr",
|
|
|
|
* overriding the object's memory attribute setting. However, if the
|
|
|
|
* object's memory attribute setting is not VM_MEMATTR_DEFAULT, then the
|
|
|
|
* memory attribute setting for the physical pages cannot be configured
|
|
|
|
* to VM_MEMATTR_DEFAULT.
|
|
|
|
*
|
|
|
|
* The caller must always specify an allocation class.
|
|
|
|
*
|
|
|
|
* allocation classes:
|
|
|
|
* VM_ALLOC_NORMAL normal process request
|
|
|
|
* VM_ALLOC_SYSTEM system *really* needs a page
|
|
|
|
* VM_ALLOC_INTERRUPT interrupt time request
|
|
|
|
*
|
|
|
|
* optional allocation flags:
|
2013-08-09 11:11:11 +00:00
|
|
|
* VM_ALLOC_NOBUSY do not exclusive busy the page
|
2011-11-16 16:46:09 +00:00
|
|
|
* VM_ALLOC_NOOBJ page is not associated with an object and
|
2013-08-09 11:11:11 +00:00
|
|
|
* should not be exclusive busy
|
|
|
|
* VM_ALLOC_SBUSY shared busy the allocated page
|
2011-11-16 16:46:09 +00:00
|
|
|
* VM_ALLOC_WIRED wire the allocated page
|
|
|
|
* VM_ALLOC_ZERO prefer a zeroed page
|
|
|
|
*
|
|
|
|
* This routine may not sleep.
|
|
|
|
*/
|
|
|
|
vm_page_t
|
|
|
|
vm_page_alloc_contig(vm_object_t object, vm_pindex_t pindex, int req,
|
|
|
|
u_long npages, vm_paddr_t low, vm_paddr_t high, u_long alignment,
|
|
|
|
vm_paddr_t boundary, vm_memattr_t memattr)
|
|
|
|
{
|
|
|
|
struct vnode *drop;
|
2013-08-10 17:36:42 +00:00
|
|
|
struct spglist deferred_vdrop_list;
|
|
|
|
vm_page_t m, m_tmp, m_ret;
|
2013-12-31 18:25:15 +00:00
|
|
|
u_int flags;
|
2011-11-16 16:46:09 +00:00
|
|
|
int req_class;
|
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
KASSERT((object != NULL) == ((req & VM_ALLOC_NOOBJ) == 0) &&
|
|
|
|
(object != NULL || (req & VM_ALLOC_SBUSY) == 0) &&
|
|
|
|
((req & (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) !=
|
|
|
|
(VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)),
|
|
|
|
("vm_page_alloc: inconsistent object(%p)/req(%x)", (void *)object,
|
|
|
|
req));
|
2011-11-16 16:46:09 +00:00
|
|
|
if (object != NULL) {
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2011-11-16 16:46:09 +00:00
|
|
|
KASSERT(object->type == OBJT_PHYS,
|
|
|
|
("vm_page_alloc_contig: object %p isn't OBJT_PHYS",
|
|
|
|
object));
|
|
|
|
}
|
|
|
|
KASSERT(npages > 0, ("vm_page_alloc_contig: npages is zero"));
|
|
|
|
req_class = req & VM_ALLOC_CLASS_MASK;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The page daemon is allowed to dig deeper into the free page list.
|
|
|
|
*/
|
|
|
|
if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT)
|
|
|
|
req_class = VM_ALLOC_SYSTEM;
|
|
|
|
|
2013-08-10 17:36:42 +00:00
|
|
|
SLIST_INIT(&deferred_vdrop_list);
|
2011-11-16 16:46:09 +00:00
|
|
|
mtx_lock(&vm_page_queue_free_mtx);
|
2014-03-22 10:26:09 +00:00
|
|
|
if (vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages +
|
|
|
|
vm_cnt.v_free_reserved || (req_class == VM_ALLOC_SYSTEM &&
|
|
|
|
vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages +
|
|
|
|
vm_cnt.v_interrupt_free_min) || (req_class == VM_ALLOC_INTERRUPT &&
|
|
|
|
vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages)) {
|
2011-11-16 16:46:09 +00:00
|
|
|
#if VM_NRESERVLEVEL > 0
|
|
|
|
retry:
|
2011-12-05 18:29:25 +00:00
|
|
|
if (object == NULL || (object->flags & OBJ_COLORED) == 0 ||
|
|
|
|
(m_ret = vm_reserv_alloc_contig(object, pindex, npages,
|
|
|
|
low, high, alignment, boundary)) == NULL)
|
2011-11-16 16:46:09 +00:00
|
|
|
#endif
|
2011-12-05 18:29:25 +00:00
|
|
|
m_ret = vm_phys_alloc_contig(npages, low, high,
|
|
|
|
alignment, boundary);
|
2011-11-16 16:46:09 +00:00
|
|
|
} else {
|
|
|
|
mtx_unlock(&vm_page_queue_free_mtx);
|
|
|
|
atomic_add_int(&vm_pageout_deficit, npages);
|
|
|
|
pagedaemon_wakeup();
|
|
|
|
return (NULL);
|
|
|
|
}
|
|
|
|
if (m_ret != NULL)
|
|
|
|
for (m = m_ret; m < &m_ret[npages]; m++) {
|
|
|
|
drop = vm_page_alloc_init(m);
|
|
|
|
if (drop != NULL) {
|
|
|
|
/*
|
|
|
|
* Enqueue the vnode for deferred vdrop().
|
|
|
|
*/
|
2013-08-10 17:36:42 +00:00
|
|
|
m->plinks.s.pv = drop;
|
|
|
|
SLIST_INSERT_HEAD(&deferred_vdrop_list, m,
|
|
|
|
plinks.s.ss);
|
2011-11-16 16:46:09 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
#if VM_NRESERVLEVEL > 0
|
2011-12-05 18:29:25 +00:00
|
|
|
if (vm_reserv_reclaim_contig(npages, low, high, alignment,
|
|
|
|
boundary))
|
2011-11-16 16:46:09 +00:00
|
|
|
goto retry;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
mtx_unlock(&vm_page_queue_free_mtx);
|
|
|
|
if (m_ret == NULL)
|
|
|
|
return (NULL);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize the pages. Only the PG_ZERO flag is inherited.
|
|
|
|
*/
|
|
|
|
flags = 0;
|
|
|
|
if ((req & VM_ALLOC_ZERO) != 0)
|
|
|
|
flags = PG_ZERO;
|
2012-01-27 20:18:31 +00:00
|
|
|
if ((req & VM_ALLOC_NODUMP) != 0)
|
|
|
|
flags |= PG_NODUMP;
|
2011-11-16 16:46:09 +00:00
|
|
|
if ((req & VM_ALLOC_WIRED) != 0)
|
2014-03-22 10:26:09 +00:00
|
|
|
atomic_add_int(&vm_cnt.v_wire_count, npages);
|
2011-11-16 16:46:09 +00:00
|
|
|
if (object != NULL) {
|
|
|
|
if (object->memattr != VM_MEMATTR_DEFAULT &&
|
|
|
|
memattr == VM_MEMATTR_DEFAULT)
|
|
|
|
memattr = object->memattr;
|
|
|
|
}
|
|
|
|
for (m = m_ret; m < &m_ret[npages]; m++) {
|
|
|
|
m->aflags = 0;
|
2012-07-17 02:36:59 +00:00
|
|
|
m->flags = (m->flags | PG_NODUMP) & flags;
|
2013-08-09 11:11:11 +00:00
|
|
|
m->busy_lock = VPB_UNBUSIED;
|
|
|
|
if (object != NULL) {
|
|
|
|
if ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) == 0)
|
|
|
|
m->busy_lock = VPB_SINGLE_EXCLUSIVER;
|
|
|
|
if ((req & VM_ALLOC_SBUSY) != 0)
|
|
|
|
m->busy_lock = VPB_SHARERS_WORD(1);
|
|
|
|
}
|
2011-11-16 16:46:09 +00:00
|
|
|
if ((req & VM_ALLOC_WIRED) != 0)
|
|
|
|
m->wire_count = 1;
|
|
|
|
/* Unmanaged pages don't use "act_count". */
|
2013-12-31 18:25:15 +00:00
|
|
|
m->oflags = VPO_UNMANAGED;
|
2013-08-09 11:28:55 +00:00
|
|
|
if (object != NULL) {
|
|
|
|
if (vm_page_insert(m, object, pindex)) {
|
2013-08-10 17:36:42 +00:00
|
|
|
vm_page_alloc_contig_vdrop(
|
|
|
|
&deferred_vdrop_list);
|
2013-08-09 11:28:55 +00:00
|
|
|
if (vm_paging_needed())
|
|
|
|
pagedaemon_wakeup();
|
2013-08-15 11:01:25 +00:00
|
|
|
if ((req & VM_ALLOC_WIRED) != 0)
|
2014-03-22 10:26:09 +00:00
|
|
|
atomic_subtract_int(&vm_cnt.v_wire_count,
|
2013-08-15 11:01:25 +00:00
|
|
|
npages);
|
2013-08-11 21:15:04 +00:00
|
|
|
for (m_tmp = m, m = m_ret;
|
2013-08-09 11:28:55 +00:00
|
|
|
m < &m_ret[npages]; m++) {
|
2013-08-15 11:01:25 +00:00
|
|
|
if ((req & VM_ALLOC_WIRED) != 0)
|
|
|
|
m->wire_count = 0;
|
2013-08-11 21:15:04 +00:00
|
|
|
if (m >= m_tmp)
|
2013-08-09 11:28:55 +00:00
|
|
|
m->object = NULL;
|
|
|
|
vm_page_free(m);
|
|
|
|
}
|
|
|
|
return (NULL);
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
m->pindex = pindex;
|
2011-11-16 16:46:09 +00:00
|
|
|
if (memattr != VM_MEMATTR_DEFAULT)
|
|
|
|
pmap_page_set_memattr(m, memattr);
|
|
|
|
pindex++;
|
|
|
|
}
|
2013-08-10 17:36:42 +00:00
|
|
|
vm_page_alloc_contig_vdrop(&deferred_vdrop_list);
|
2011-11-16 16:46:09 +00:00
|
|
|
if (vm_paging_needed())
|
|
|
|
pagedaemon_wakeup();
|
|
|
|
return (m_ret);
|
|
|
|
}
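A possible usage sketch for vm_page_alloc_contig() as documented above, not code from this file: allocate npages of wired, physically contiguous memory below 4 GB, 64 KB aligned and not crossing a 1 MB boundary, without inserting the pages into a VM object. The wrapper, its constants, and the assumption of a 64-bit vm_paddr_t are hypothetical; note that VM_ALLOC_ZERO only expresses a preference for pre-zeroed pages.

static vm_page_t
example_alloc_dma_pages(u_long npages)
{
        vm_page_t m;

        m = vm_page_alloc_contig(NULL, 0,
            VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO,
            npages, 0, (vm_paddr_t)1 << 32, 64 * 1024, 1024 * 1024,
            VM_MEMATTR_DEFAULT);
        if (m == NULL)
                return (NULL);          /* no suitable contiguous run found */
        return (m);
}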
|
|
|
|
|
Redo the page table page allocation on MIPS, as suggested by
alc@.
The UMA zone based allocation is replaced by a scheme that creates
a new free page list for the KSEG0 region, and a new function
in sys/vm that allocates pages from a specific free page list.
This also fixes a race condition introduced by the UMA-based page table
page allocation code. Dropping the page queue and pmap locks before
the call to uma_zfree, and re-acquiring them afterwards, will introduce
a race condition (noted by alc@).
The changes are:
- Revert the earlier changes in MIPS pmap.c that added UMA zone for
page table pages.
- Add a new freelist VM_FREELIST_HIGHMEM to MIPS vmparam.h for memory that
is not directly mapped (in a 32-bit kernel). Normal page allocations will first
try the HIGHMEM freelist and then the default (direct-mapped) freelist.
- Add a new function 'vm_page_t vm_page_alloc_freelist(int flind, int
order, int req)' to vm/vm_page.c to allocate a page from a specified
freelist. The MIPS page table pages will be allocated using this function
from the freelist containing direct mapped pages.
- Move the page initialization code from vm_phys_alloc_contig() to a
new function vm_page_alloc_init(), and use this function to initialize
pages in vm_page_alloc_freelist() too.
- Split the function vm_phys_alloc_pages(int pool, int order) to create
vm_phys_alloc_freelist_pages(int flind, int pool, int order), and use
this function from both vm_page_alloc_freelist() and vm_phys_alloc_pages().
Reviewed by: alc
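A hypothetical sketch of the interface quoted in this message ('vm_page_t vm_page_alloc_freelist(int flind, int order, int req)'), not the committed code: allocate a single (order 0) page for a page-table page from the direct-mapped freelist so that it can be reached through KSEG0 without a mapping. VM_FREELIST_DEFAULT and the request flags are assumptions, and the final in-tree signature may differ from the one quoted here.

static vm_page_t
example_alloc_ptpage(void)
{
        vm_page_t m;

        m = vm_page_alloc_freelist(VM_FREELIST_DEFAULT, 0,
            VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_ZERO);
        if (m == NULL)
                return (NULL);          /* direct-mapped memory is exhausted */
        return (m);
}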
2010-07-21 09:27:00 +00:00
|
|
|
/*
|
|
|
|
* Initialize a page that has been freshly dequeued from a freelist.
|
|
|
|
* The caller has to drop the vnode returned, if it is not NULL.
|
|
|
|
*
|
2011-11-02 05:42:51 +00:00
|
|
|
* This function may only be used to initialize unmanaged pages.
|
|
|
|
*
|
2010-07-21 09:27:00 +00:00
|
|
|
* To be called with vm_page_queue_free_mtx held.
|
|
|
|
*/
|
2011-11-16 16:46:09 +00:00
|
|
|
static struct vnode *
|
2010-07-21 09:27:00 +00:00
|
|
|
vm_page_alloc_init(vm_page_t m)
|
|
|
|
{
|
|
|
|
struct vnode *drop;
|
|
|
|
vm_object_t m_object;
|
|
|
|
|
|
|
|
KASSERT(m->queue == PQ_NONE,
|
|
|
|
("vm_page_alloc_init: page %p has unexpected queue %d",
|
|
|
|
m, m->queue));
|
|
|
|
KASSERT(m->wire_count == 0,
|
|
|
|
("vm_page_alloc_init: page %p is wired", m));
|
|
|
|
KASSERT(m->hold_count == 0,
|
|
|
|
("vm_page_alloc_init: page %p is held", m));
|
2013-08-09 11:11:11 +00:00
|
|
|
KASSERT(!vm_page_sbusied(m),
|
2010-07-21 09:27:00 +00:00
|
|
|
("vm_page_alloc_init: page %p is busy", m));
|
|
|
|
KASSERT(m->dirty == 0,
|
|
|
|
("vm_page_alloc_init: page %p is dirty", m));
|
|
|
|
KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
|
|
|
|
("vm_page_alloc_init: page %p has unexpected memattr %d",
|
|
|
|
m, pmap_page_get_memattr(m)));
|
|
|
|
mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
|
|
|
|
drop = NULL;
|
|
|
|
if ((m->flags & PG_CACHED) != 0) {
|
2011-11-02 05:42:51 +00:00
|
|
|
KASSERT((m->flags & PG_ZERO) == 0,
|
|
|
|
("vm_page_alloc_init: cached page %p is PG_ZERO", m));
|
Redo the page table page allocation on MIPS, as suggested by
alc@.
The UMA zone based allocation is replaced by a scheme that creates
a new free page list for the KSEG0 region, and a new function
in sys/vm that allocates pages from a specific free page list.
This also fixes a race condition introduced by the UMA based page table
page allocation code. Dropping the page queue and pmap locks before
the call to uma_zfree, and re-acquiring them afterwards will introduce
a race condtion(noted by alc@).
The changes are :
- Revert the earlier changes in MIPS pmap.c that added UMA zone for
page table pages.
- Add a new freelist VM_FREELIST_HIGHMEM to MIPS vmparam.h for memory that
is not directly mapped (in 32bit kernel). Normal page allocations will first
try the HIGHMEM freelist and then the default(direct mapped) freelist.
- Add a new function 'vm_page_t vm_page_alloc_freelist(int flind, int
order, int req)' to vm/vm_page.c to allocate a page from a specified
freelist. The MIPS page table pages will be allocated using this function
from the freelist containing direct mapped pages.
- Move the page initialization code from vm_phys_alloc_contig() to a
new function vm_page_alloc_init(), and use this function to initialize
pages in vm_page_alloc_freelist() too.
- Split the function vm_phys_alloc_pages(int pool, int order) to create
vm_phys_alloc_freelist_pages(int flind, int pool, int order), and use
this function from both vm_page_alloc_freelist() and vm_phys_alloc_pages().
Reviewed by: alc
2010-07-21 09:27:00 +00:00
|
|
|
m->valid = 0;
|
|
|
|
m_object = m->object;
|
|
|
|
vm_page_cache_remove(m);
|
2013-03-09 02:05:29 +00:00
|
|
|
if (m_object->type == OBJT_VNODE &&
|
|
|
|
vm_object_cache_is_empty(m_object))
|
Redo the page table page allocation on MIPS, as suggested by
alc@.
The UMA zone based allocation is replaced by a scheme that creates
a new free page list for the KSEG0 region, and a new function
in sys/vm that allocates pages from a specific free page list.
This also fixes a race condition introduced by the UMA based page table
page allocation code. Dropping the page queue and pmap locks before
the call to uma_zfree, and re-acquiring them afterwards will introduce
a race condtion(noted by alc@).
The changes are :
- Revert the earlier changes in MIPS pmap.c that added UMA zone for
page table pages.
- Add a new freelist VM_FREELIST_HIGHMEM to MIPS vmparam.h for memory that
is not directly mapped (in 32bit kernel). Normal page allocations will first
try the HIGHMEM freelist and then the default(direct mapped) freelist.
- Add a new function 'vm_page_t vm_page_alloc_freelist(int flind, int
order, int req)' to vm/vm_page.c to allocate a page from a specified
freelist. The MIPS page table pages will be allocated using this function
from the freelist containing direct mapped pages.
- Move the page initialization code from vm_phys_alloc_contig() to a
new function vm_page_alloc_init(), and use this function to initialize
pages in vm_page_alloc_freelist() too.
- Split the function vm_phys_alloc_pages(int pool, int order) to create
vm_phys_alloc_freelist_pages(int flind, int pool, int order), and use
this function from both vm_page_alloc_freelist() and vm_phys_alloc_pages().
Reviewed by: alc
2010-07-21 09:27:00 +00:00
|
|
|
drop = m_object->handle;
|
|
|
|
} else {
|
|
|
|
KASSERT(m->valid == 0,
|
|
|
|
("vm_page_alloc_init: free page %p is valid", m));
|
Split the pagequeues per NUMA domains, and split pageademon process
into threads each processing queue in a single domain. The structure
of the pagedaemons and queues is kept intact, most of the changes come
from the need for code to find an owning page queue for given page,
calculated from the segment containing the page.
The tie between NUMA domain and pagedaemon thread/pagequeue split is
rather arbitrary, the multithreaded daemon could be allowed for the
single-domain machines, or one domain might be split into several page
domains, to further increase concurrency.
Right now, each pagedaemon thread tries to reach the global target,
precalculated at the start of the pass. This is not optimal, since it
could cause excessive page deactivation and freeing. The code should
be changed to re-check the global page deficit state in the loop after
some number of iterations.
The pagedaemons reach the quorum before starting the OOM, since one
thread inability to meet the target is normal for split queues. Only
when all pagedaemons fail to produce enough reusable pages, OOM is
started by single selected thread.
Launder is modified to take into account the segments layout with
regard to the region for which cleaning is performed.
Based on the preliminary patch by jeff, sponsored by EMC / Isilon
Storage Division.
Reviewed by: alc
Tested by: pho
Sponsored by: The FreeBSD Foundation
2013-08-07 16:36:38 +00:00
|
|
|
vm_phys_freecnt_adj(m, -1);
|
2011-11-02 05:42:51 +00:00
|
|
|
if ((m->flags & PG_ZERO) != 0)
|
|
|
|
vm_page_zero_count--;
|
Redo the page table page allocation on MIPS, as suggested by
alc@.
The UMA zone based allocation is replaced by a scheme that creates
a new free page list for the KSEG0 region, and a new function
in sys/vm that allocates pages from a specific free page list.
This also fixes a race condition introduced by the UMA based page table
page allocation code. Dropping the page queue and pmap locks before
the call to uma_zfree, and re-acquiring them afterwards will introduce
a race condtion(noted by alc@).
The changes are :
- Revert the earlier changes in MIPS pmap.c that added UMA zone for
page table pages.
- Add a new freelist VM_FREELIST_HIGHMEM to MIPS vmparam.h for memory that
is not directly mapped (in 32bit kernel). Normal page allocations will first
try the HIGHMEM freelist and then the default(direct mapped) freelist.
- Add a new function 'vm_page_t vm_page_alloc_freelist(int flind, int
order, int req)' to vm/vm_page.c to allocate a page from a specified
freelist. The MIPS page table pages will be allocated using this function
from the freelist containing direct mapped pages.
- Move the page initialization code from vm_phys_alloc_contig() to a
new function vm_page_alloc_init(), and use this function to initialize
pages in vm_page_alloc_freelist() too.
- Split the function vm_phys_alloc_pages(int pool, int order) to create
vm_phys_alloc_freelist_pages(int flind, int pool, int order), and use
this function from both vm_page_alloc_freelist() and vm_phys_alloc_pages().
Reviewed by: alc
2010-07-21 09:27:00 +00:00
|
|
|
}
|
|
|
|
return (drop);
|
|
|
|
}
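
/*
 * Caller contract, illustrated (a sketch, not part of the original source):
 * vm_page_alloc_init() must be called with the free page queue lock held,
 * and any vnode it returns must be vdrop()ped only after that lock has been
 * released, as vm_page_alloc_freelist() below does:
 *
 *      mtx_lock(&vm_page_queue_free_mtx);
 *      drop = vm_page_alloc_init(m);
 *      mtx_unlock(&vm_page_queue_free_mtx);
 *      ...
 *      if (drop != NULL)
 *              vdrop(drop);
 */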

/*
 * vm_page_alloc_freelist:
 *
 * Allocate a physical page from the specified free page list.
 *
 * The caller must always specify an allocation class.
 *
 * allocation classes:
 *      VM_ALLOC_NORMAL         normal process request
 *      VM_ALLOC_SYSTEM         system *really* needs a page
 *      VM_ALLOC_INTERRUPT      interrupt time request
 *
 * optional allocation flags:
 *      VM_ALLOC_COUNT(number)  the number of additional pages that the caller
 *                              intends to allocate
 *      VM_ALLOC_WIRED          wire the allocated page
 *      VM_ALLOC_ZERO           prefer a zeroed page
 *
 * This routine may not sleep.
 */
vm_page_t
vm_page_alloc_freelist(int flind, int req)
{
        struct vnode *drop;
        vm_page_t m;
        u_int flags;
        int req_class;

        req_class = req & VM_ALLOC_CLASS_MASK;

        /*
         * The page daemon is allowed to dig deeper into the free page list.
         */
        if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT)
                req_class = VM_ALLOC_SYSTEM;

        /*
         * Do not allocate reserved pages unless the req has asked for it.
         */
        mtx_lock_flags(&vm_page_queue_free_mtx, MTX_RECURSE);
        if (vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_free_reserved ||
            (req_class == VM_ALLOC_SYSTEM &&
            vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_interrupt_free_min) ||
            (req_class == VM_ALLOC_INTERRUPT &&
            vm_cnt.v_free_count + vm_cnt.v_cache_count > 0))
                m = vm_phys_alloc_freelist_pages(flind, VM_FREEPOOL_DIRECT, 0);
        else {
                mtx_unlock(&vm_page_queue_free_mtx);
                atomic_add_int(&vm_pageout_deficit,
                    max((u_int)req >> VM_ALLOC_COUNT_SHIFT, 1));
                pagedaemon_wakeup();
                return (NULL);
        }
        if (m == NULL) {
                mtx_unlock(&vm_page_queue_free_mtx);
                return (NULL);
        }
        drop = vm_page_alloc_init(m);
        mtx_unlock(&vm_page_queue_free_mtx);

        /*
         * Initialize the page.  Only the PG_ZERO flag is inherited.
         */
        m->aflags = 0;
        flags = 0;
        if ((req & VM_ALLOC_ZERO) != 0)
                flags = PG_ZERO;
        m->flags &= flags;
        if ((req & VM_ALLOC_WIRED) != 0) {
                /*
                 * The page lock is not required for wiring a page that does
                 * not belong to an object.
                 */
                atomic_add_int(&vm_cnt.v_wire_count, 1);
                m->wire_count = 1;
        }
        /* Unmanaged pages don't use "act_count". */
        m->oflags = VPO_UNMANAGED;
        if (drop != NULL)
                vdrop(drop);
        if (vm_paging_needed())
                pagedaemon_wakeup();
        return (m);
}
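
/*
 * Usage sketch (illustrative only, not taken from this file): a pmap that
 * must take its page table pages from a direct-mapped free list might
 * allocate and zero one roughly as follows.  The free list index and the
 * retry policy are assumptions of the example, not requirements of the
 * interface:
 *
 *      while ((m = vm_page_alloc_freelist(VM_FREELIST_DEFAULT,
 *          VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL)
 *              VM_WAIT;
 *      if ((m->flags & PG_ZERO) == 0)
 *              pmap_zero_page(m);
 */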

/*
 * vm_wait:     (also see VM_WAIT macro)
 *
 * Sleep until free pages are available for allocation.
 * - Called in various places before memory allocations.
 */
void
vm_wait(void)
{

        mtx_lock(&vm_page_queue_free_mtx);
        if (curproc == pageproc) {
                vm_pageout_pages_needed = 1;
                msleep(&vm_pageout_pages_needed, &vm_page_queue_free_mtx,
                    PDROP | PSWP, "VMWait", 0);
        } else {
                if (!vm_pages_needed) {
                        vm_pages_needed = 1;
                        wakeup(&vm_pages_needed);
                }
                msleep(&vm_cnt.v_free_count, &vm_page_queue_free_mtx, PDROP | PVM,
                    "vmwait", 0);
        }
}
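
/*
 * Typical use (a sketch, not from this file): a caller that fails an
 * allocation drops its locks, sleeps in VM_WAIT, and retries:
 *
 *      while ((m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL)) == NULL) {
 *              VM_OBJECT_WUNLOCK(object);
 *              VM_WAIT;
 *              VM_OBJECT_WLOCK(object);
 *      }
 */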

/*
 * vm_waitpfault:       (also see VM_WAITPFAULT macro)
 *
 * Sleep until free pages are available for allocation.
 * - Called only in vm_fault so that processes page faulting
 *   can be easily tracked.
 * - Sleeps at a lower priority than vm_wait() so that vm_wait()ing
 *   processes will be able to grab memory first.  Do not change
 *   this balance without careful testing first.
 */
void
vm_waitpfault(void)
{

        mtx_lock(&vm_page_queue_free_mtx);
        if (!vm_pages_needed) {
                vm_pages_needed = 1;
                wakeup(&vm_pages_needed);
        }
        msleep(&vm_cnt.v_free_count, &vm_page_queue_free_mtx, PDROP | PUSER,
            "pfault", 0);
}
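
/*
 * Sketch of the intended caller (vm_fault(), per the comment above): the
 * fault handler drops its object and map locks before sleeping here and
 * then retries the fault from the top.  The helper and label below are
 * hypothetical placeholders, not names from this file:
 *
 *      release_fault_state(&fs);       // hypothetical cleanup helper
 *      VM_WAITPFAULT;
 *      goto retry;                     // hypothetical retry label
 */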

struct vm_pagequeue *
vm_page_pagequeue(vm_page_t m)
{

        return (&vm_phys_domain(m)->vmd_pagequeues[m->queue]);
}

/*
 * vm_page_dequeue:
 *
 * Remove the given page from its current page queue.
 *
 * The page must be locked.
 */
void
vm_page_dequeue(vm_page_t m)
{
        struct vm_pagequeue *pq;

        vm_page_assert_locked(m);
        KASSERT(m->queue < PQ_COUNT, ("vm_page_dequeue: page %p is not queued",
            m));
        pq = vm_page_pagequeue(m);
        vm_pagequeue_lock(pq);
        m->queue = PQ_NONE;
        TAILQ_REMOVE(&pq->pq_pl, m, plinks.q);
        vm_pagequeue_cnt_dec(pq);
        vm_pagequeue_unlock(pq);
}

/*
 * vm_page_dequeue_locked:
 *
 * Remove the given page from its current page queue.
 *
 * The page and page queue must be locked.
 */
void
vm_page_dequeue_locked(vm_page_t m)
{
        struct vm_pagequeue *pq;

        vm_page_lock_assert(m, MA_OWNED);
        pq = vm_page_pagequeue(m);
        vm_pagequeue_assert_locked(pq);
        m->queue = PQ_NONE;
        TAILQ_REMOVE(&pq->pq_pl, m, plinks.q);
        vm_pagequeue_cnt_dec(pq);
}
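
/*
 * Illustrative contrast (not from this file): vm_page_dequeue() takes the
 * page queue lock itself, while vm_page_dequeue_locked() serves callers
 * that are already iterating a locked queue, for example:
 *
 *      vm_pagequeue_lock(pq);
 *      TAILQ_FOREACH_SAFE(m, &pq->pq_pl, plinks.q, next) {
 *              if (vm_page_trylock(m) == 0)
 *                      continue;
 *              vm_page_dequeue_locked(m);
 *              vm_page_unlock(m);
 *      }
 *      vm_pagequeue_unlock(pq);
 */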

/*
 * vm_page_enqueue:
 *
 * Add the given page to the specified page queue.
 *
 * The page must be locked.
 */
static void
vm_page_enqueue(uint8_t queue, vm_page_t m)
{
        struct vm_pagequeue *pq;

        vm_page_lock_assert(m, MA_OWNED);
        KASSERT(queue < PQ_COUNT,
            ("vm_page_enqueue: invalid queue %u request for page %p",
            queue, m));
        pq = &vm_phys_domain(m)->vmd_pagequeues[queue];
        vm_pagequeue_lock(pq);
        m->queue = queue;
        TAILQ_INSERT_TAIL(&pq->pq_pl, m, plinks.q);
        vm_pagequeue_cnt_inc(pq);
        vm_pagequeue_unlock(pq);
}
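
/*
 * Moving a page between queues combines the two helpers under the page
 * lock, as vm_page_activate() does below:
 *
 *      if (m->queue != PQ_NONE)
 *              vm_page_dequeue(m);
 *      vm_page_enqueue(PQ_ACTIVE, m);
 */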

/*
 * vm_page_requeue:
 *
 * Move the given page to the tail of its current page queue.
 *
 * The page must be locked.
 */
void
vm_page_requeue(vm_page_t m)
{
        struct vm_pagequeue *pq;

        vm_page_lock_assert(m, MA_OWNED);
        KASSERT(m->queue != PQ_NONE,
            ("vm_page_requeue: page %p is not queued", m));
        pq = vm_page_pagequeue(m);
        vm_pagequeue_lock(pq);
        TAILQ_REMOVE(&pq->pq_pl, m, plinks.q);
        TAILQ_INSERT_TAIL(&pq->pq_pl, m, plinks.q);
        vm_pagequeue_unlock(pq);
}

/*
 * vm_page_requeue_locked:
 *
 * Move the given page to the tail of its current page queue.
 *
 * The page queue must be locked.
 */
void
vm_page_requeue_locked(vm_page_t m)
{
        struct vm_pagequeue *pq;

        KASSERT(m->queue != PQ_NONE,
            ("vm_page_requeue_locked: page %p is not queued", m));
        pq = vm_page_pagequeue(m);
        vm_pagequeue_assert_locked(pq);
        TAILQ_REMOVE(&pq->pq_pl, m, plinks.q);
        TAILQ_INSERT_TAIL(&pq->pq_pl, m, plinks.q);
}

/*
 * vm_page_activate:
 *
 * Put the specified page on the active list (if appropriate).
 * Ensure that act_count is at least ACT_INIT but do not otherwise
 * mess with it.
 *
 * The page must be locked.
 */
void
vm_page_activate(vm_page_t m)
{
        int queue;

        vm_page_lock_assert(m, MA_OWNED);
        if ((queue = m->queue) != PQ_ACTIVE) {
                if (m->wire_count == 0 && (m->oflags & VPO_UNMANAGED) == 0) {
                        if (m->act_count < ACT_INIT)
                                m->act_count = ACT_INIT;
                        if (queue != PQ_NONE)
                                vm_page_dequeue(m);
                        vm_page_enqueue(PQ_ACTIVE, m);
                } else
                        KASSERT(queue == PQ_NONE,
                            ("vm_page_activate: wired page %p is queued", m));
        } else {
                if (m->act_count < ACT_INIT)
                        m->act_count = ACT_INIT;
        }
}
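
/*
 * Usage sketch (not part of this file): a caller activates a page it has
 * just referenced while holding the page lock:
 *
 *      vm_page_lock(m);
 *      vm_page_activate(m);
 *      vm_page_unlock(m);
 */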

/*
 * vm_page_free_wakeup:
 *
 * Helper routine for vm_page_free_toq() and vm_page_cache().  This
 * routine is called when a page has been added to the cache or free
 * queues.
 *
 * The page queues must be locked.
 */
static inline void
vm_page_free_wakeup(void)
{

        mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
        /*
         * If the pageout daemon needs pages, then tell it that there are
         * some free.
         */
        if (vm_pageout_pages_needed &&
            vm_cnt.v_cache_count + vm_cnt.v_free_count >= vm_cnt.v_pageout_free_min) {
                wakeup(&vm_pageout_pages_needed);
                vm_pageout_pages_needed = 0;
        }
        /*
         * Wake up processes that are waiting on memory if we hit a
         * high water mark, and wake up the scheduler process if we have
         * lots of memory; it will swap in processes.
         */
        if (vm_pages_needed && !vm_page_count_min()) {
                vm_pages_needed = 0;
                wakeup(&vm_cnt.v_free_count);
        }
}

/*
 * Turn a cached page into a free page, by changing its attributes.
 * Keep the statistics up-to-date.
 *
 * The free page queue must be locked.
 */
static void
vm_page_cache_turn_free(vm_page_t m)
{

        mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);

        m->object = NULL;
        m->valid = 0;
        KASSERT((m->flags & PG_CACHED) != 0,
            ("vm_page_cache_turn_free: page %p is not cached", m));
        m->flags &= ~PG_CACHED;
        vm_cnt.v_cache_count--;
        vm_phys_freecnt_adj(m, 1);
}

/*
 * vm_page_free_toq:
 *
 * Returns the given page to the free list,
 * disassociating it from any VM object.
 *
 * The object must be locked.  The page must be locked if it is managed.
 */
void
vm_page_free_toq(vm_page_t m)
{

        if ((m->oflags & VPO_UNMANAGED) == 0) {
                vm_page_lock_assert(m, MA_OWNED);
                KASSERT(!pmap_page_is_mapped(m),
                    ("vm_page_free_toq: freeing mapped page %p", m));
        } else
                KASSERT(m->queue == PQ_NONE,
                    ("vm_page_free_toq: unmanaged page %p is queued", m));
        PCPU_INC(cnt.v_tfree);

        if (vm_page_sbusied(m))
                panic("vm_page_free: freeing busy page %p", m);

        /*
         * Unqueue, then remove page.  Note that we cannot destroy
         * the page here because we do not want to call the pager's
         * callback routine until after we've put the page on the
         * appropriate free queue.
         */
        vm_page_remque(m);
        vm_page_remove(m);

        /*
         * If fictitious, remove the object association and return;
         * otherwise delay the object association removal.
         */
        if ((m->flags & PG_FICTITIOUS) != 0) {
                return;
        }

        m->valid = 0;
        vm_page_undirty(m);

        if (m->wire_count != 0)
                panic("vm_page_free: freeing wired page %p", m);
        if (m->hold_count != 0) {
                m->flags &= ~PG_ZERO;
                KASSERT((m->flags & PG_UNHOLDFREE) == 0,
                    ("vm_page_free: freeing PG_UNHOLDFREE page %p", m));
                m->flags |= PG_UNHOLDFREE;
        } else {
                /*
                 * Restore the default memory attribute to the page.
                 */
                if (pmap_page_get_memattr(m) != VM_MEMATTR_DEFAULT)
                        pmap_page_set_memattr(m, VM_MEMATTR_DEFAULT);

                /*
                 * Insert the page into the physical memory allocator's
                 * cache/free page queues.
                 */
|
Enable the new physical memory allocator.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
Approved by: re
2007-06-16 04:57:06 +00:00
|
|
|
mtx_lock(&vm_page_queue_free_mtx);
|
Split the pagequeues per NUMA domains, and split pageademon process
into threads each processing queue in a single domain. The structure
of the pagedaemons and queues is kept intact, most of the changes come
from the need for code to find an owning page queue for given page,
calculated from the segment containing the page.
The tie between NUMA domain and pagedaemon thread/pagequeue split is
rather arbitrary, the multithreaded daemon could be allowed for the
single-domain machines, or one domain might be split into several page
domains, to further increase concurrency.
Right now, each pagedaemon thread tries to reach the global target,
precalculated at the start of the pass. This is not optimal, since it
could cause excessive page deactivation and freeing. The code should
be changed to re-check the global page deficit state in the loop after
some number of iterations.
The pagedaemons reach the quorum before starting the OOM, since one
thread inability to meet the target is normal for split queues. Only
when all pagedaemons fail to produce enough reusable pages, OOM is
started by single selected thread.
Launder is modified to take into account the segments layout with
regard to the region for which cleaning is performed.
Based on the preliminary patch by jeff, sponsored by EMC / Isilon
Storage Division.
Reviewed by: alc
Tested by: pho
Sponsored by: The FreeBSD Foundation
2013-08-07 16:36:38 +00:00
|
|
|
vm_phys_freecnt_adj(m, 1);
|
2007-12-29 19:53:04 +00:00
|
|
|
#if VM_NRESERVLEVEL > 0
|
|
|
|
if (!vm_reserv_free_page(m))
|
|
|
|
#else
|
|
|
|
if (TRUE)
|
|
|
|
#endif
|
|
|
|
vm_phys_free_pages(m, 0);
|
2007-12-11 21:20:34 +00:00
|
|
|
if ((m->flags & PG_ZERO) != 0)
|
Enable the new physical memory allocator.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
Approved by: re
2007-06-16 04:57:06 +00:00
|
|
|
++vm_page_zero_count;
|
2007-12-11 21:20:34 +00:00
|
|
|
else
|
Enable the new physical memory allocator.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
Approved by: re
2007-06-16 04:57:06 +00:00
|
|
|
vm_page_zero_idle_wakeup();
|
|
|
|
vm_page_free_wakeup();
|
|
|
|
mtx_unlock(&vm_page_queue_free_mtx);
|
1996-06-05 03:31:49 +00:00
|
|
|
}
|
|
|
|
}
|
1994-05-25 09:21:21 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
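/*
 * Illustrative sketch, not from the original source: callers normally reach
 * vm_page_free_toq() through the vm_page_free() wrapper rather than calling
 * it directly.  Assuming the page lock and object lock interfaces used
 * elsewhere in this file, and with "object" as an assumed name for the
 * page's VM object, a typical managed-page free looks roughly like the
 * disabled fragment below.
 */
#if 0
	VM_OBJECT_WLOCK(object);
	vm_page_lock(m);
	vm_page_free(m);		/* thin wrapper around vm_page_free_toq() */
	vm_page_unlock(m);
	VM_OBJECT_WUNLOCK(object);
#endif
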
/*
 *	vm_page_wire:
 *
 *	Mark this page as wired down by yet
 *	another map, removing it from paging queues
 *	as necessary.
 *
 *	If the page is fictitious, then its wire count must remain one.
 *
 *	The page must be locked.
 */
void
vm_page_wire(vm_page_t m)
{

	/*
	 * Only bump the wire statistics if the page is not already wired,
	 * and only unqueue the page if it is on some queue (if it is unmanaged
	 * it is already off the queues).
	 */
	vm_page_lock_assert(m, MA_OWNED);
	if ((m->flags & PG_FICTITIOUS) != 0) {
		KASSERT(m->wire_count == 1,
		    ("vm_page_wire: fictitious page %p's wire count isn't one",
		    m));
		return;
	}
	if (m->wire_count == 0) {
		KASSERT((m->oflags & VPO_UNMANAGED) == 0 ||
		    m->queue == PQ_NONE,
		    ("vm_page_wire: unmanaged page %p is queued", m));
		vm_page_remque(m);
		atomic_add_int(&vm_cnt.v_wire_count, 1);
	}
	m->wire_count++;
	KASSERT(m->wire_count != 0, ("vm_page_wire: wire_count overflow m=%p", m));
}

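/*
 * Illustrative sketch, not from the original source: wiring is done with
 * the page lock held, matching the assertion above.  A caller that pins a
 * page it looked up in a write-locked object might look roughly like the
 * disabled fragment below ("object" and "pindex" are assumed names, not
 * taken from this file).
 */
#if 0
	VM_OBJECT_WLOCK(object);
	m = vm_page_lookup(object, pindex);
	if (m != NULL) {
		vm_page_lock(m);
		vm_page_wire(m);	/* page is now exempt from paging */
		vm_page_unlock(m);
	}
	VM_OBJECT_WUNLOCK(object);
#endif
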
/*
 * vm_page_unwire:
 *
 * Release one wiring of the specified page, potentially enabling it to be
 * paged again.  If paging is enabled, then the value of the parameter
 * "queue" determines the queue to which the page is added.
 *
 * However, unless the page belongs to an object, it is not enqueued because
 * it cannot be paged out.
 *
 * If a page is fictitious, then its wire count must always be one.
 *
 * A managed page must be locked.
 */
void
vm_page_unwire(vm_page_t m, uint8_t queue)
{

	KASSERT(queue < PQ_COUNT,
	    ("vm_page_unwire: invalid queue %u request for page %p",
	    queue, m));
	if ((m->oflags & VPO_UNMANAGED) == 0)
		vm_page_lock_assert(m, MA_OWNED);
	if ((m->flags & PG_FICTITIOUS) != 0) {
		KASSERT(m->wire_count == 1,
		    ("vm_page_unwire: fictitious page %p's wire count isn't one", m));
		return;
	}
	if (m->wire_count > 0) {
		m->wire_count--;
		if (m->wire_count == 0) {
			atomic_subtract_int(&vm_cnt.v_wire_count, 1);
			if ((m->oflags & VPO_UNMANAGED) != 0 ||
			    m->object == NULL)
				return;
			if (queue == PQ_INACTIVE)
				m->flags &= ~PG_WINATCFLS;
			vm_page_enqueue(queue, m);
		}
	} else
		panic("vm_page_unwire: page %p's wire count is zero", m);
}

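/*
 * Illustrative sketch, not from the original source: the usual counterpart
 * to the wiring example above.  Dropping the last wiring with queue ==
 * PQ_INACTIVE makes a managed page visible to the page daemon again.
 */
#if 0
	vm_page_lock(m);
	vm_page_unwire(m, PQ_INACTIVE);	/* requeue if this was the last wiring */
	vm_page_unlock(m);
#endif
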
/*
 * Move the specified page to the inactive queue.
 *
 * Many pages placed on the inactive queue should actually go
 * into the cache, but it is difficult to figure out which.  What
 * we do instead, if the inactive target is well met, is to put
 * clean pages at the head of the inactive queue instead of the tail.
 * This will cause them to be moved to the cache more quickly and
 * if not actively re-referenced, reclaimed more quickly.  If we just
 * stick these pages at the end of the inactive queue, heavy filesystem
 * meta-data accesses can cause an unnecessary paging load on memory-bound
 * processes.  This optimization causes one-time-use metadata to be
 * reused more quickly.
 *
 * Normally athead is 0 resulting in LRU operation.  athead is set
 * to 1 if we want this page to be 'as if it were placed in the cache',
 * except without unmapping it from the process address space.
 *
 * The page must be locked.
 */
static inline void
_vm_page_deactivate(vm_page_t m, int athead)
{
	struct vm_pagequeue *pq;
	int queue;

	vm_page_lock_assert(m, MA_OWNED);

	/*
	 * Ignore if already inactive.
	 */
	if ((queue = m->queue) == PQ_INACTIVE)
		return;
	if (m->wire_count == 0 && (m->oflags & VPO_UNMANAGED) == 0) {
		if (queue != PQ_NONE)
			vm_page_dequeue(m);
		m->flags &= ~PG_WINATCFLS;
		pq = &vm_phys_domain(m)->vmd_pagequeues[PQ_INACTIVE];
		vm_pagequeue_lock(pq);
		m->queue = PQ_INACTIVE;
		if (athead)
			TAILQ_INSERT_HEAD(&pq->pq_pl, m, plinks.q);
		else
			TAILQ_INSERT_TAIL(&pq->pq_pl, m, plinks.q);
		vm_pagequeue_cnt_inc(pq);
		vm_pagequeue_unlock(pq);
	}
}

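/*
 * Illustrative sketch, not from the original source: the athead argument
 * selects between ordinary LRU insertion and "as if cached" insertion at
 * the head of the inactive queue.  A hypothetical in-file caller could use
 * it as in the disabled fragment below; "reuse_unlikely" is an assumed
 * flag, not taken from this file.
 */
#if 0
	vm_page_lock(m);
	if (reuse_unlikely)
		_vm_page_deactivate(m, 1);	/* head of queue: reclaimed sooner */
	else
		_vm_page_deactivate(m, 0);	/* tail of queue: ordinary LRU */
	vm_page_unlock(m);
#endif
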
/*
 * Move the specified page to the inactive queue.
 *
 * The page must be locked.
 */
void
vm_page_deactivate(vm_page_t m)
{

	_vm_page_deactivate(m, 0);
}

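/*
 * Illustrative sketch, not from the original source: moving a page to the
 * inactive queue only requires the page lock, per the comments above.
 */
#if 0
	vm_page_lock(m);
	vm_page_deactivate(m);	/* LRU insertion; athead == 0 */
	vm_page_unlock(m);
#endif
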
/*
|
|
|
|
* vm_page_try_to_cache:
|
|
|
|
*
|
|
|
|
* Returns 0 on failure, 1 on success
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
vm_page_try_to_cache(vm_page_t m)
|
|
|
|
{
|
2001-05-19 01:28:09 +00:00
|
|
|
|
2010-05-02 23:33:10 +00:00
|
|
|
vm_page_lock_assert(m, MA_OWNED);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2013-08-09 11:11:11 +00:00
|
|
|
if (m->dirty || m->hold_count || m->wire_count ||
|
|
|
|
(m->oflags & VPO_UNMANAGED) != 0 || vm_page_busied(m))
|
2002-03-10 21:52:48 +00:00
|
|
|
return (0);
|
2004-02-14 08:54:37 +00:00
|
|
|
pmap_remove_all(m);
|
Implement a low-memory deadlock solution.
Removed most of the hacks that were trying to deal with low-memory
situations prior to now.
The new code is based on the concept that I/O must be able to function in
a low memory situation. All major modules related to I/O (except
networking) have been adjusted to allow allocation out of the system
reserve memory pool. These modules now detect a low memory situation but
rather then block they instead continue to operate, then return resources
to the memory pool instead of cache them or leave them wired.
Code has been added to stall in a low-memory situation prior to a vnode
being locked.
Thus situations where a process blocks in a low-memory condition while
holding a locked vnode have been reduced to near nothing. Not only will
I/O continue to operate, but many prior deadlock conditions simply no
longer exist.
Implement a number of VFS/BIO fixes
(found by Ian): in biodone(), bogus-page replacement code, the loop
was not properly incrementing loop variables prior to a continue
statement. We do not believe this code can be hit anyway but we
aren't taking any chances. We'll turn the whole section into a
panic (as it already is in brelse()) after the release is rolled.
In biodone(), the foff calculation was incorrectly
clamped to the iosize, causing the wrong foff to be calculated
for pages in the case of an I/O error or biodone() called without
initiating I/O. The problem always caused a panic before. Now it
doesn't. The problem is mainly an issue with NFS.
Fixed casts for ~PAGE_MASK. This code worked properly before only
because the calculations use signed arithmatic. Better to properly
extend PAGE_MASK first before inverting it for the 64 bit masking
op.
In brelse(), the bogus_page fixup code was improperly throwing
away the original contents of 'm' when it did the j-loop to
fix the bogus pages. The result was that it would potentially
invalidate parts of the *WRONG* page(!), leading to corruption.
There may still be cases where a background bitmap write is
being duplicated, causing potential corruption. We have identified
a potentially serious bug related to this but the fix is still TBD.
So instead this patch contains a KASSERT to detect the problem
and panic the machine rather then continue to corrupt the filesystem.
The problem does not occur very often.. it is very hard to
reproduce, and it may or may not be the cause of the corruption
people have reported.
Review by: (VFS/BIO: mckusick, Ian Dowse <iedowse@maths.tcd.ie>)
Testing by: (VM/Deadlock) Paul Saab <ps@yahoo-inc.com>
2000-11-18 23:06:26 +00:00
|
|
|
if (m->dirty)
|
2002-03-10 21:52:48 +00:00
|
|
|
return (0);
|
Implement a low-memory deadlock solution.
Removed most of the hacks that were trying to deal with low-memory
situations prior to now.
The new code is based on the concept that I/O must be able to function in
a low memory situation. All major modules related to I/O (except
networking) have been adjusted to allow allocation out of the system
reserve memory pool. These modules now detect a low memory situation but
rather then block they instead continue to operate, then return resources
to the memory pool instead of cache them or leave them wired.
Code has been added to stall in a low-memory situation prior to a vnode
being locked.
Thus situations where a process blocks in a low-memory condition while
holding a locked vnode have been reduced to near nothing. Not only will
I/O continue to operate, but many prior deadlock conditions simply no
longer exist.
Implement a number of VFS/BIO fixes
(found by Ian): in biodone(), bogus-page replacement code, the loop
was not properly incrementing loop variables prior to a continue
statement. We do not believe this code can be hit anyway but we
aren't taking any chances. We'll turn the whole section into a
panic (as it already is in brelse()) after the release is rolled.
In biodone(), the foff calculation was incorrectly
clamped to the iosize, causing the wrong foff to be calculated
for pages in the case of an I/O error or biodone() called without
initiating I/O. The problem always caused a panic before. Now it
doesn't. The problem is mainly an issue with NFS.
Fixed casts for ~PAGE_MASK. This code worked properly before only
because the calculations use signed arithmatic. Better to properly
extend PAGE_MASK first before inverting it for the 64 bit masking
op.
In brelse(), the bogus_page fixup code was improperly throwing
away the original contents of 'm' when it did the j-loop to
fix the bogus pages. The result was that it would potentially
invalidate parts of the *WRONG* page(!), leading to corruption.
There may still be cases where a background bitmap write is
being duplicated, causing potential corruption. We have identified
a potentially serious bug related to this but the fix is still TBD.
So instead this patch contains a KASSERT to detect the problem
and panic the machine rather then continue to corrupt the filesystem.
The problem does not occur very often.. it is very hard to
reproduce, and it may or may not be the cause of the corruption
people have reported.
Review by: (VFS/BIO: mckusick, Ian Dowse <iedowse@maths.tcd.ie>)
Testing by: (VM/Deadlock) Paul Saab <ps@yahoo-inc.com>
2000-11-18 23:06:26 +00:00
|
|
|
vm_page_cache(m);
|
2002-03-10 21:52:48 +00:00
|
|
|
return (1);
|
Implement a low-memory deadlock solution.
Removed most of the hacks that were trying to deal with low-memory
situations prior to now.
The new code is based on the concept that I/O must be able to function in
a low memory situation. All major modules related to I/O (except
networking) have been adjusted to allow allocation out of the system
reserve memory pool. These modules now detect a low memory situation but
rather then block they instead continue to operate, then return resources
to the memory pool instead of cache them or leave them wired.
Code has been added to stall in a low-memory situation prior to a vnode
being locked.
Thus situations where a process blocks in a low-memory condition while
holding a locked vnode have been reduced to near nothing. Not only will
I/O continue to operate, but many prior deadlock conditions simply no
longer exist.
Implement a number of VFS/BIO fixes
(found by Ian): in biodone(), bogus-page replacement code, the loop
was not properly incrementing loop variables prior to a continue
statement (a small stand-alone example follows this commit message). We do
not believe this code can be hit anyway but we
aren't taking any chances. We'll turn the whole section into a
panic (as it already is in brelse()) after the release is rolled.
In biodone(), the foff calculation was incorrectly
clamped to the iosize, causing the wrong foff to be calculated
for pages in the case of an I/O error or biodone() called without
initiating I/O. The problem always caused a panic before. Now it
doesn't. The problem is mainly an issue with NFS.
Fixed casts for ~PAGE_MASK. This code worked properly before only
because the calculations use signed arithmetic. It is better to properly
extend PAGE_MASK first before inverting it for the 64-bit masking
op.
In brelse(), the bogus_page fixup code was improperly throwing
away the original contents of 'm' when it did the j-loop to
fix the bogus pages. The result was that it would potentially
invalidate parts of the *WRONG* page(!), leading to corruption.
There may still be cases where a background bitmap write is
being duplicated, causing potential corruption. We have identified
a potentially serious bug related to this but the fix is still TBD.
So instead this patch contains a KASSERT to detect the problem
and panic the machine rather than continue to corrupt the filesystem.
The problem does not occur very often; it is very hard to
reproduce, and it may or may not be the cause of the corruption
people have reported.
Review by: (VFS/BIO: mckusick, Ian Dowse <iedowse@maths.tcd.ie>)
Testing by: (VM/Deadlock) Paul Saab <ps@yahoo-inc.com>
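The loop-increment problem mentioned above is easiest to see in a
stand-alone example; the code below is illustrative only and uses -1 to
play the role of bogus_page. Keeping the per-iteration updates in the
for() header means a continue still advances both the index and the
offset:

#include <stdio.h>

int
main(void)
{
	int pages[] = { 1, -1, 2, -1, 3 };	/* -1 stands in for bogus_page */
	int i, off;

	for (i = 0, off = 0; i < 5; i++, off += 4096) {
		if (pages[i] == -1)
			continue;	/* i and off still advance via the header */
		printf("page %d at offset %d\n", pages[i], off);
	}
	return (0);
}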
2000-11-18 23:06:26 +00:00
|
|
|
}
|
|
|
|
|
2001-05-24 07:22:27 +00:00
|
|
|
/*
|
|
|
|
* vm_page_try_to_free()
|
|
|
|
*
|
|
|
|
* Attempt to free the page. If we cannot free it, we do nothing.
|
|
|
|
* 1 is returned on success, 0 on failure.
|
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_try_to_free(vm_page_t m)
|
2001-05-24 07:22:27 +00:00
|
|
|
{
|
2002-07-20 20:12:57 +00:00
|
|
|
|
2010-05-02 23:33:10 +00:00
|
|
|
vm_page_lock_assert(m, MA_OWNED);
|
2003-06-19 01:50:14 +00:00
|
|
|
if (m->object != NULL)
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2013-08-09 11:11:11 +00:00
|
|
|
if (m->dirty || m->hold_count || m->wire_count ||
|
|
|
|
(m->oflags & VPO_UNMANAGED) != 0 || vm_page_busied(m))
|
2002-03-10 21:52:48 +00:00
|
|
|
return (0);
|
2004-02-19 07:43:55 +00:00
|
|
|
pmap_remove_all(m);
|
2001-05-24 07:22:27 +00:00
|
|
|
if (m->dirty)
|
2002-03-10 21:52:48 +00:00
|
|
|
return (0);
|
2001-05-24 07:22:27 +00:00
|
|
|
vm_page_free(m);
|
2002-03-10 21:52:48 +00:00
|
|
|
return (1);
|
2001-05-24 07:22:27 +00:00
|
|
|
}
|
|
|
|
|
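A hypothetical caller sketch for the function above (the surrounding
locking idiom is assumed, and this is not code from this file): the page
lock must be held, the object write lock is required when the page
belongs to an object, and the return value is 1 only if the page was
actually freed. Races such as m->object changing are ignored here for
brevity.

static void
try_reclaim(vm_page_t m)
{
	vm_object_t object;

	vm_page_lock(m);
	object = m->object;
	if (object != NULL && !VM_OBJECT_TRYWLOCK(object)) {
		/* Avoid a lock-order reversal; just skip this page. */
		vm_page_unlock(m);
		return;
	}
	(void)vm_page_try_to_free(m);	/* 1 on success, 0 if busy/dirty/wired */
	if (object != NULL)
		VM_OBJECT_WUNLOCK(object);
	vm_page_unlock(m);
}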
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
/*
|
|
|
|
* vm_page_cache
|
|
|
|
*
|
1999-01-21 08:29:12 +00:00
|
|
|
* Put the specified page onto the page cache queue (if appropriate).
|
|
|
|
*
|
2012-10-03 05:06:45 +00:00
|
|
|
* The object and page must be locked.
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
*/
|
1995-05-30 08:16:23 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_cache(vm_page_t m)
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
{
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
vm_object_t object;
|
Sync back vmcontention branch into HEAD:
Replace the per-object resident and cached pages splay tree with a
path-compressed multi-digit radix trie.
Along with this, also switch the x86-specific handling of idle page
tables to use the radix trie.
This change is supposed to do the following:
- Allow read locking to be used for lookup operations on the
resident/cached page collections, since the per-vm_page_t splay iterators
are now removed.
- Increase the scalability of the operations on the page collections.
The radix trie relies on the consumers' locking to ensure the atomicity of
its operations. In order to avoid deadlocks, the bisection nodes are
pre-allocated in the UMA zone. This can be done safely because the
algorithm needs at most one new node per insert, which means the
maximum number of nodes required is the number of available physical
frames. However, a new bisection node is not always needed.
The radix trie implements path compression because UFS indirect blocks
can lead to several objects with a very sparse trie, increasing the number
of levels that usually have to be scanned. Path compression also helps
node prefetching by introducing the single-node-per-insert property.
This code is not generalized (yet) because making many of the sizes in
play configurable could cost performance.
However, the code may later be made more general and reused by other
consumers.
The only KPI change is the removal of the function vm_page_splay(), which
is now gone.
The only KBI change, instead, is the removal of the left/right iterators
from struct vm_page, which are now gone.
Further technical notes, broken into smaller pieces, can be retrieved from the
svn branch:
http://svn.freebsd.org/base/user/attilio/vmcontention/
Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
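As a sketch of the pre-allocation idea only (the zone name, node layout,
and helper functions below are made up and are not the vm_radix
implementation), one way to keep a trie insert from sleeping is to
reserve a fixed number of node items in the UMA zone up front and
allocate them with M_NOWAIT | M_USE_RESERVE:

static uma_zone_t trie_node_zone;

struct trie_node {			/* hypothetical node layout */
	struct trie_node *child[16];
};

static void
trie_zone_init(int max_nodes)
{
	trie_node_zone = uma_zcreate("example trie node",
	    sizeof(struct trie_node), NULL, NULL, NULL, NULL,
	    UMA_ALIGN_PTR, 0);
	uma_zone_reserve(trie_node_zone, max_nodes);
}

static struct trie_node *
trie_node_get(void)
{
	return (uma_zalloc(trie_node_zone, M_NOWAIT | M_USE_RESERVE));
}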
2013-03-18 00:25:02 +00:00
|
|
|
boolean_t cache_was_empty;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
|
2010-05-02 23:33:10 +00:00
|
|
|
vm_page_lock_assert(m, MA_OWNED);
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
object = m->object;
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2013-08-09 11:11:11 +00:00
|
|
|
if (vm_page_busied(m) || (m->oflags & VPO_UNMANAGED) ||
|
2010-05-08 20:34:01 +00:00
|
|
|
m->hold_count || m->wire_count)
|
2007-06-16 21:07:51 +00:00
|
|
|
panic("vm_page_cache: attempting to cache busy page");
|
2012-11-01 16:20:02 +00:00
|
|
|
KASSERT(!pmap_page_is_mapped(m),
|
|
|
|
("vm_page_cache: page %p is mapped", m));
|
|
|
|
KASSERT(m->dirty == 0, ("vm_page_cache: page %p is dirty", m));
|
2007-11-05 10:25:12 +00:00
|
|
|
if (m->valid == 0 || object->type == OBJT_DEFAULT ||
|
|
|
|
(object->type == OBJT_SWAP &&
|
|
|
|
!vm_pager_has_page(object, m->pindex, NULL, NULL))) {
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
/*
|
2014-08-29 21:20:36 +00:00
|
|
|
* Hypothesis: A cache-eligible page belonging to a
|
2007-11-05 10:25:12 +00:00
|
|
|
* default object or swap object but without a backing
|
|
|
|
* store must be zero filled.
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
*/
|
|
|
|
vm_page_free(m);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return;
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
}
|
|
|
|
KASSERT((m->flags & PG_CACHED) == 0,
|
|
|
|
("vm_page_cache: page %p is already cached", m));
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
/*
|
|
|
|
* Remove the page from the paging queues.
|
|
|
|
*/
|
2012-11-13 02:50:39 +00:00
|
|
|
vm_page_remque(m);
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Remove the page from the object's collection of resident
|
|
|
|
* pages.
|
|
|
|
*/
|
Sync back vmcontention branch into HEAD:
Replace the per-object resident and cached pages splay tree with a
path-compressed multi-digit radix trie.
Along with this, also switch the x86-specific handling of idle page
tables to use the radix trie.
This change is supposed to do the following:
- Allow read locking to be used for lookup operations on the
resident/cached page collections, since the per-vm_page_t splay iterators
are now removed.
- Increase the scalability of the operations on the page collections.
The radix trie relies on the consumers' locking to ensure the atomicity of
its operations. In order to avoid deadlocks, the bisection nodes are
pre-allocated in the UMA zone. This can be done safely because the
algorithm needs at most one new node per insert, which means the
maximum number of nodes required is the number of available physical
frames. However, a new bisection node is not always needed.
The radix trie implements path compression because UFS indirect blocks
can lead to several objects with a very sparse trie, increasing the number
of levels that usually have to be scanned. Path compression also helps
node prefetching by introducing the single-node-per-insert property.
This code is not generalized (yet) because making many of the sizes in
play configurable could cost performance.
However, the code may later be made more general and reused by other
consumers.
The only KPI change is the removal of the function vm_page_splay(), which
is now gone.
The only KBI change, instead, is the removal of the left/right iterators
from struct vm_page, which are now gone.
Further technical notes, broken into smaller pieces, can be retrieved from the
svn branch:
http://svn.freebsd.org/base/user/attilio/vmcontention/
Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
2013-03-18 00:25:02 +00:00
|
|
|
vm_radix_remove(&object->rtree, m->pindex);
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
TAILQ_REMOVE(&object->memq, m, listq);
|
|
|
|
object->resident_page_count--;
|
|
|
|
|
2009-07-12 23:31:20 +00:00
|
|
|
/*
|
|
|
|
* Restore the default memory attribute to the page.
|
|
|
|
*/
|
|
|
|
if (pmap_page_get_memattr(m) != VM_MEMATTR_DEFAULT)
|
|
|
|
pmap_page_set_memattr(m, VM_MEMATTR_DEFAULT);
|
|
|
|
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
/*
|
|
|
|
* Insert the page into the object's collection of cached pages
|
|
|
|
* and the physical memory allocator's cache/free page queues.
|
|
|
|
*/
|
2010-05-08 20:34:01 +00:00
|
|
|
m->flags &= ~PG_ZERO;
|
2007-02-07 06:37:30 +00:00
|
|
|
mtx_lock(&vm_page_queue_free_mtx);
|
2013-08-09 11:28:55 +00:00
|
|
|
cache_was_empty = vm_radix_is_empty(&object->cache);
|
|
|
|
if (vm_radix_insert(&object->cache, m)) {
|
|
|
|
mtx_unlock(&vm_page_queue_free_mtx);
|
|
|
|
if (object->resident_page_count == 0)
|
|
|
|
vdrop(object->handle);
|
|
|
|
m->object = NULL;
|
|
|
|
vm_page_free(m);
|
|
|
|
return;
|
|
|
|
}
|
2013-08-23 17:27:12 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The above call to vm_radix_insert() could reclaim the one pre-
|
|
|
|
* existing cached page from this object, resulting in a call to
|
|
|
|
* vdrop().
|
|
|
|
*/
|
|
|
|
if (!cache_was_empty)
|
|
|
|
cache_was_empty = vm_radix_is_singleton(&object->cache);
|
|
|
|
|
2008-01-02 04:43:47 +00:00
|
|
|
m->flags |= PG_CACHED;
|
2014-03-22 10:26:09 +00:00
|
|
|
vm_cnt.v_cache_count++;
|
2013-08-09 11:28:55 +00:00
|
|
|
PCPU_INC(cnt.v_tcached);
|
2007-12-29 19:53:04 +00:00
|
|
|
#if VM_NRESERVLEVEL > 0
|
|
|
|
if (!vm_reserv_free_page(m)) {
|
|
|
|
#else
|
|
|
|
if (TRUE) {
|
|
|
|
#endif
|
|
|
|
vm_phys_set_pool(VM_FREEPOOL_CACHE, m, 0);
|
|
|
|
vm_phys_free_pages(m, 0);
|
|
|
|
}
|
1996-06-16 20:37:31 +00:00
|
|
|
vm_page_free_wakeup();
|
2007-02-07 06:37:30 +00:00
|
|
|
mtx_unlock(&vm_page_queue_free_mtx);
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Increment the vnode's hold count if this is the object's only
|
|
|
|
* cached page. Decrement the vnode's hold count if this was
|
|
|
|
* the object's only resident page.
|
|
|
|
*/
|
|
|
|
if (object->type == OBJT_VNODE) {
|
Sync back vmcontention branch into HEAD:
Replace the per-object resident and cached pages splay tree with a
path-compressed multi-digit radix trie.
Along with this, also switch the x86-specific handling of idle page
tables to use the radix trie.
This change is supposed to do the following:
- Allow read locking to be used for lookup operations on the
resident/cached page collections, since the per-vm_page_t splay iterators
are now removed.
- Increase the scalability of the operations on the page collections.
The radix trie relies on the consumers' locking to ensure the atomicity of
its operations. In order to avoid deadlocks, the bisection nodes are
pre-allocated in the UMA zone. This can be done safely because the
algorithm needs at most one new node per insert, which means the
maximum number of nodes required is the number of available physical
frames. However, a new bisection node is not always needed.
The radix trie implements path compression because UFS indirect blocks
can lead to several objects with a very sparse trie, increasing the number
of levels that usually have to be scanned. Path compression also helps
node prefetching by introducing the single-node-per-insert property.
This code is not generalized (yet) because making many of the sizes in
play configurable could cost performance.
However, the code may later be made more general and reused by other
consumers.
The only KPI change is the removal of the function vm_page_splay(), which
is now gone.
The only KBI change, instead, is the removal of the left/right iterators
from struct vm_page, which are now gone.
Further technical notes, broken into smaller pieces, can be retrieved from the
svn branch:
http://svn.freebsd.org/base/user/attilio/vmcontention/
Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
2013-03-18 00:25:02 +00:00
|
|
|
if (cache_was_empty && object->resident_page_count != 0)
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
vhold(object->handle);
|
Sync back vmcontention branch into HEAD:
Replace the per-object resident and cached pages splay tree with a
path-compressed multi-digit radix trie.
Along with this, also switch the x86-specific handling of idle page
tables to use the radix trie.
This change is supposed to do the following:
- Allow read locking to be used for lookup operations on the
resident/cached page collections, since the per-vm_page_t splay iterators
are now removed.
- Increase the scalability of the operations on the page collections.
The radix trie relies on the consumers' locking to ensure the atomicity of
its operations. In order to avoid deadlocks, the bisection nodes are
pre-allocated in the UMA zone. This can be done safely because the
algorithm needs at most one new node per insert, which means the
maximum number of nodes required is the number of available physical
frames. However, a new bisection node is not always needed.
The radix trie implements path compression because UFS indirect blocks
can lead to several objects with a very sparse trie, increasing the number
of levels that usually have to be scanned. Path compression also helps
node prefetching by introducing the single-node-per-insert property.
This code is not generalized (yet) because making many of the sizes in
play configurable could cost performance.
However, the code may later be made more general and reused by other
consumers.
The only KPI change is the removal of the function vm_page_splay(), which
is now gone.
The only KBI change, instead, is the removal of the left/right iterators
from struct vm_page, which are now gone.
Further technical notes, broken into smaller pieces, can be retrieved from the
svn branch:
http://svn.freebsd.org/base/user/attilio/vmcontention/
Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
2013-03-18 00:25:02 +00:00
|
|
|
else if (!cache_was_empty && object->resident_page_count == 0)
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
vdrop(object->handle);
|
|
|
|
}
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
}
|
|
|
|
|
1999-09-17 04:56:40 +00:00
|
|
|
/*
|
2013-06-10 01:48:21 +00:00
|
|
|
* vm_page_advise
|
1999-09-17 04:56:40 +00:00
|
|
|
*
|
|
|
|
* Cache, deactivate, or do nothing as appropriate. This routine
|
2013-06-10 01:48:21 +00:00
|
|
|
* is used by madvise().
|
1999-09-17 04:56:40 +00:00
|
|
|
*
|
|
|
|
* Generally speaking we want to move the page into the cache so
|
|
|
|
* it gets reused quickly. However, this can result in a silly syndrome
|
|
|
|
* due to the page recycling too quickly. Small objects will not be
|
2013-06-10 01:48:21 +00:00
|
|
|
* fully cached. On the other hand, if we move the page to the inactive
|
1999-09-17 04:56:40 +00:00
|
|
|
* queue we wind up with a problem whereby very large objects
|
|
|
|
* unnecessarily blow away our inactive and cache queues.
|
|
|
|
*
|
|
|
|
* The solution is to move the pages based on a fixed weighting. We
|
|
|
|
* either leave them alone, deactivate them, or move them to the cache,
|
|
|
|
* where moving them to the cache has the highest weighting.
|
|
|
|
* By forcing some pages into other queues we eventually force the
|
|
|
|
* system to balance the queues, potentially recovering other unrelated
|
|
|
|
* space from active. The idea is to not force this to happen too
|
|
|
|
* often.
|
2012-10-03 05:06:45 +00:00
|
|
|
*
|
|
|
|
* The object and page must be locked.
|
1999-09-17 04:56:40 +00:00
|
|
|
*/
|
|
|
|
void
|
2013-06-10 01:48:21 +00:00
|
|
|
vm_page_advise(vm_page_t m, int advice)
|
1999-09-17 04:56:40 +00:00
|
|
|
{
|
2013-06-10 01:48:21 +00:00
|
|
|
int dnw, head;
|
1999-09-17 04:56:40 +00:00
|
|
|
|
2013-06-10 01:48:21 +00:00
|
|
|
vm_page_assert_locked(m);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2013-06-10 01:48:21 +00:00
|
|
|
if (advice == MADV_FREE) {
|
|
|
|
/*
|
|
|
|
* Mark the page clean. This will allow the page to be freed
|
|
|
|
* up by the system. However, such pages are often reused
|
|
|
|
* quickly by malloc() so we do not do anything that would
|
|
|
|
* cause a page fault if we can help it.
|
|
|
|
*
|
|
|
|
* Specifically, we do not try to actually free the page now
|
|
|
|
* nor do we try to put it in the cache (which would cause a
|
|
|
|
* page fault on reuse).
|
|
|
|
*
|
|
|
|
* But we do make the page as freeable as we can without
|
|
|
|
* actually taking the step of unmapping it.
|
|
|
|
*/
|
|
|
|
m->dirty = 0;
|
|
|
|
m->act_count = 0;
|
|
|
|
} else if (advice != MADV_DONTNEED)
|
|
|
|
return;
|
Roughly half of a typical pmap_mincore() implementation is machine-
independent code. Move this code into mincore(), and eliminate the
page queues lock from pmap_mincore().
Push down the page queues lock into pmap_clear_modify(),
pmap_clear_reference(), and pmap_is_modified(). Assert that these
functions are never passed an unmanaged page.
Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m:
Contrary to what the comment says, pmap_mincore() is not simply an
optimization. Without a complete pmap_mincore() implementation,
mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED
because only the pmap can provide this information.
Eliminate the page queues lock from vfs_setdirty_locked_object(),
vm_pageout_clean(), vm_object_page_collect_flush(), and
vm_object_page_clean(). Generally speaking, these are all accesses
to the page's dirty field, which are synchronized by the containing
vm object's lock.
Reduce the scope of the page queues lock in vm_object_madvise() and
vm_page_dontneed().
Reviewed by: kib (an earlier version)
2010-05-24 14:26:57 +00:00
|
|
|
dnw = PCPU_GET(dnweight);
|
|
|
|
PCPU_INC(dnweight);
|
1999-09-17 04:56:40 +00:00
|
|
|
|
|
|
|
/*
|
2010-05-26 18:00:44 +00:00
|
|
|
* Occasionally leave the page alone.
|
1999-09-17 04:56:40 +00:00
|
|
|
*/
|
2010-07-02 15:02:51 +00:00
|
|
|
if ((dnw & 0x01F0) == 0 || m->queue == PQ_INACTIVE) {
|
1999-09-17 04:56:40 +00:00
|
|
|
if (m->act_count >= ACT_INIT)
|
|
|
|
--m->act_count;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
Essentially, neither madvise(..., MADV_DONTNEED) nor madvise(..., MADV_FREE)
work. (Moreover, I don't believe that they have ever worked as intended.)
The explanation is fairly simple. Both MADV_DONTNEED and MADV_FREE perform
vm_page_dontneed() on each page within the range given to madvise(). This
function moves the page to the inactive queue. Specifically, if the page is
clean, it is moved to the head of the inactive queue where it is first in
line for processing by the page daemon. On the other hand, if it is dirty,
it is placed at the tail. Let's further examine the case in which the page
is clean. Recall that the page is at the head of the line for processing by
the page daemon. The expectation of vm_page_dontneed()'s author was that
the page would be transferred from the inactive queue to the cache queue by
the page daemon. (Once the page is in the cache queue, it is, in effect,
free, that is, it can be reallocated to a new vm object by vm_page_alloc()
if it isn't reactivated quickly enough by a user of the old vm object.) The
trouble is that nowhere in the execution of either MADV_DONTNEED or
MADV_FREE is either the machine-independent reference flag (PG_REFERENCED)
or the reference bit in any page table entry (PTE) mapping the page cleared.
Consequently, the immediate reaction of the page daemon is to reactivate the
page because it is referenced. In effect, the madvise() was for naught.
The case in which the page was dirty is not too different. Instead of being
laundered, the page is reactivated.
Note: The essential difference between MADV_DONTNEED and MADV_FREE is
that MADV_FREE clears a page's dirty field. So, MADV_FREE is always
executing the clean case above.
This revision changes vm_page_dontneed() to clear both the machine-
independent reference flag (PG_REFERENCED) and the reference bit in all PTEs
mapping the page.
MFC after: 6 weeks
2008-06-06 18:38:43 +00:00
|
|
|
/*
|
|
|
|
* Clear any references to the page. Otherwise, the page daemon will
|
|
|
|
* immediately reactivate the page.
|
|
|
|
*/
|
2011-09-06 10:30:11 +00:00
|
|
|
vm_page_aflag_clear(m, PGA_REFERENCED);
|
Essentially, neither madvise(..., MADV_DONTNEED) nor madvise(..., MADV_FREE)
work. (Moreover, I don't believe that they have ever worked as intended.)
The explanation is fairly simple. Both MADV_DONTNEED and MADV_FREE perform
vm_page_dontneed() on each page within the range given to madvise(). This
function moves the page to the inactive queue. Specifically, if the page is
clean, it is moved to the head of the inactive queue where it is first in
line for processing by the page daemon. On the other hand, if it is dirty,
it is placed at the tail. Let's further examine the case in which the page
is clean. Recall that the page is at the head of the line for processing by
the page daemon. The expectation of vm_page_dontneed()'s author was that
the page would be transferred from the inactive queue to the cache queue by
the page daemon. (Once the page is in the cache queue, it is, in effect,
free, that is, it can be reallocated to a new vm object by vm_page_alloc()
if it isn't reactivated quickly enough by a user of the old vm object.) The
trouble is that nowhere in the execution of either MADV_DONTNEED or
MADV_FREE is either the machine-independent reference flag (PG_REFERENCED)
or the reference bit in any page table entry (PTE) mapping the page cleared.
Consequently, the immediate reaction of the page daemon is to reactivate the
page because it is referenced. In effect, the madvise() was for naught.
The case in which the page was dirty is not too different. Instead of being
laundered, the page is reactivated.
Note: The essential difference between MADV_DONTNEED and MADV_FREE is
that MADV_FREE clears a page's dirty field. So, MADV_FREE is always
executing the clean case above.
This revision changes vm_page_dontneed() to clear both the machine-
independent reference flag (PG_REFERENCED) and the reference bit in all PTEs
mapping the page.
MFC after: 6 weeks
2008-06-06 18:38:43 +00:00
|
|
|
|
2013-06-10 01:48:21 +00:00
|
|
|
if (advice != MADV_FREE && m->dirty == 0 && pmap_is_modified(m))
|
2004-02-19 07:43:55 +00:00
|
|
|
vm_page_dirty(m);
|
1999-09-17 04:56:40 +00:00
|
|
|
|
|
|
|
if (m->dirty || (dnw & 0x0070) == 0) {
|
|
|
|
/*
|
|
|
|
* Deactivate the page 3 times out of 32.
|
|
|
|
*/
|
|
|
|
head = 0;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Cache the page 28 times out of every 32. Note that
|
|
|
|
* the page is deactivated instead of cached, but placed
|
|
|
|
* at the head of the queue instead of the tail.
|
|
|
|
*/
|
|
|
|
head = 1;
|
|
|
|
}
|
|
|
|
_vm_page_deactivate(m, head);
|
|
|
|
}
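To make the fixed weighting above concrete, here is a standalone userland sketch (not part of vm_page.c; the program and its variable names are illustrative only, and it assumes a clean page that is not already on the inactive queue). It walks a counter through 512 consecutive calls, applying the same 0x01F0 and 0x0070 masks as the code above, and reproduces the advertised split: 1 call in 32 leaves the page alone, 3 in 32 deactivate it at the tail, and 28 in 32 deactivate it at the head of the inactive queue.

#include <stdio.h>

int
main(void)
{
	int dnw, leave = 0, tail = 0, head = 0;

	for (dnw = 0; dnw < 512; dnw++) {
		if ((dnw & 0x01F0) == 0)
			leave++;	/* occasionally leave the page alone */
		else if ((dnw & 0x0070) == 0)
			tail++;		/* deactivate, placed at the tail */
		else
			head++;		/* deactivate, placed at the head */
	}
	/* Prints: per 512 calls: leave 16, tail 48, head 448 */
	printf("per 512 calls: leave %d, tail %d, head %d\n", leave, tail, head);
	return (0);
}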
|
|
|
|
|
1998-02-05 03:32:49 +00:00
|
|
|
/*
|
|
|
|
* Grab a page, waiting until we are woken up due to the page
|
|
|
|
* changing state. We keep on waiting as long as the page continues
|
2004-04-24 21:36:23 +00:00
|
|
|
* to be in the object. If the page doesn't exist, first allocate it
|
|
|
|
* and then conditionally zero it.
|
1998-12-23 01:52:47 +00:00
|
|
|
*
|
2012-10-03 05:06:45 +00:00
|
|
|
* This routine may sleep.
|
|
|
|
*
|
|
|
|
* The object must be locked on entry. The lock will, however, be released
|
|
|
|
* and reacquired if the routine sleeps.
|
1998-02-05 03:32:49 +00:00
|
|
|
*/
|
|
|
|
vm_page_t
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_grab(vm_object_t object, vm_pindex_t pindex, int allocflags)
|
1998-02-05 03:32:49 +00:00
|
|
|
{
|
|
|
|
vm_page_t m;
|
2013-08-09 11:11:11 +00:00
|
|
|
int sleep;
|
1998-02-05 03:32:49 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2013-08-09 11:11:11 +00:00
|
|
|
KASSERT((allocflags & VM_ALLOC_SBUSY) == 0 ||
|
|
|
|
(allocflags & VM_ALLOC_IGN_SBUSY) != 0,
|
|
|
|
("vm_page_grab: VM_ALLOC_SBUSY/VM_ALLOC_IGN_SBUSY mismatch"));
|
1998-02-05 03:32:49 +00:00
|
|
|
retrylookup:
|
|
|
|
if ((m = vm_page_lookup(object, pindex)) != NULL) {
|
2013-08-09 11:11:11 +00:00
|
|
|
sleep = (allocflags & VM_ALLOC_IGN_SBUSY) != 0 ?
|
|
|
|
vm_page_xbusied(m) : vm_page_busied(m);
|
|
|
|
if (sleep) {
|
2014-12-22 09:02:21 +00:00
|
|
|
if ((allocflags & VM_ALLOC_NOWAIT) != 0)
|
|
|
|
return (NULL);
|
2010-07-08 08:37:51 +00:00
|
|
|
/*
|
|
|
|
* Reference the page before unlocking and
|
|
|
|
* sleeping so that the page daemon is less
|
|
|
|
* likely to reclaim it.
|
|
|
|
*/
|
2011-09-06 10:30:11 +00:00
|
|
|
vm_page_aflag_set(m, PGA_REFERENCED);
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_lock(m);
|
|
|
|
VM_OBJECT_WUNLOCK(object);
|
|
|
|
vm_page_busy_sleep(m, "pgrbwt");
|
|
|
|
VM_OBJECT_WLOCK(object);
|
1998-02-05 03:32:49 +00:00
|
|
|
goto retrylookup;
|
|
|
|
} else {
|
2006-10-22 21:18:48 +00:00
|
|
|
if ((allocflags & VM_ALLOC_WIRED) != 0) {
|
2010-05-03 17:55:32 +00:00
|
|
|
vm_page_lock(m);
|
2002-07-28 23:46:19 +00:00
|
|
|
vm_page_wire(m);
|
2010-05-03 17:55:32 +00:00
|
|
|
vm_page_unlock(m);
|
2006-10-22 21:18:48 +00:00
|
|
|
}
|
2013-08-09 11:11:11 +00:00
|
|
|
if ((allocflags &
|
|
|
|
(VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) == 0)
|
|
|
|
vm_page_xbusy(m);
|
|
|
|
if ((allocflags & VM_ALLOC_SBUSY) != 0)
|
|
|
|
vm_page_sbusy(m);
|
2004-04-24 21:36:23 +00:00
|
|
|
return (m);
|
1998-02-05 03:32:49 +00:00
|
|
|
}
|
|
|
|
}
|
2014-12-22 09:00:47 +00:00
|
|
|
m = vm_page_alloc(object, pindex, allocflags);
|
1998-02-05 03:32:49 +00:00
|
|
|
if (m == NULL) {
|
2014-12-22 09:02:21 +00:00
|
|
|
if ((allocflags & VM_ALLOC_NOWAIT) != 0)
|
|
|
|
return (NULL);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(object);
|
1998-02-05 03:32:49 +00:00
|
|
|
VM_WAIT;
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WLOCK(object);
|
1998-02-05 03:32:49 +00:00
|
|
|
goto retrylookup;
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
} else if (m->valid != 0)
|
|
|
|
return (m);
|
2004-04-24 20:53:55 +00:00
|
|
|
if (allocflags & VM_ALLOC_ZERO && (m->flags & PG_ZERO) == 0)
|
|
|
|
pmap_zero_page(m);
|
2004-04-24 21:36:23 +00:00
|
|
|
return (m);
|
1998-02-05 03:32:49 +00:00
|
|
|
}
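The retry loop above is easiest to see from the caller's side. The following is a hedged sketch of a typical caller (the function name and surrounding context are hypothetical, not taken from this file); it relies only on the lock, busy, and allocation-flag primitives that appear in the code above.

/*
 * Hypothetical caller: grab a wired, zero-filled page at the given index.
 * The object lock may be dropped and retaken while vm_page_grab() sleeps.
 */
static vm_page_t
grab_wired_zeroed_page(vm_object_t object, vm_pindex_t pindex)
{
	vm_page_t m;

	VM_OBJECT_WLOCK(object);
	/*
	 * Without VM_ALLOC_NOWAIT this sleeps until it succeeds.  The page
	 * is returned exclusive busied and wired; a newly allocated page
	 * has been zeroed, but its valid bits are still clear.
	 */
	m = vm_page_grab(object, pindex,
	    VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_ZERO);
	vm_page_xunbusy(m);
	VM_OBJECT_WUNLOCK(object);
	/* The caller is responsible for vm_page_unwire() later. */
	return (m);
}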
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
|
|
|
|
/*
|
2012-10-03 05:06:45 +00:00
|
|
|
* Mapping function for valid or dirty bits in a page.
|
1999-04-05 19:38:30 +00:00
|
|
|
*
|
|
|
|
* Inputs are required to range within a page.
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
*/
|
2011-11-05 08:20:32 +00:00
|
|
|
vm_page_bits_t
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_page_bits(int base, int size)
|
|
|
|
{
|
1999-04-05 19:38:30 +00:00
|
|
|
int first_bit;
|
|
|
|
int last_bit;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
|
1999-04-05 19:38:30 +00:00
|
|
|
KASSERT(
|
|
|
|
base + size <= PAGE_SIZE,
|
|
|
|
("vm_page_bits: illegal base/size %d/%d", base, size)
|
|
|
|
);
|
Some VM improvements, including elimination of a lot of Sig-11
problems. Tor Egge and others have helped with various VM bugs
lately, but don't blame him -- blame me!!!
pmap.c:
1) Create an object for kernel page table allocations. This
fixes a bogus allocation method previously used for such, by
grabbing pages from the kernel object, using bogus pindexes.
(This was a code cleanup, and perhaps a minor system stability
issue.)
pmap.c:
2) Pre-set the modify and accessed bits when prudent. This will
decrease bus traffic under certain circumstances.
vfs_bio.c, vfs_cluster.c:
3) Rather than calculating the beginning virtual byte offset
multiple times, stick the offset into the buffer header, so
that the calculated offset can be reused. (Long long multiplies
are often expensive, and this is a probably unmeasurable performance
improvement, and code cleanup.)
vfs_bio.c:
4) Handle write recursion more intelligently (but not perfectly) so
that it is less likely to cause a system panic, and is also
much more robust.
vfs_bio.c:
5) getblk incorrectly wrote out blocks that are incorrectly sized.
The problem is fixed, and blocks are now written out ONLY when B_DELWRI
is true.
vfs_bio.c:
6) Check that already constituted buffers have fully valid pages. If
not, then make sure that the B_CACHE bit is not set. (This was
a major source of Sig-11 type problems.)
vfs_bio.c:
7) Fix a potential system deadlock due to an incorrectly specified
sleep priority while waiting for a buffer write operation. The
change that I made opens the system up to serious problems, and
we need to examine the issue of process sleep priorities.
vfs_cluster.c, vfs_bio.c:
8) Make clustered reads work more correctly (and more completely)
when buffers are already constituted, but not fully valid.
(This was another system reliability issue.)
vfs_subr.c, ffs_inode.c:
9) Create a vtruncbuf function, which is used by filesystems that
can truncate files. The vinvalbuf forced a file sync type operation,
while vtruncbuf only invalidates the buffers past the new end of file,
and also invalidates the appropriate pages. (This was a system reliability
and performance issue.)
10) Modify FFS to use vtruncbuf.
vm_object.c:
11) Make the object rundown mechanism for OBJT_VNODE type objects work
more correctly. Included in that fix, create pager entries for
the OBJT_DEAD pager type, so that paging requests that might slip
in during race conditions are properly handled. (This was a system
reliability issue.)
vm_page.c:
12) Make some of the page validation routines be a little less picky
about arguments passed to them. Also, have page invalidation
change the object generation count so that we handle generation
counts a little more robustly.
vm_pageout.c:
13) Further reduce pageout daemon activity when the system doesn't
need help from it. There should be no additional performance
decrease even when the pageout daemon is running. (This was
a significant performance issue.)
vnode_pager.c:
14) Teach the vnode pager to handle race conditions during vnode
deallocations.
1998-03-16 01:56:03 +00:00
|
|
|
|
1999-04-05 19:38:30 +00:00
|
|
|
if (size == 0) /* handle degenerate case */
|
2002-03-10 21:52:48 +00:00
|
|
|
return (0);
|
Some VM improvements, including elimination of a lot of Sig-11
problems. Tor Egge and others have helped with various VM bugs
lately, but don't blame him -- blame me!!!
pmap.c:
1) Create an object for kernel page table allocations. This
fixes a bogus allocation method previously used for such, by
grabbing pages from the kernel object, using bogus pindexes.
(This was a code cleanup, and perhaps a minor system stability
issue.)
pmap.c:
2) Pre-set the modify and accessed bits when prudent. This will
decrease bus traffic under certain circumstances.
vfs_bio.c, vfs_cluster.c:
3) Rather than calculating the beginning virtual byte offset
multiple times, stick the offset into the buffer header, so
that the calculated offset can be reused. (Long long multiplies
are often expensive, and this is a probably unmeasurable performance
improvement, and code cleanup.)
vfs_bio.c:
4) Handle write recursion more intelligently (but not perfectly) so
that it is less likely to cause a system panic, and is also
much more robust.
vfs_bio.c:
5) getblk incorrectly wrote out blocks that are incorrectly sized.
The problem is fixed, and blocks are now written out ONLY when B_DELWRI
is true.
vfs_bio.c:
6) Check that already constituted buffers have fully valid pages. If
not, then make sure that the B_CACHE bit is not set. (This was
a major source of Sig-11 type problems.)
vfs_bio.c:
7) Fix a potential system deadlock due to an incorrectly specified
sleep priority while waiting for a buffer write operation. The
change that I made opens the system up to serious problems, and
we need to examine the issue of process sleep priorities.
vfs_cluster.c, vfs_bio.c:
8) Make clustered reads work more correctly (and more completely)
when buffers are already constituted, but not fully valid.
(This was another system reliability issue.)
vfs_subr.c, ffs_inode.c:
9) Create a vtruncbuf function, which is used by filesystems that
can truncate files. The vinvalbuf forced a file sync type operation,
while vtruncbuf only invalidates the buffers past the new end of file,
and also invalidates the appropriate pages. (This was a system reliability
and performance issue.)
10) Modify FFS to use vtruncbuf.
vm_object.c:
11) Make the object rundown mechanism for OBJT_VNODE type objects work
more correctly. Included in that fix, create pager entries for
the OBJT_DEAD pager type, so that paging requests that might slip
in during race conditions are properly handled. (This was a system
reliability issue.)
vm_page.c:
12) Make some of the page validation routines be a little less picky
about arguments passed to them. Also, have page invalidation
change the object generation count so that we handle generation
counts a little more robustly.
vm_pageout.c:
13) Further reduce pageout daemon activity when the system doesn't
need help from it. There should be no additional performance
decrease even when the pageout daemon is running. (This was
a significant performance issue.)
vnode_pager.c:
14) Teach the vnode pager to handle race conditions during vnode
deallocations.
1998-03-16 01:56:03 +00:00
|
|
|
|
1999-04-05 19:38:30 +00:00
|
|
|
first_bit = base >> DEV_BSHIFT;
|
|
|
|
last_bit = (base + size - 1) >> DEV_BSHIFT;
|
|
|
|
|
2011-11-05 08:20:32 +00:00
|
|
|
return (((vm_page_bits_t)2 << last_bit) -
|
|
|
|
((vm_page_bits_t)1 << first_bit));
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
}
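A worked example of the mask computation (a standalone sketch, not part of vm_page.c; it assumes the common DEV_BSIZE of 512, i.e. DEV_BSHIFT == 9, and copies the arithmetic above into an illustrative helper): for base = 512 and size = 1024 the first and last touched blocks are 1 and 2, so the result is (2 << 2) - (1 << 1) = 0x6, i.e. bits 1 and 2; a range wholly inside block 0, such as base = 100 and size = 300, yields 0x1.

#include <stdio.h>
#include <stdint.h>

#define DEV_BSHIFT	9	/* assumes DEV_BSIZE == 512 */

/* Illustrative standalone copy of the vm_page_bits() arithmetic. */
static uint64_t
page_bits(int base, int size)
{
	int first_bit, last_bit;

	if (size == 0)
		return (0);
	first_bit = base >> DEV_BSHIFT;
	last_bit = (base + size - 1) >> DEV_BSHIFT;
	return (((uint64_t)2 << last_bit) - ((uint64_t)1 << first_bit));
}

int
main(void)
{
	printf("0x%jx\n", (uintmax_t)page_bits(512, 1024));	/* 0x6 */
	printf("0x%jx\n", (uintmax_t)page_bits(100, 300));	/* 0x1 */
	return (0);
}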
|
|
|
|
|
2009-05-13 05:39:39 +00:00
|
|
|
/*
|
2011-11-30 17:39:00 +00:00
|
|
|
* vm_page_set_valid_range:
|
2009-05-13 05:39:39 +00:00
|
|
|
*
|
|
|
|
* Sets portions of a page valid. The arguments are expected
|
|
|
|
* to be DEV_BSIZE aligned, but if they aren't, the bitmap is inclusive
|
|
|
|
* of any partial chunks touched by the range. The invalid portion of
|
|
|
|
* such chunks will be zeroed.
|
|
|
|
*
|
|
|
|
* (base + size) must be less than or equal to PAGE_SIZE.
|
|
|
|
*/
|
|
|
|
void
|
2011-11-30 17:39:00 +00:00
|
|
|
vm_page_set_valid_range(vm_page_t m, int base, int size)
|
2009-05-13 05:39:39 +00:00
|
|
|
{
|
|
|
|
int endoff, frag;
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2009-05-13 05:39:39 +00:00
|
|
|
if (size == 0) /* handle degenerate case */
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the base is not DEV_BSIZE aligned and the valid
|
|
|
|
* bit is clear, we have to zero out a portion of the
|
|
|
|
* first block.
|
|
|
|
*/
|
|
|
|
if ((frag = base & ~(DEV_BSIZE - 1)) != base &&
|
|
|
|
(m->valid & (1 << (base >> DEV_BSHIFT))) == 0)
|
|
|
|
pmap_zero_page_area(m, frag, base - frag);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the ending offset is not DEV_BSIZE aligned and the
|
|
|
|
* valid bit is clear, we have to zero out a portion of
|
|
|
|
* the last block.
|
|
|
|
*/
|
|
|
|
endoff = base + size;
|
|
|
|
if ((frag = endoff & ~(DEV_BSIZE - 1)) != endoff &&
|
|
|
|
(m->valid & (1 << (endoff >> DEV_BSHIFT))) == 0)
|
|
|
|
pmap_zero_page_area(m, endoff,
|
|
|
|
DEV_BSIZE - (endoff & (DEV_BSIZE - 1)));
|
|
|
|
|
2009-05-30 22:06:58 +00:00
|
|
|
/*
|
|
|
|
* Assert that no previously invalid block that is now being validated
|
|
|
|
* is already dirty.
|
|
|
|
*/
|
|
|
|
KASSERT((~m->valid & vm_page_bits(base, size) & m->dirty) == 0,
|
2011-11-30 17:39:00 +00:00
|
|
|
("vm_page_set_valid_range: page %p is dirty", m));
|
2009-05-30 22:06:58 +00:00
|
|
|
|
2009-05-13 05:39:39 +00:00
|
|
|
/*
|
|
|
|
* Set valid bits inclusive of any overlap.
|
|
|
|
*/
|
|
|
|
m->valid |= vm_page_bits(base, size);
|
|
|
|
}
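To make the partial-chunk handling above concrete, a worked example assuming DEV_BSIZE == 512: vm_page_set_valid_range(m, 100, 300) touches only block 0. If block 0's valid bit is clear, the first test zeroes bytes [0, 100) and the second zeroes bytes [400, 512) via pmap_zero_page_area(), so the parts of the block outside the caller's range hold no stale data; m->valid then gains only bit 0, the single bit returned by vm_page_bits(100, 300).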
|
|
|
|
|
2010-06-02 15:46:37 +00:00
|
|
|
/*
|
|
|
|
* Clear the given bits from the specified page's dirty field.
|
|
|
|
*/
|
|
|
|
static __inline void
|
2011-11-05 08:20:32 +00:00
|
|
|
vm_page_clear_dirty_mask(vm_page_t m, vm_page_bits_t pagebits)
|
2010-06-02 15:46:37 +00:00
|
|
|
{
|
2011-09-28 14:57:50 +00:00
|
|
|
uintptr_t addr;
|
|
|
|
#if PAGE_SIZE < 16384
|
|
|
|
int shift;
|
|
|
|
#endif
|
2010-06-02 15:46:37 +00:00
|
|
|
|
|
|
|
/*
|
2013-08-09 11:11:11 +00:00
|
|
|
* If the object is locked and the page is neither exclusive busy nor
|
2012-06-16 18:56:19 +00:00
|
|
|
* write mapped, then the page's dirty field cannot possibly be
|
2011-09-28 14:57:50 +00:00
|
|
|
* set by a concurrent pmap operation.
|
2010-06-02 15:46:37 +00:00
|
|
|
*/
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2013-08-09 11:11:11 +00:00
|
|
|
if (!vm_page_xbusied(m) && !pmap_page_is_write_mapped(m))
|
2010-06-02 15:46:37 +00:00
|
|
|
m->dirty &= ~pagebits;
|
|
|
|
else {
|
2011-06-19 19:13:24 +00:00
|
|
|
/*
|
2011-09-28 14:57:50 +00:00
|
|
|
* The pmap layer can call vm_page_dirty() without
|
|
|
|
* holding a distinguished lock. The combination of
|
|
|
|
* the object's lock and an atomic operation suffice
|
|
|
|
* to guarantee consistency of the page dirty field.
|
|
|
|
*
|
|
|
|
* For the PAGE_SIZE == 32768 case, the compiler already
|
|
|
|
* properly aligns the dirty field, so no forcible
|
|
|
|
* alignment is needed. We only require the existence of
|
2011-09-28 16:12:15 +00:00
|
|
|
* atomic_clear_64 when page size is 32768.
|
2011-06-19 19:13:24 +00:00
|
|
|
*/
|
2011-09-28 14:57:50 +00:00
|
|
|
addr = (uintptr_t)&m->dirty;
|
|
|
|
#if PAGE_SIZE == 32768
|
|
|
|
atomic_clear_64((uint64_t *)addr, pagebits);
|
2011-06-19 19:13:24 +00:00
|
|
|
#elif PAGE_SIZE == 16384
|
2011-09-28 14:57:50 +00:00
|
|
|
atomic_clear_32((uint32_t *)addr, pagebits);
|
|
|
|
#else /* PAGE_SIZE <= 8192 */
|
2011-06-19 19:13:24 +00:00
|
|
|
/*
|
2011-09-28 16:12:15 +00:00
|
|
|
* Use a trick to perform a 32-bit atomic on the
|
|
|
|
* containing aligned word, to not depend on the existence
|
|
|
|
* of atomic_clear_{8, 16}.
|
2011-06-19 19:13:24 +00:00
|
|
|
*/
|
2011-09-28 14:57:50 +00:00
|
|
|
shift = addr & (sizeof(uint32_t) - 1);
|
|
|
|
#if BYTE_ORDER == BIG_ENDIAN
|
|
|
|
shift = (sizeof(uint32_t) - sizeof(m->dirty) - shift) * NBBY;
|
|
|
|
#else
|
|
|
|
shift *= NBBY;
|
2011-06-19 19:13:24 +00:00
|
|
|
#endif
|
2011-09-28 14:57:50 +00:00
|
|
|
addr &= ~(sizeof(uint32_t) - 1);
|
|
|
|
atomic_clear_32((uint32_t *)addr, pagebits << shift);
|
|
|
|
#endif /* PAGE_SIZE */
|
2010-06-02 15:46:37 +00:00
|
|
|
}
|
|
|
|
}
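The sub-word atomic trick can be demonstrated in isolation. The following standalone sketch is not kernel code: struct fake_page and clear_dirty_bits() are hypothetical, the GCC/Clang __atomic_fetch_and() builtin stands in for atomic_clear_32(), and the leading uint32_t member merely keeps the containing 32-bit word inside the structure. It clears bits of a 16-bit field by shifting the mask to the field's position within the aligned word, exactly as the PAGE_SIZE <= 8192 case above does.

#include <stdio.h>
#include <stdint.h>

#define NBBY	8

struct fake_page {
	uint32_t pad0;		/* keeps the containing word inside the struct */
	uint16_t flags;
	uint16_t dirty;		/* the sub-word field we want to clear bits in */
};

/* Clear 'pagebits' from p->dirty using only a 32-bit atomic operation. */
static void
clear_dirty_bits(struct fake_page *p, uint16_t pagebits)
{
	uintptr_t addr;
	int shift;

	addr = (uintptr_t)&p->dirty;
	shift = addr & (sizeof(uint32_t) - 1);
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	shift = (sizeof(uint32_t) - sizeof(p->dirty) - shift) * NBBY;
#else
	shift *= NBBY;
#endif
	addr &= ~(uintptr_t)(sizeof(uint32_t) - 1);
	__atomic_fetch_and((uint32_t *)addr,
	    ~((uint32_t)pagebits << shift), __ATOMIC_SEQ_CST);
}

int
main(void)
{
	struct fake_page p = { .pad0 = 0, .flags = 0xffff, .dirty = 0x00ff };

	clear_dirty_bits(&p, 0x000f);
	/* Prints: dirty 0x00f0 flags 0xffff (the neighbour is untouched). */
	printf("dirty 0x%04x flags 0x%04x\n", p.dirty, p.flags);
	return (0);
}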
|
|
|
|
|
1995-09-03 19:57:25 +00:00
|
|
|
/*
|
The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS. These hacks have caused no
end of trouble, especially when combined with mmap(). I've removed
them. Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write. NFS does, however,
optimize piecemeal appends to files. For most common file operations,
you will not notice the difference. The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations. NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write. There is quite a bit of room for further
optimization in these areas.
The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most notably in vm_fault. This
is not correct operation. The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid. A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid. This operation is
necessary to properly support mmap(). The zeroing occurs most often
when dealing with file-EOF situations. Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.
getblk() and allocbuf() have been rewritten. B_CACHE operation is now
formally defined in comments and more straightforward in
implementation. B_CACHE for VMIO buffers is based on the validity of
the backing store. B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa). biodone() is now responsible for setting B_CACHE
when a successful read completes. B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated. VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE. This means that bowrite() and bawrite() also
set B_CACHE indirectly.
There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount. These have been fixed. getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.
Major fixes to NFS/TCP have been made. A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain. The server's kernel must be
recompiled to get the benefit of the fixes.
Submitted by: Matthew Dillon <dillon@apollo.backplane.com>
1999-05-02 23:57:16 +00:00
|
|
|
* vm_page_set_validclean:
|
1999-04-05 19:38:30 +00:00
|
|
|
*
|
The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS. These hacks have caused no
end of trouble, especially when combined with mmap(). I've removed
them. Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write. NFS does, however,
optimize piecemeal appends to files. For most common file operations,
you will not notice the difference. The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations. NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write. There is quite a bit of room for further
optimization in these areas.
The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most notably in vm_fault. This
is not correct operation. The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid. A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid. This operation is
necessary to properly support mmap(). The zeroing occurs most often
when dealing with file-EOF situations. Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.
getblk() and allocbuf() have been rewritten. B_CACHE operation is now
formally defined in comments and more straightforward in
implementation. B_CACHE for VMIO buffers is based on the validity of
the backing store. B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa). biodone() is now responsible for setting B_CACHE
when a successful read completes. B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated. VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE. This means that bowrite() and bawrite() also
set B_CACHE indirectly.
There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount. These have been fixed. getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.
Major fixes to NFS/TCP have been made. A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain. The server's kernel must be
recompiled to get the benefit of the fixes.
Submitted by: Matthew Dillon <dillon@apollo.backplane.com>
1999-05-02 23:57:16 +00:00
|
|
|
* Sets portions of a page valid and clean. The arguments are expected
|
|
|
|
* to be DEV_BSIZE aligned, but if they aren't, the bitmap is inclusive
|
|
|
|
* of any partial chunks touched by the range. The invalid portion of
|
|
|
|
* such chunks will be zero'd.
|
1999-04-05 19:38:30 +00:00
|
|
|
*
|
The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS. These hacks have caused no
end of trouble, especially when combined with mmap(). I've removed
them. Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write. NFS does, however,
optimize piecemeal appends to files. For most common file operations,
you will not notice the difference. The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations. NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write. There is quite a bit of room for further
optimization in these areas.
The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most notably in vm_fault. This
is not correct operation. The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid. A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid. This operation is
necessary to properly support mmap(). The zeroing occurs most often
when dealing with file-EOF situations. Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.
getblk() and allocbuf() have been rewritten. B_CACHE operation is now
formally defined in comments and more straightforward in
implementation. B_CACHE for VMIO buffers is based on the validity of
the backing store. B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa). biodone() is now responsible for setting B_CACHE
when a successful read completes. B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated. VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE. This means that bowrite() and bawrite() also
set B_CACHE indirectly.
There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount. These have been fixed. getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.
Major fixes to NFS/TCP have been made. A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain. The server's kernel must be
recompiled to get the benefit of the fixes.
Submitted by: Matthew Dillon <dillon@apollo.backplane.com>
1999-05-02 23:57:16 +00:00
|
|
|
* (base + size) must be less than or equal to PAGE_SIZE.
|
1995-09-03 19:57:25 +00:00
|
|
|
*/
|
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_set_validclean(vm_page_t m, int base, int size)
|
1995-09-03 19:57:25 +00:00
|
|
|
{
|
2011-11-05 08:20:32 +00:00
|
|
|
vm_page_bits_t oldvalid, pagebits;
|
|
|
|
int endoff, frag;
|
1999-04-05 19:38:30 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
1999-04-05 19:38:30 +00:00
|
|
|
if (size == 0) /* handle degenerate case */
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the base is not DEV_BSIZE aligned and the valid
|
|
|
|
* bit is clear, we have to zero out a portion of the
|
|
|
|
* first block.
|
|
|
|
*/
|
|
|
|
if ((frag = base & ~(DEV_BSIZE - 1)) != base &&
|
2011-11-05 08:20:32 +00:00
|
|
|
(m->valid & ((vm_page_bits_t)1 << (base >> DEV_BSHIFT))) == 0)
|
2002-04-15 16:00:03 +00:00
|
|
|
pmap_zero_page_area(m, frag, base - frag);
|
1999-04-05 19:38:30 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If the ending offset is not DEV_BSIZE aligned and the
|
|
|
|
* valid bit is clear, we have to zero out a portion of
|
|
|
|
* the last block.
|
|
|
|
*/
|
|
|
|
endoff = base + size;
|
|
|
|
if ((frag = endoff & ~(DEV_BSIZE - 1)) != endoff &&
|
2011-11-05 08:20:32 +00:00
|
|
|
(m->valid & ((vm_page_bits_t)1 << (endoff >> DEV_BSHIFT))) == 0)
|
2002-04-15 16:00:03 +00:00
|
|
|
pmap_zero_page_area(m, endoff,
|
|
|
|
DEV_BSIZE - (endoff & (DEV_BSIZE - 1)));
|
1999-04-05 19:38:30 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Set valid, clear dirty bits. If validating the entire
|
1999-12-12 03:19:33 +00:00
|
|
|
* page we can safely clear the pmap modify bit. We also
|
2006-08-13 00:11:09 +00:00
|
|
|
* use this opportunity to clear the VPO_NOSYNC flag. If a process
|
1999-12-12 03:19:33 +00:00
|
|
|
* takes a write fault on a MAP_NOSYNC memory area the flag will
|
|
|
|
* be set again.
|
2001-12-14 01:16:57 +00:00
|
|
|
*
|
|
|
|
* We set valid bits inclusive of any overlap, but we can only
|
|
|
|
* clear dirty bits for DEV_BSIZE chunks that are fully within
|
|
|
|
* the range.
|
1999-04-05 19:38:30 +00:00
|
|
|
*/
|
2010-06-02 15:46:37 +00:00
|
|
|
oldvalid = m->valid;
|
1999-04-05 19:38:30 +00:00
|
|
|
pagebits = vm_page_bits(base, size);
|
1995-09-03 19:57:25 +00:00
|
|
|
m->valid |= pagebits;
|
2001-12-14 01:16:57 +00:00
|
|
|
#if 0 /* NOT YET */
|
|
|
|
if ((frag = base & (DEV_BSIZE - 1)) != 0) {
|
|
|
|
frag = DEV_BSIZE - frag;
|
|
|
|
base += frag;
|
|
|
|
size -= frag;
|
|
|
|
if (size < 0)
|
|
|
|
size = 0;
|
|
|
|
}
|
|
|
|
pagebits = vm_page_bits(base, size & (DEV_BSIZE - 1));
|
|
|
|
#endif
|
1999-12-12 03:19:33 +00:00
|
|
|
if (base == 0 && size == PAGE_SIZE) {
|
2010-06-02 15:46:37 +00:00
|
|
|
/*
|
|
|
|
* The page can only be modified within the pmap if it is
|
|
|
|
* mapped, and it can only be mapped if it was previously
|
|
|
|
* fully valid.
|
|
|
|
*/
|
|
|
|
if (oldvalid == VM_PAGE_BITS_ALL)
|
|
|
|
/*
|
|
|
|
* Perform the pmap_clear_modify() first. Otherwise,
|
|
|
|
* a concurrent pmap operation, such as
|
|
|
|
* pmap_protect(), could clear a modification in the
|
|
|
|
* pmap and set the dirty field on the page before
|
|
|
|
* pmap_clear_modify() had begun and after the dirty
|
|
|
|
* field was cleared here.
|
|
|
|
*/
|
|
|
|
pmap_clear_modify(m);
|
|
|
|
m->dirty = 0;
|
2006-08-13 00:11:09 +00:00
|
|
|
m->oflags &= ~VPO_NOSYNC;
|
2010-06-02 15:46:37 +00:00
|
|
|
} else if (oldvalid != VM_PAGE_BITS_ALL)
|
|
|
|
m->dirty &= ~pagebits;
|
|
|
|
else
|
|
|
|
vm_page_clear_dirty_mask(m, pagebits);
|
1995-09-03 19:57:25 +00:00
|
|
|
}
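As a hedged usage sketch (the function below is hypothetical, not taken from this file): an I/O completion path that has just read an entire page would typically mark it valid and clean in one step, which also clears the pmap modify bit and VPO_NOSYNC as described above.

/* Hypothetical completion-path helper; the object lock is required. */
static void
page_read_done(vm_object_t object, vm_page_t m)
{
	VM_OBJECT_WLOCK(object);
	/* Whole page: all valid bits set, dirty and pmap modify bit cleared. */
	vm_page_set_validclean(m, 0, PAGE_SIZE);
	VM_OBJECT_WUNLOCK(object);
}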
|
|
|
|
|
The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS. These hacks have caused no
end of trouble, especially when combined with mmap(). I've removed
them. Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write. NFS does, however,
optimize piecemeal appends to files. For most common file operations,
you will not notice the difference. The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations. NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write. There is quite a bit of room for further
optimization in these areas.
The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most notably in vm_fault. This
is not correct operation. The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid. A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid. This operation is
necessary to properly support mmap(). The zeroing occurs most often
when dealing with file-EOF situations. Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.
getblk() and allocbuf() have been rewritten. B_CACHE operation is now
formally defined in comments and more straightforward in
implementation. B_CACHE for VMIO buffers is based on the validity of
the backing store. B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa). biodone() is now responsible for setting B_CACHE
when a successful read completes. B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated. VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE. This means that bowrite() and bawrite() also
set B_CACHE indirectly.
There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount. These have been fixed. getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.
Major fixes to NFS/TCP have been made. A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain. The server's kernel must be
recompiled to get the benefit of the fixes.
Submitted by: Matthew Dillon <dillon@apollo.backplane.com>
1999-05-02 23:57:16 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_clear_dirty(vm_page_t m, int base, int size)
|
The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS. These hacks have caused no
end of trouble, especially when combined with mmap(). I've removed
them. Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write. NFS does, however,
optimize piecemeal appends to files. For most common file operations,
you will not notice the difference. The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations. NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write. There is quite a bit of room for further
optimization in these areas.
The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most notably in vm_fault. This
is not correct operation. The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid. A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid. This operation is
necessary to properly support mmap(). The zeroing occurs most often
when dealing with file-EOF situations. Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.
getblk() and allocbuf() have been rewritten. B_CACHE operation is now
formally defined in comments and more straightforward in
implementation. B_CACHE for VMIO buffers is based on the validity of
the backing store. B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa). biodone() is now responsible for setting B_CACHE
when a successful read completes. B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated. VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE. This means that bowrite() and bawrite() also
set B_CACHE indirectly.
There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount. These have been fixed. getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.
Major fixes to NFS/TCP have been made. A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain. The server's kernel must be
recompiled to get the benefit of the fixes.
Submitted by: Matthew Dillon <dillon@apollo.backplane.com>
1999-05-02 23:57:16 +00:00
|
|
|
{
|
2003-08-23 18:11:53 +00:00
|
|
|
|
2010-06-02 15:46:37 +00:00
|
|
|
vm_page_clear_dirty_mask(m, vm_page_bits(base, size));
|
1999-05-02 23:57:16 +00:00
|
|
|
}
|
|
|
|
|
1995-01-09 16:06:02 +00:00
|
|
|
/*
|
1999-05-02 23:57:16 +00:00
|
|
|
* vm_page_set_invalid:
|
|
|
|
*
|
|
|
|
* Invalidates DEV_BSIZE'd chunks within a page. Both the
|
|
|
|
* valid and dirty bits for the affected areas are cleared.
|
1995-01-09 16:06:02 +00:00
|
|
|
*/
|
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_set_invalid(vm_page_t m, int base, int size)
|
1995-01-09 16:06:02 +00:00
|
|
|
{
|
2011-11-05 08:20:32 +00:00
|
|
|
vm_page_bits_t bits;
|
2013-09-14 10:11:38 +00:00
|
|
|
vm_object_t object;
|
1995-01-09 16:06:02 +00:00
|
|
|
|
2013-09-14 10:11:38 +00:00
|
|
|
object = m->object;
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
|
|
|
if (object->type == OBJT_VNODE && base == 0 && IDX_TO_OFF(m->pindex) +
|
|
|
|
size >= object->un_pager.vnp.vnp_size)
|
|
|
|
bits = VM_PAGE_BITS_ALL;
|
|
|
|
else
|
|
|
|
bits = vm_page_bits(base, size);
|
2006-01-24 07:21:38 +00:00
|
|
|
if (m->valid == VM_PAGE_BITS_ALL && bits != 0)
|
|
|
|
pmap_remove_all(m);
|
2013-09-14 10:11:38 +00:00
|
|
|
KASSERT((bits == 0 && m->valid == VM_PAGE_BITS_ALL) ||
|
|
|
|
!pmap_page_is_mapped(m),
|
2010-05-18 16:40:29 +00:00
|
|
|
("vm_page_set_invalid: page %p is mapped", m));
|
1999-05-02 23:57:16 +00:00
|
|
|
m->valid &= ~bits;
|
|
|
|
m->dirty &= ~bits;
|
1995-01-09 16:06:02 +00:00
|
|
|
}
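As a minimal userspace model of the per-DEV_BSIZE validity bitmap that vm_page_set_invalid() above manipulates (assuming a 4 KB page and 512-byte DEV_BSIZE, so eight chunks fit in a uint8_t mask; page_bits() is a stand-in for vm_page_bits(), not the kernel routine):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096			/* assumed for this sketch */
#define DEV_BSIZE 512
#define NCHUNKS (PAGE_SIZE / DEV_BSIZE)	/* 8 chunks -> fits a uint8_t mask */

/* Stand-in for vm_page_bits(): one bit per DEV_BSIZE chunk in [base, base+size). */
static uint8_t
page_bits(int base, int size)
{
	int first, last;

	if (size == 0)
		return (0);
	first = base / DEV_BSIZE;
	last = (base + size - 1) / DEV_BSIZE;
	return ((uint8_t)(((1u << (last + 1)) - 1) & ~((1u << first) - 1)));
}

int
main(void)
{
	uint8_t valid = 0xff, dirty = 0x0f;	/* hypothetical page state */
	uint8_t bits = page_bits(1024, 1024);	/* chunks 2 and 3 */

	/* Mirror the effect of vm_page_set_invalid() on the two bitmaps. */
	valid &= ~bits;
	dirty &= ~bits;
	printf("bits=0x%02x valid=0x%02x dirty=0x%02x\n", bits, valid, dirty);
	return (0);
}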
|
|
|
|
|
|
|
|
/*
|
1999-04-05 19:38:30 +00:00
|
|
|
* vm_page_zero_invalid()
|
|
|
|
*
|
|
|
|
* The kernel assumes that the invalid portions of a page contain
|
|
|
|
* garbage, but such pages can be mapped into memory by user code.
|
|
|
|
* When this occurs, we must zero out the non-valid portions of the
|
|
|
|
* page so user code sees what it expects.
|
|
|
|
*
|
|
|
|
* Pages are most often semi-valid when the end of a file is mapped
|
|
|
|
* into memory and the file's size is not page aligned.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_page_zero_invalid(vm_page_t m, boolean_t setvalid)
|
|
|
|
{
|
|
|
|
int b;
|
|
|
|
int i;
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
1999-04-05 19:38:30 +00:00
|
|
|
/*
|
|
|
|
* Scan the valid bits looking for invalid sections that
|
|
|
|
* must be zeroed. Invalid sub-DEV_BSIZE'd areas (where the
|
|
|
|
* valid bit may be set) have already been zeroed by
|
|
|
|
* vm_page_set_validclean().
|
|
|
|
*/
|
|
|
|
for (b = i = 0; i <= PAGE_SIZE / DEV_BSIZE; ++i) {
|
|
|
|
if (i == (PAGE_SIZE / DEV_BSIZE) ||
|
2011-11-05 08:20:32 +00:00
|
|
|
(m->valid & ((vm_page_bits_t)1 << i))) {
|
1999-04-05 19:38:30 +00:00
|
|
|
if (i > b) {
|
2002-04-15 16:00:03 +00:00
|
|
|
pmap_zero_page_area(m,
|
|
|
|
b << DEV_BSHIFT, (i - b) << DEV_BSHIFT);
|
1999-04-05 19:38:30 +00:00
|
|
|
}
|
|
|
|
b = i + 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* setvalid is TRUE when we can safely set the zero'd areas
|
|
|
|
* as being valid. We can do this if there are no cache consistency
|
|
|
|
* issues, e.g., it is ok to do with UFS, but not ok to do with NFS.
|
|
|
|
*/
|
|
|
|
if (setvalid)
|
|
|
|
m->valid = VM_PAGE_BITS_ALL;
|
|
|
|
}
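The same bitmap model can illustrate the run-scanning loop in vm_page_zero_invalid() above: walk the per-DEV_BSIZE valid bits and zero each maximal run of invalid chunks. This is a userspace sketch under the same assumed 4 KB/512-byte geometry, operating on a plain byte array rather than a vm_page.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096			/* assumed for this sketch */
#define DEV_BSIZE 512
#define NCHUNKS (PAGE_SIZE / DEV_BSIZE)

int
main(void)
{
	uint8_t page[PAGE_SIZE];
	uint8_t valid = 0x1d;		/* hypothetical: chunks 1, 5, 6, 7 invalid */
	int b, i;

	memset(page, 0xaa, sizeof(page));	/* stand-in for "garbage" contents */

	/* Zero each maximal run of invalid DEV_BSIZE chunks, as the loop above does. */
	for (b = i = 0; i <= NCHUNKS; i++) {
		if (i == NCHUNKS || (valid & (1u << i)) != 0) {
			if (i > b)
				memset(page + b * DEV_BSIZE, 0,
				    (size_t)(i - b) * DEV_BSIZE);
			b = i + 1;
		}
	}
	printf("byte 512 (chunk 1) is now %u\n", page[512]);
	return (0);
}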
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_page_is_valid:
|
|
|
|
*
|
|
|
|
* Is (partial) page valid? Note that the case where size == 0
|
|
|
|
* will return FALSE in the degenerate case where the page is
|
|
|
|
* entirely invalid, and TRUE otherwise.
|
1995-01-09 16:06:02 +00:00
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_is_valid(vm_page_t m, int base, int size)
|
1995-01-09 16:06:02 +00:00
|
|
|
{
|
2011-11-05 08:20:32 +00:00
|
|
|
vm_page_bits_t bits;
|
1995-01-09 16:06:02 +00:00
|
|
|
|
2013-08-09 11:11:11 +00:00
|
|
|
VM_OBJECT_ASSERT_LOCKED(m->object);
|
2011-11-05 08:20:32 +00:00
|
|
|
bits = vm_page_bits(base, size);
|
2013-03-12 12:20:49 +00:00
|
|
|
return (m->valid != 0 && (m->valid & bits) == bits);
|
1995-01-09 16:06:02 +00:00
|
|
|
}
|
|
|
|
|
Add a page size field to struct vm_page. Increase the page size field when
a partially populated reservation becomes fully populated, and decrease this
field when a fully populated reservation becomes partially populated.
Use this field to simplify the implementation of pmap_enter_object() on
amd64, arm, and i386.
On all architectures where we support superpages, the cost of creating a
superpage mapping is roughly the same as creating a base page mapping. For
example, both kinds of mappings entail the creation of a single PTE and PV
entry. With this in mind, use the page size field to make the
implementation of vm_map_pmap_enter(..., MAP_PREFAULT_PARTIAL) a little
smarter. Previously, if MAP_PREFAULT_PARTIAL was specified to
vm_map_pmap_enter(), that function would only map base pages. Now, it will
create up to 96 base page or superpage mappings.
Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division
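A small arithmetic sketch of what the new field represents; the sizes are assumptions for illustration only (4 KB base pages and a 2 MB superpage, i.e. psind 1). A page with a nonzero page size index stands for that many physically contiguous base pages, which is exactly what vm_page_ps_is_valid() below walks.

#include <stdio.h>

int
main(void)
{
	/* Assumed sizes for this sketch only: 4 KB base pages, 2 MB superpages. */
	unsigned long pagesizes[] = { 4096, 2 * 1024 * 1024 };
	int psind = 1;
	unsigned long npages = pagesizes[psind] / pagesizes[0];

	/* A psind > 0 page covers this many adjacent vm_page_array[] entries. */
	printf("psind %d covers %lu base pages\n", psind, npages);
	return (0);
}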
2014-06-07 17:12:26 +00:00
|
|
|
/*
|
|
|
|
* vm_page_ps_is_valid:
|
|
|
|
*
|
|
|
|
* Returns TRUE if the entire (super)page is valid and FALSE otherwise.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
vm_page_ps_is_valid(vm_page_t m)
|
|
|
|
{
|
|
|
|
int i, npages;
|
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_LOCKED(m->object);
|
|
|
|
npages = atop(pagesizes[m->psind]);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The physically contiguous pages that make up a superpage, i.e., a
|
|
|
|
* page with a page size index ("psind") greater than zero, will
|
|
|
|
* occupy adjacent entries in vm_page_array[].
|
|
|
|
*/
|
|
|
|
for (i = 0; i < npages; i++) {
|
|
|
|
if (m[i].valid != VM_PAGE_BITS_ALL)
|
|
|
|
return (FALSE);
|
|
|
|
}
|
|
|
|
return (TRUE);
|
|
|
|
}
|
|
|
|
|
1998-12-23 01:52:47 +00:00
|
|
|
/*
|
2012-10-03 05:06:45 +00:00
|
|
|
* Set the page's dirty bits if the page is modified.
|
1998-12-23 01:52:47 +00:00
|
|
|
*/
|
1995-01-09 16:06:02 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_page_test_dirty(vm_page_t m)
|
1995-01-09 16:06:02 +00:00
|
|
|
{
|
2010-05-18 16:40:29 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2010-05-18 16:40:29 +00:00
|
|
|
if (m->dirty != VM_PAGE_BITS_ALL && pmap_is_modified(m))
|
1999-01-24 06:00:31 +00:00
|
|
|
vm_page_dirty(m);
|
1995-01-09 16:06:02 +00:00
|
|
|
}
|
|
|
|
|
2011-11-29 13:07:32 +00:00
|
|
|
void
|
|
|
|
vm_page_lock_KBI(vm_page_t m, const char *file, int line)
|
|
|
|
{
|
|
|
|
|
|
|
|
mtx_lock_flags_(vm_page_lockptr(m), 0, file, line);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
vm_page_unlock_KBI(vm_page_t m, const char *file, int line)
|
|
|
|
{
|
|
|
|
|
|
|
|
mtx_unlock_flags_(vm_page_lockptr(m), 0, file, line);
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
vm_page_trylock_KBI(vm_page_t m, const char *file, int line)
|
|
|
|
{
|
|
|
|
|
|
|
|
return (mtx_trylock_flags_(vm_page_lockptr(m), 0, file, line));
|
|
|
|
}
|
|
|
|
|
|
|
|
#if defined(INVARIANTS) || defined(INVARIANT_SUPPORT)
|
2013-06-03 01:22:54 +00:00
|
|
|
void
|
|
|
|
vm_page_assert_locked_KBI(vm_page_t m, const char *file, int line)
|
|
|
|
{
|
|
|
|
|
|
|
|
vm_page_lock_assert_KBI(m, MA_OWNED, file, line);
|
|
|
|
}
|
|
|
|
|
2011-11-29 13:07:32 +00:00
|
|
|
void
|
|
|
|
vm_page_lock_assert_KBI(vm_page_t m, int a, const char *file, int line)
|
|
|
|
{
|
|
|
|
|
|
|
|
mtx_assert_(vm_page_lockptr(m), a, file, line);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2011-06-11 20:15:19 +00:00
|
|
|
#ifdef INVARIANTS
|
|
|
|
void
|
|
|
|
vm_page_object_lock_assert(vm_page_t m)
|
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Certain of the page's fields may only be modified by the
|
2013-08-09 11:11:11 +00:00
|
|
|
* holder of the containing object's lock or the exclusive busy
|
|
|
|
* holder. Unfortunately, the holder of the write busy is
|
|
|
|
* not recorded, and thus cannot be checked here.
|
2011-06-11 20:15:19 +00:00
|
|
|
*/
|
2013-08-09 11:11:11 +00:00
|
|
|
if (m->object != NULL && !vm_page_xbusied(m))
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(m->object);
|
2011-06-11 20:15:19 +00:00
|
|
|
}
|
2014-08-09 05:00:34 +00:00
|
|
|
|
|
|
|
void
|
|
|
|
vm_page_assert_pga_writeable(vm_page_t m, uint8_t bits)
|
|
|
|
{
|
|
|
|
|
|
|
|
if ((bits & PGA_WRITEABLE) == 0)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The PGA_WRITEABLE flag can only be set if the page is
|
|
|
|
* managed, is exclusively busied or the object is locked.
|
|
|
|
* Currently, this flag is only set by pmap_enter().
|
|
|
|
*/
|
|
|
|
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
|
|
|
|
("PGA_WRITEABLE on unmanaged page"));
|
|
|
|
if (!vm_page_xbusied(m))
|
|
|
|
VM_OBJECT_ASSERT_LOCKED(m->object);
|
|
|
|
}
|
2011-06-11 20:15:19 +00:00
|
|
|
#endif
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
#include "opt_ddb.h"
|
1995-04-16 09:59:16 +00:00
|
|
|
#ifdef DDB
|
1996-09-14 11:54:59 +00:00
|
|
|
#include <sys/kernel.h>
|
|
|
|
|
|
|
|
#include <ddb/ddb.h>
|
|
|
|
|
|
|
|
DB_SHOW_COMMAND(page, vm_page_print_page_info)
|
1995-01-09 16:06:02 +00:00
|
|
|
{
|
2014-03-22 10:26:09 +00:00
|
|
|
db_printf("vm_cnt.v_free_count: %d\n", vm_cnt.v_free_count);
|
|
|
|
db_printf("vm_cnt.v_cache_count: %d\n", vm_cnt.v_cache_count);
|
|
|
|
db_printf("vm_cnt.v_inactive_count: %d\n", vm_cnt.v_inactive_count);
|
|
|
|
db_printf("vm_cnt.v_active_count: %d\n", vm_cnt.v_active_count);
|
|
|
|
db_printf("vm_cnt.v_wire_count: %d\n", vm_cnt.v_wire_count);
|
|
|
|
db_printf("vm_cnt.v_free_reserved: %d\n", vm_cnt.v_free_reserved);
|
|
|
|
db_printf("vm_cnt.v_free_min: %d\n", vm_cnt.v_free_min);
|
|
|
|
db_printf("vm_cnt.v_free_target: %d\n", vm_cnt.v_free_target);
|
|
|
|
db_printf("vm_cnt.v_cache_min: %d\n", vm_cnt.v_cache_min);
|
|
|
|
db_printf("vm_cnt.v_inactive_target: %d\n", vm_cnt.v_inactive_target);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1996-09-08 20:44:49 +00:00
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
DB_SHOW_COMMAND(pageq, vm_page_print_pageq_info)
|
1996-09-08 20:44:49 +00:00
|
|
|
{
|
Split the pagequeues per NUMA domain, and split the pagedaemon process
into threads, each processing the queue of a single domain. The structure
of the pagedaemons and queues is kept intact; most of the changes come
from the need for code to find the owning page queue for a given page,
calculated from the segment containing the page.
The tie between NUMA domain and pagedaemon thread/pagequeue split is
rather arbitrary, the multithreaded daemon could be allowed for the
single-domain machines, or one domain might be split into several page
domains, to further increase concurrency.
Right now, each pagedaemon thread tries to reach the global target,
precalculated at the start of the pass. This is not optimal, since it
could cause excessive page deactivation and freeing. The code should
be changed to re-check the global page deficit state in the loop after
some number of iterations.
The pagedaemons reach a quorum before starting the OOM handling, since one
thread's inability to meet the target is normal for split queues. Only
when all pagedaemons fail to produce enough reusable pages is OOM
started by a single selected thread.
Laundering is modified to take into account the segment layout with
regard to the region for which cleaning is performed.
Based on the preliminary patch by jeff, sponsored by EMC / Isilon
Storage Division.
Reviewed by: alc
Tested by: pho
Sponsored by: The FreeBSD Foundation
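A minimal userspace sketch of the split described above, with all names and numbers invented for illustration: each domain keeps its own counts and queues, while every per-domain daemon pass measures the shortage against the single global target, so one domain falling short is expected and only a collective failure would lead to OOM handling.

#include <stdio.h>

#define NDOMAINS 2			/* assumed domain count for this sketch */

struct dom {				/* hypothetical per-domain state */
	int free_count;
	int inactive_count;
};

int
main(void)
{
	struct dom doms[NDOMAINS] = { { 100, 4000 }, { 900, 2500 } };
	int global_target = 1500;	/* hypothetical global free target */
	int free_total = 0, shortage, d;

	for (d = 0; d < NDOMAINS; d++)
		free_total += doms[d].free_count;
	shortage = global_target - free_total;

	/* Each per-domain daemon works toward the same global shortage. */
	for (d = 0; d < NDOMAINS; d++)
		printf("dom %d: free %d, inactive %d, global shortage %d\n",
		    d, doms[d].free_count, doms[d].inactive_count, shortage);
	return (0);
}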
2013-08-07 16:36:38 +00:00
|
|
|
int dom;
|
|
|
|
|
|
|
|
db_printf("pq_free %d pq_cache %d\n",
|
2014-03-22 10:26:09 +00:00
|
|
|
vm_cnt.v_free_count, vm_cnt.v_cache_count);
|
2013-08-07 16:36:38 +00:00
|
|
|
for (dom = 0; dom < vm_ndomains; dom++) {
|
|
|
|
db_printf(
|
|
|
|
"dom %d page_cnt %d free %d pq_act %d pq_inact %d pass %d\n",
|
|
|
|
dom,
|
|
|
|
vm_dom[dom].vmd_page_count,
|
|
|
|
vm_dom[dom].vmd_free_count,
|
|
|
|
vm_dom[dom].vmd_pagequeues[PQ_ACTIVE].pq_cnt,
|
|
|
|
vm_dom[dom].vmd_pagequeues[PQ_INACTIVE].pq_cnt,
|
|
|
|
vm_dom[dom].vmd_pass);
|
|
|
|
}
|
1996-09-08 20:44:49 +00:00
|
|
|
}
|
2013-05-21 11:04:00 +00:00
|
|
|
|
|
|
|
DB_SHOW_COMMAND(pginfo, vm_page_print_pginfo)
|
|
|
|
{
|
|
|
|
vm_page_t m;
|
|
|
|
boolean_t phys;
|
|
|
|
|
|
|
|
if (!have_addr) {
|
|
|
|
db_printf("show pginfo addr\n");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
phys = strchr(modif, 'p') != NULL;
|
|
|
|
if (phys)
|
|
|
|
m = PHYS_TO_VM_PAGE(addr);
|
|
|
|
else
|
|
|
|
m = (vm_page_t)addr;
|
|
|
|
db_printf(
|
|
|
|
"page %p obj %p pidx 0x%jx phys 0x%jx q %d hold %d wire %d\n"
|
2013-08-09 11:11:11 +00:00
|
|
|
" af 0x%x of 0x%x f 0x%x act %d busy %x valid 0x%x dirty 0x%x\n",
|
2013-05-21 11:04:00 +00:00
|
|
|
m, m->object, (uintmax_t)m->pindex, (uintmax_t)m->phys_addr,
|
|
|
|
m->queue, m->hold_count, m->wire_count, m->aflags, m->oflags,
|
2013-08-09 11:11:11 +00:00
|
|
|
m->flags, m->act_count, m->busy_lock, m->valid, m->dirty);
|
2013-05-21 11:04:00 +00:00
|
|
|
}
|
1996-09-14 11:54:59 +00:00
|
|
|
#endif /* DDB */
|