/*-
 * SPDX-License-Identifier: (BSD-3-Clause AND MIT-CMU)
 *
 * Copyright (c) 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * The Mach Operating System project at Carnegie-Mellon University.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	from: @(#)vm_object.c	8.5 (Berkeley) 3/22/94
 *
 *
 * Copyright (c) 1987, 1990 Carnegie-Mellon University.
 * All rights reserved.
 *
 * Authors: Avadis Tevanian, Jr., Michael Wayne Young
 *
 * Permission to use, copy, modify and distribute this software and
 * its documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
 * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie the
 * rights to redistribute these changes.
 */

/*
 *	Virtual memory object module.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_vm.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/cpuset.h>
#include <sys/lock.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/kernel.h>
#include <sys/pctrie.h>
#include <sys/sysctl.h>
#include <sys/mutex.h>
#include <sys/proc.h>		/* for curproc, pageproc */
#include <sys/socket.h>
#include <sys/resourcevar.h>
#include <sys/rwlock.h>
#include <sys/user.h>
#include <sys/vnode.h>
#include <sys/vmmeter.h>
#include <sys/sx.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/vm_pageout.h>
#include <vm/vm_pager.h>
#include <vm/vm_phys.h>
#include <vm/vm_pagequeue.h>
#include <vm/swap_pager.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/vm_radix.h>
#include <vm/vm_reserv.h>
#include <vm/uma.h>

static int old_msync;
SYSCTL_INT(_vm, OID_AUTO, old_msync, CTLFLAG_RW, &old_msync, 0,
    "Use old (insecure) msync behavior");

static int	vm_object_page_collect_flush(vm_object_t object, vm_page_t p,
		    int pagerflags, int flags, boolean_t *clearobjflags,
		    boolean_t *eio);
static boolean_t vm_object_page_remove_write(vm_page_t p, int flags,
		    boolean_t *clearobjflags);
static void	vm_object_qcollapse(vm_object_t object);
static void	vm_object_vndeallocate(vm_object_t object);

/*
 *	Virtual memory objects maintain the actual data
 *	associated with allocated virtual memory.  A given
 *	page of memory exists within exactly one object.
 *
 *	An object is only deallocated when all "references"
 *	are given up.  Only one "reference" to a given
 *	region of an object should be writeable.
 *
 *	Associated with each object is a list of all resident
 *	memory pages belonging to that object; this list is
 *	maintained by the "vm_page" module, and locked by the object's
 *	lock.
 *
 *	Each object also records a "pager" routine which is
 *	used to retrieve (and store) pages to the proper backing
 *	storage.  In addition, objects may be backed by other
 *	objects from which they were virtual-copied.
 *
 *	The only items within the object structure which are
 *	modified after time of creation are:
 *		reference count		locked by object's lock
 *		pager routine		locked by object's lock
 *
 */
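
/*
 * Illustrative sketch, not part of the original source: the lifecycle
 * implied by the comment above for a one-page anonymous object, using
 * the allocation, reference, and deallocation routines from this file:
 *
 *	vm_object_t obj;
 *
 *	obj = vm_object_allocate(OBJT_DEFAULT, 1);	(ref_count == 1)
 *	vm_object_reference(obj);			(ref_count == 2)
 *	...
 *	vm_object_deallocate(obj);	drops one reference
 *	vm_object_deallocate(obj);	last reference: object is terminated
 */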

struct object_q vm_object_list;
struct mtx vm_object_list_mtx;	/* lock for object list and count */

struct vm_object kernel_object_store;

static SYSCTL_NODE(_vm_stats, OID_AUTO, object, CTLFLAG_RD, 0,
    "VM object stats");

static counter_u64_t object_collapses = EARLY_COUNTER;
SYSCTL_COUNTER_U64(_vm_stats_object, OID_AUTO, collapses, CTLFLAG_RD,
    &object_collapses,
    "VM object collapses");

static counter_u64_t object_bypasses = EARLY_COUNTER;
SYSCTL_COUNTER_U64(_vm_stats_object, OID_AUTO, bypasses, CTLFLAG_RD,
    &object_bypasses,
    "VM object bypasses");

/*
 * The counters above start as EARLY_COUNTER placeholders; allocate the
 * real per-CPU counters once the CPU layer is up.
 */
static void
counter_startup(void)
{

	object_collapses = counter_u64_alloc(M_WAITOK);
	object_bypasses = counter_u64_alloc(M_WAITOK);
}
SYSINIT(object_counters, SI_SUB_CPU, SI_ORDER_ANY, counter_startup, NULL);

static uma_zone_t obj_zone;

static int vm_object_zinit(void *mem, int size, int flags);

#ifdef INVARIANTS
static void vm_object_zdtor(void *mem, int size, void *arg);

static void
vm_object_zdtor(void *mem, int size, void *arg)
{
	vm_object_t object;

	object = (vm_object_t)mem;
	KASSERT(object->ref_count == 0,
	    ("object %p ref_count = %d", object, object->ref_count));
	KASSERT(TAILQ_EMPTY(&object->memq),
	    ("object %p has resident pages in its memq", object));
	KASSERT(vm_radix_is_empty(&object->rtree),
	    ("object %p has resident pages in its trie", object));
#if VM_NRESERVLEVEL > 0
	KASSERT(LIST_EMPTY(&object->rvq),
	    ("object %p has reservations",
	    object));
#endif
	KASSERT(object->paging_in_progress == 0,
	    ("object %p paging_in_progress = %d",
	    object, object->paging_in_progress));
	KASSERT(object->resident_page_count == 0,
	    ("object %p resident_page_count = %d",
	    object, object->resident_page_count));
	KASSERT(object->shadow_count == 0,
	    ("object %p shadow_count = %d",
	    object, object->shadow_count));
	KASSERT(object->type == OBJT_DEAD,
	    ("object %p has non-dead type %d",
	    object, object->type));
}
#endif

static int
vm_object_zinit(void *mem, int size, int flags)
{
	vm_object_t object;

	object = (vm_object_t)mem;
	rw_init_flags(&object->lock, "vm object", RW_DUPOK | RW_NEW);

	/* These are true for any object that has been freed */
	object->type = OBJT_DEAD;
	object->ref_count = 0;
	vm_radix_init(&object->rtree);
	object->paging_in_progress = 0;
	object->resident_page_count = 0;
	object->shadow_count = 0;
	object->flags = OBJ_DEAD;

	mtx_lock(&vm_object_list_mtx);
	TAILQ_INSERT_TAIL(&vm_object_list, object, object_list);
	mtx_unlock(&vm_object_list_mtx);
	return (0);
}

static void
_vm_object_allocate(objtype_t type, vm_pindex_t size, vm_object_t object)
{

	TAILQ_INIT(&object->memq);
	LIST_INIT(&object->shadow_head);

	object->type = type;
	if (type == OBJT_SWAP)
		pctrie_init(&object->un_pager.swp.swp_blks);

	/*
	 * Ensure that swap_pager_swapoff() iteration over object_list
	 * sees up to date type and pctrie head if it observed
	 * non-dead object.
	 */
	atomic_thread_fence_rel();

	switch (type) {
	case OBJT_DEAD:
		panic("_vm_object_allocate: can't create OBJT_DEAD");
	case OBJT_DEFAULT:
	case OBJT_SWAP:
		object->flags = OBJ_ONEMAPPING;
		break;
	case OBJT_DEVICE:
	case OBJT_SG:
		object->flags = OBJ_FICTITIOUS | OBJ_UNMANAGED;
		break;
	case OBJT_MGTDEVICE:
		object->flags = OBJ_FICTITIOUS;
		break;
	case OBJT_PHYS:
		object->flags = OBJ_UNMANAGED;
		break;
	case OBJT_VNODE:
		object->flags = 0;
		break;
	default:
		panic("_vm_object_allocate: type %d is undefined", type);
	}
	object->size = size;
	object->generation = 1;
	object->ref_count = 1;
	object->memattr = VM_MEMATTR_DEFAULT;
	object->cred = NULL;
	object->charge = 0;
	object->handle = NULL;
	object->backing_object = NULL;
	object->backing_object_offset = (vm_ooffset_t) 0;
#if VM_NRESERVLEVEL > 0
	LIST_INIT(&object->rvq);
#endif
	umtx_shm_object_init(object);
}
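
/*
 * Illustrative sketch, an assumption rather than code from this file:
 * the release fence in _vm_object_allocate() pairs with an acquire on
 * the reader side, so a lockless scanner of vm_object_list would read
 * the type before trusting the pager metadata, roughly:
 *
 *	objtype_t type;
 *
 *	type = object->type;		(observe a non-dead type)
 *	atomic_thread_fence_acq();	(pairs with the release fence)
 *	if (type == OBJT_SWAP)
 *		the swp_blks pctrie head is now safe to inspect
 */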

/*
 *	vm_object_init:
 *
 *	Initialize the VM objects module.
 */
void
vm_object_init(void)
{
	TAILQ_INIT(&vm_object_list);
	mtx_init(&vm_object_list_mtx, "vm object_list", NULL, MTX_DEF);

	rw_init(&kernel_object->lock, "kernel vm object");
	_vm_object_allocate(OBJT_PHYS, atop(VM_MAX_KERNEL_ADDRESS -
	    VM_MIN_KERNEL_ADDRESS), kernel_object);
#if VM_NRESERVLEVEL > 0
	kernel_object->flags |= OBJ_COLORED;
	kernel_object->pg_color = (u_short)atop(VM_MIN_KERNEL_ADDRESS);
#endif

	/*
	 * The lock portion of struct vm_object must be type stable due
	 * to vm_pageout_fallback_object_lock locking a vm object
	 * without holding any references to it.
	 */
	obj_zone = uma_zcreate("VM OBJECT", sizeof (struct vm_object), NULL,
#ifdef INVARIANTS
	    vm_object_zdtor,
#else
	    NULL,
#endif
	    vm_object_zinit, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);

	vm_radix_zinit();
}

void
vm_object_clear_flag(vm_object_t object, u_short bits)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	object->flags &= ~bits;
}

/*
 *	Sets the default memory attribute for the specified object.  Pages
 *	that are allocated to this object are by default assigned this memory
 *	attribute.
 *
 *	Presently, this function must be called before any pages are allocated
 *	to the object.  In the future, this requirement may be relaxed for
 *	"default" and "swap" objects.
 */
int
vm_object_set_memattr(vm_object_t object, vm_memattr_t memattr)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	switch (object->type) {
	case OBJT_DEFAULT:
	case OBJT_DEVICE:
	case OBJT_MGTDEVICE:
	case OBJT_PHYS:
	case OBJT_SG:
	case OBJT_SWAP:
	case OBJT_VNODE:
		if (!TAILQ_EMPTY(&object->memq))
			return (KERN_FAILURE);
		break;
	case OBJT_DEAD:
		return (KERN_INVALID_ARGUMENT);
	default:
		panic("vm_object_set_memattr: object %p is of undefined type",
		    object);
	}
	object->memattr = memattr;
	return (KERN_SUCCESS);
}
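
/*
 * Illustrative sketch of a hypothetical caller, not taken from this
 * file: a driver wanting non-default caching would set the attribute
 * under the object write lock, before any page is inserted:
 *
 *	vm_object_t obj;
 *	int rv;
 *
 *	obj = vm_object_allocate(OBJT_PHYS, npages);
 *	VM_OBJECT_WLOCK(obj);
 *	rv = vm_object_set_memattr(obj, VM_MEMATTR_UNCACHEABLE);
 *	VM_OBJECT_WUNLOCK(obj);
 *	if (rv != KERN_SUCCESS)
 *		handle the failure before using the object
 */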

void
vm_object_pip_add(vm_object_t object, short i)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	object->paging_in_progress += i;
}

void
vm_object_pip_subtract(vm_object_t object, short i)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	object->paging_in_progress -= i;
}

void
vm_object_pip_wakeup(vm_object_t object)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	object->paging_in_progress--;
	if ((object->flags & OBJ_PIPWNT) && object->paging_in_progress == 0) {
		vm_object_clear_flag(object, OBJ_PIPWNT);
		wakeup(object);
	}
}

void
vm_object_pip_wakeupn(vm_object_t object, short i)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	if (i)
		object->paging_in_progress -= i;
	if ((object->flags & OBJ_PIPWNT) && object->paging_in_progress == 0) {
		vm_object_clear_flag(object, OBJ_PIPWNT);
		wakeup(object);
	}
}

void
vm_object_pip_wait(vm_object_t object, char *waitid)
{

	VM_OBJECT_ASSERT_WLOCKED(object);
	while (object->paging_in_progress) {
		object->flags |= OBJ_PIPWNT;
		VM_OBJECT_SLEEP(object, object, PVM, waitid, 0);
	}
}
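
/*
 * Illustrative sketch, not from this file: a paging operation brackets
 * its work with the paging-in-progress counter so that a teardown path
 * can drain in-flight I/O with vm_object_pip_wait():
 *
 *	VM_OBJECT_WLOCK(object);
 *	vm_object_pip_add(object, 1);
 *	VM_OBJECT_WUNLOCK(object);
 *	... issue and complete the pageout ...
 *	VM_OBJECT_WLOCK(object);
 *	vm_object_pip_wakeup(object);	(decrement and wake any waiter)
 *	VM_OBJECT_WUNLOCK(object);
 */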
|
|
|
|
|
At long last, commit the zero copy sockets code.
MAKEDEV: Add MAKEDEV glue for the ti(4) device nodes.
ti.4: Update the ti(4) man page to include information on the
TI_JUMBO_HDRSPLIT and TI_PRIVATE_JUMBOS kernel options,
and also include information about the new character
device interface and the associated ioctls.
man9/Makefile: Add jumbo.9 and zero_copy.9 man pages and associated
links.
jumbo.9: New man page describing the jumbo buffer allocator
interface and operation.
zero_copy.9: New man page describing the general characteristics of
the zero copy send and receive code, and what an
application author should do to take advantage of the
zero copy functionality.
/*
 *	vm_object_allocate:
 *
 *	Returns a new object with the given size.
 */
vm_object_t
vm_object_allocate(objtype_t type, vm_pindex_t size)
{
	vm_object_t object;

	object = (vm_object_t)uma_zalloc(obj_zone, M_WAITOK);
	_vm_object_allocate(type, size, object);
	return (object);
}
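
/*
 * Illustrative sketch, not part of the original file: the zero-copy
 * sockets work described a vm_object_allocate_wait() variant that let
 * the caller choose the uma_zalloc() flag, so that an object could be
 * allocated with M_NOWAIT while a mutex is held (without WITNESS
 * warnings).  A minimal version along those lines might look like the
 * following; the name vm_object_allocate_flags is hypothetical.
 */
static vm_object_t
vm_object_allocate_flags(objtype_t type, vm_pindex_t size, int flags)
{
	vm_object_t object;

	/* With M_NOWAIT the zone allocation may fail. */
	object = (vm_object_t)uma_zalloc(obj_zone, flags);
	if (object == NULL)
		return (NULL);
	_vm_object_allocate(type, size, object);
	return (object);
}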

/*
 *	vm_object_reference:
 *
 *	Gets another reference to the given object.  Note: OBJ_DEAD
 *	objects can be referenced during final cleaning.
 */
void
vm_object_reference(vm_object_t object)
{

	if (object == NULL)
		return;
	VM_OBJECT_WLOCK(object);
	vm_object_reference_locked(object);
	VM_OBJECT_WUNLOCK(object);
}
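
/*
 * Illustrative sketch, not from the original file: how a caller is
 * expected to pair the reference primitives above with
 * vm_object_deallocate() below.  The helper name is hypothetical;
 * "obj" is any live object the caller already tracks, passed unlocked.
 */
static void
vm_object_hold_example(vm_object_t obj)
{

	vm_object_reference(obj);	/* called with obj unlocked */
	/* ... obj cannot be terminated while the reference is held ... */
	vm_object_deallocate(obj);	/* may relinquish obj's storage */
}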

/*
 *	vm_object_reference_locked:
 *
 *	Gets another reference to the given object.
 *
 *	The object must be locked.
 */
void
vm_object_reference_locked(vm_object_t object)
{
	struct vnode *vp;

	VM_OBJECT_ASSERT_WLOCKED(object);
	object->ref_count++;
	if (object->type == OBJT_VNODE) {
		vp = object->handle;
		vref(vp);
	}
}

/*
 * Handle deallocating an object of type OBJT_VNODE.
 */
static void
vm_object_vndeallocate(vm_object_t object)
{
	struct vnode *vp = (struct vnode *) object->handle;

	VM_OBJECT_ASSERT_WLOCKED(object);
	KASSERT(object->type == OBJT_VNODE,
	    ("vm_object_vndeallocate: not a vnode object"));
	KASSERT(vp != NULL, ("vm_object_vndeallocate: missing vp"));
#ifdef INVARIANTS
	if (object->ref_count == 0) {
		vn_printf(vp, "vm_object_vndeallocate ");
		panic("vm_object_vndeallocate: bad object reference count");
	}
#endif
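
	/*
	 * Unless the kern.ipc.umtx_vnode_persistent sysctl keeps it
	 * alive, tear down the umtx shared-memory state (shared robust
	 * mutexes backed by this vnode's pages) together with the last
	 * object reference.
	 */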
	if (!umtx_shm_vnobj_persistent && object->ref_count == 1)
		umtx_shm_object_terminated(object);

	/*
	 * The test for the vnode's VV_TEXT flag does not need a bypass
	 * to reach the right vnode there, since vp is obtained directly
	 * from object->handle.
	 */
	if (object->ref_count > 1 || (vp->v_vflag & VV_TEXT) == 0) {
		object->ref_count--;
		VM_OBJECT_WUNLOCK(object);
		/* vrele may need the vnode lock. */
		vrele(vp);
	} else {
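		/*
		 * Dropping the last reference to a VV_TEXT vnode: the
		 * vnode lock must be taken before the object lock, so
		 * hold vp with vhold()/vdrop() while the object lock
		 * is given up to acquire the vnode lock.
		 */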
		vhold(vp);
		VM_OBJECT_WUNLOCK(object);
		vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
		vdrop(vp);
		VM_OBJECT_WLOCK(object);
		object->ref_count--;
		if (object->type == OBJT_DEAD) {
			VM_OBJECT_WUNLOCK(object);
			VOP_UNLOCK(vp, 0);
		} else {
			if (object->ref_count == 0)
				VOP_UNSET_TEXT(vp);
			VM_OBJECT_WUNLOCK(object);
			vput(vp);
		}
	}
}

/*
 *	vm_object_deallocate:
 *
 *	Release a reference to the specified object,
 *	gained either through a vm_object_allocate
 *	or a vm_object_reference call.  When all references
 *	are gone, storage associated with this object
 *	may be relinquished.
 *
 *	No object may be locked.
 */
void
vm_object_deallocate(vm_object_t object)
{
	vm_object_t temp;
	struct vnode *vp;

	while (object != NULL) {
		VM_OBJECT_WLOCK(object);
		if (object->type == OBJT_VNODE) {
			vm_object_vndeallocate(object);
			return;
		}

		KASSERT(object->ref_count != 0,
		    ("vm_object_deallocate: object deallocated too many times: %d", object->type));

		/*
		 * If the reference count goes to 0 we start calling
		 * vm_object_terminate() on the object chain.
		 * A ref count of 1 may be a special case depending on the
		 * shadow count being 0 or 1.
		 */
		object->ref_count--;
		if (object->ref_count > 1) {
			VM_OBJECT_WUNLOCK(object);
			return;
		} else if (object->ref_count == 1) {
			if (object->type == OBJT_SWAP &&
			    (object->flags & OBJ_TMPFS) != 0) {
				vp = object->un_pager.swp.swp_tmpfs;
				vhold(vp);
				VM_OBJECT_WUNLOCK(object);
				vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
				VM_OBJECT_WLOCK(object);
				if (object->type == OBJT_DEAD ||
				    object->ref_count != 1) {
					VM_OBJECT_WUNLOCK(object);
					VOP_UNLOCK(vp, 0);
					vdrop(vp);
					return;
				}
				if ((object->flags & OBJ_TMPFS) != 0)
					VOP_UNSET_TEXT(vp);
				VOP_UNLOCK(vp, 0);
				vdrop(vp);
			}
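
			/*
			 * Only anonymous default and swap objects may
			 * have OBJ_ONEMAPPING set; in particular, a
			 * tmpfs node's object (OBJ_TMPFS_NODE remains
			 * set even after its vnode is reclaimed) and
			 * device objects with fictitious pages must
			 * never get it.
			 */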
			if (object->shadow_count == 0 &&
			    object->handle == NULL &&
			    (object->type == OBJT_DEFAULT ||
			    (object->type == OBJT_SWAP &&
			    (object->flags & OBJ_TMPFS_NODE) == 0))) {
				vm_object_set_flag(object, OBJ_ONEMAPPING);
			} else if ((object->shadow_count == 1) &&
			    (object->handle == NULL) &&
			    (object->type == OBJT_DEFAULT ||
			    object->type == OBJT_SWAP)) {
				vm_object_t robject;

				robject = LIST_FIRST(&object->shadow_head);
				KASSERT(robject != NULL,
				    ("vm_object_deallocate: ref_count: %d, shadow_count: %d",
				    object->ref_count,
				    object->shadow_count));
				KASSERT((robject->flags & OBJ_TMPFS_NODE) == 0,
				    ("shadowed tmpfs v_object %p", object));
				if (!VM_OBJECT_TRYWLOCK(robject)) {
					/*
					 * Avoid a potential deadlock.
					 */
					object->ref_count++;
					VM_OBJECT_WUNLOCK(object);
					/*
					 * More likely than not the thread
					 * holding robject's lock has lower
					 * priority than the current thread.
					 * Let the lower priority thread run.
					 */
					pause("vmo_de", 1);
					continue;
				}
				/*
				 * Collapse object into its shadow unless its
				 * shadow is dead.  In that case, object will
				 * be deallocated by the thread that is
				 * deallocating its shadow.
				 */
				if ((robject->flags & OBJ_DEAD) == 0 &&
				    (robject->handle == NULL) &&
				    (robject->type == OBJT_DEFAULT ||
				    robject->type == OBJT_SWAP)) {

					robject->ref_count++;
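					/*
					 * Let pending paging activity on
					 * either object drain before
					 * collapsing.  If, after a sleep,
					 * robject is still backed by
					 * object, re-take object's lock
					 * and check again.
					 */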
retry:
					if (robject->paging_in_progress) {
						VM_OBJECT_WUNLOCK(object);
						vm_object_pip_wait(robject,
						    "objde1");
						temp = robject->backing_object;
						if (object == temp) {
							VM_OBJECT_WLOCK(object);
							goto retry;
						}
					} else if (object->paging_in_progress) {
						VM_OBJECT_WUNLOCK(robject);
						object->flags |= OBJ_PIPWNT;
						VM_OBJECT_SLEEP(object, object,
						    PDROP | PVM, "objde2", 0);
						VM_OBJECT_WLOCK(robject);
						temp = robject->backing_object;
						if (object == temp) {
							VM_OBJECT_WLOCK(object);
							goto retry;
						}
					} else
						VM_OBJECT_WUNLOCK(object);

					if (robject->ref_count == 1) {
|
Make our v_usecount vnode reference count work identically to the
original BSD code. The association between the vnode and the vm_object
no longer includes reference counts. The major difference is that
vm_object's are no longer freed gratuitiously from the vnode, and so
once an object is created for the vnode, it will last as long as the
vnode does.
When a vnode object reference count is incremented, then the underlying
vnode reference count is incremented also. The two "objects" are now
more intimately related, and so the interactions are now much less
complex.
When vnodes are now normally placed onto the free queue with an object still
attached. The rundown of the object happens at vnode rundown time, and
happens with exactly the same filesystem semantics of the original VFS
code. There is absolutely no need for vnode_pager_uncache and other
travesties like that anymore.
A side-effect of these changes is that SMP locking should be much simpler,
the I/O copyin/copyout optimizations work, NFS should be more ponderable,
and further work on layered filesystems should be less frustrating, because
of the totally coherent management of the vnode objects and vnodes.
Please be careful with your system while running this code, but I would
greatly appreciate feedback as soon a reasonably possible.
1998-01-06 05:26:17 +00:00
|
|
|
robject->ref_count--;
|
1995-02-20 14:21:58 +00:00
|
|
|
object = robject;
|
1998-01-06 05:26:17 +00:00
|
|
|
goto doterm;
|
1995-01-09 16:06:02 +00:00
|
|
|
}
|
1998-01-06 05:26:17 +00:00
|
|
|
object = robject;
|
|
|
|
vm_object_collapse(object);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(object);
|
1998-01-06 05:26:17 +00:00
|
|
|
continue;
|
1995-01-05 04:30:40 +00:00
|
|
|
}
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(robject);
|
1995-01-05 04:30:40 +00:00
|
|
|
}
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(object);
|
2004-01-18 03:44:14 +00:00
|
|
|
return;
|
1998-01-06 05:26:17 +00:00
|
|
|
}
|
|
|
|
doterm:
|
2016-02-28 17:52:33 +00:00
|
|
|
umtx_shm_object_terminated(object);
|
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!
Much needed overhaul of the VM system. Included in this first round of
changes:
1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
haspage, and sync operations are supported. The haspage interface now
provides information about clusterability. All pager routines now take
struct vm_object's instead of "pagers".
2) Improved data structures. In the previous paradigm, there was constant
confusion caused by pagers being both a data structure ("allocate a
pager") and a collection of routines. The idea of a pager structure has
essentially been eliminated. Objects now have types, and this type is
used to index the appropriate pager. In most cases, items in the pager
structure were duplicated in the object data structure and thus were
unnecessary. In the few cases that remained, an un_pager structure union
was created in the object to contain these items.
3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
be removed. For instance, vm_object_enter(), vm_object_lookup(),
vm_object_remove(), and the associated object hash list were some of the
things that were removed.
4) simple_lock's removed. Discussion with several people reveals that the
SMP locking primitives used in the VM system aren't likely the mechanism
that we'll be adopting. Even if it were, the locking that was in the code
was very inadequate and would have to be mostly re-done anyway. The
locking in a uni-processor kernel was a no-op but went a long way toward
making the code difficult to read and debug.
5) Places that attempted to kludge-up the fact that we don't have kernel
thread support have been fixed to reflect the reality that we are really
dealing with processes, not threads. The VM system didn't have complete
thread support, so the comments and mis-named routines were just wrong.
We now use tsleep and wakeup directly in the lock routines, for instance.
6) Where appropriate, the pagers have been improved, especially in the
pager_alloc routines. Most of the pager_allocs have been rewritten and
are now faster and easier to maintain.
7) The pagedaemon pageout clustering algorithm has been rewritten and
now tries harder to output an even number of pages before and after
the requested page. This is sort of the reverse of the ideal pagein
algorithm and should provide better overall performance.
8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
have been removed. Some other unnecessary casts have also been removed.
9) Some almost useless debugging code removed.
10) Terminology of shadow objects vs. backing objects straightened out.
The fact that the vm_object data structure essentially had this
backwards really confused things. The use of "shadow" and "backing
object" throughout the code is now internally consistent and correct
in the Mach terminology.
11) Several minor bug fixes, including one in the vm daemon that caused
0 RSS objects to not get purged as intended.
12) A "default pager" has now been created which cleans up the transition
of objects to the "swap" type. The previous checks throughout the code
for swp->pg_data != NULL were really ugly. This change also provides
the rudiments for future backing of "anonymous" memory by something
other than the swap pager (via the vnode pager, for example), and it
allows the decision about which of these pagers to use to be made
dynamically (although it will need some additional decision code to do
this, of course).
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
object" code has been removed. MAP_COPY was undocumented and non-
standard. It was furthermore broken in several ways which caused its
behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
continue to work correctly, but via the slightly different semantics
of MAP_PRIVATE.
14) (dyson) Sharing maps have been removed. Its marginal usefulness in a
threads design can be worked around in other ways. Both #12 and #13
were done to simplify the code and improve readability and maintain-
ability. (As were almost all of these changes.)
TODO:
1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
this will reduce the vnode pager to a mere fraction of its current size.
2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
information provided by the new haspage pager interface. This will
substantially reduce the overhead by eliminating a large number of
VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
improved to provide both a "behind" and "ahead" indication of
contiguousness.
3) Implement the extended features of pager_haspage in swap_pager_haspage().
It currently just says 0 pages ahead/behind.
4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
via a much more general mechanism that could also be used for disk
striping of regular filesystems.
5) Do something to improve the architecture of vm_object_collapse(). The
fact that it makes calls into the swap pager and knows too much about
how the swap pager operates really bothers me. It also doesn't allow
for collapsing of non-swap pager objects ("unnamed" objects backed by
other pagers).
1995-07-13 08:48:48 +00:00
|
|
|
temp = object->backing_object;
|
2003-04-27 20:07:57 +00:00
|
|
|
if (temp != NULL) {
|
2014-07-24 10:25:42 +00:00
|
|
|
KASSERT((object->flags & OBJ_TMPFS_NODE) == 0,
|
|
|
|
("shadowed tmpfs v_object 2 %p", object));
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WLOCK(temp);
|
2003-05-18 04:10:16 +00:00
|
|
|
LIST_REMOVE(object, shadow_list);
|
1998-01-06 05:26:17 +00:00
|
|
|
temp->shadow_count--;
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(temp);
|
1998-02-05 03:32:49 +00:00
|
|
|
object->backing_object = NULL;
|
1996-03-02 02:54:24 +00:00
|
|
|
}
|
2001-10-26 00:08:05 +00:00
|
|
|
/*
|
|
|
|
* Don't double-terminate, we could be in a termination
|
|
|
|
* recursion due to the terminate having to sync data
|
|
|
|
* to disk.
|
|
|
|
*/
|
|
|
|
if ((object->flags & OBJ_DEAD) == 0)
|
|
|
|
vm_object_terminate(object);
|
2003-04-26 19:36:19 +00:00
|
|
|
else
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(object);
|
1994-05-24 10:09:53 +00:00
|
|
|
object = temp;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
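For orientation, here is a hedged caller-side sketch of the deallocate path that ends above; drop_object_ref() is a hypothetical helper, not FreeBSD source. vm_object_deallocate() is entered with the object unlocked, takes the object lock itself, and the final reference may terminate and free the object.

static void
drop_object_ref(vm_object_t obj)
{

	/* Sketch only: obj may be freed outright by this call. */
	if (obj != NULL)
		vm_object_deallocate(obj);
}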
2008-05-20 19:05:43 +00:00
|
|
|
/*
|
|
|
|
* vm_object_destroy removes the object from the global object list
|
|
|
|
* and frees the space for the object.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_object_destroy(vm_object_t object)
|
|
|
|
{
|
|
|
|
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during their lifetime, and decrements the
charge for the proper uid when the region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
which is used to charge the appropriate uid when the allocation is performed
by the kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
/*
|
|
|
|
* Release the allocation charge.
|
|
|
|
*/
|
2010-12-02 17:37:16 +00:00
|
|
|
if (object->cred != NULL) {
|
|
|
|
swap_release_by_cred(object->charge, object->cred);
|
2009-06-23 20:45:22 +00:00
|
|
|
object->charge = 0;
|
2010-12-02 17:37:16 +00:00
|
|
|
crfree(object->cred);
|
|
|
|
object->cred = NULL;
|
2009-06-23 20:45:22 +00:00
|
|
|
}
|
|
|
|
|
2008-05-20 19:05:43 +00:00
|
|
|
/*
|
|
|
|
* Free the space for the object.
|
|
|
|
*/
|
|
|
|
uma_zfree(obj_zone, object);
|
|
|
|
}
|
|
|
|
|
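The cred/charge release in vm_object_destroy() above is the teardown half of the per-uid swap accounting described by the commit message earlier in this section. A minimal sketch of the setup half, assuming the usual reserve-then-attach order; charge_object() is hypothetical and error unwinding is elided:

static int
charge_object(vm_object_t obj, vm_ooffset_t size, struct ucred *cred)
{

	/* Reserve swap against the credential's limits first. */
	if (!swap_reserve_by_cred(size, cred))
		return (ENOMEM);	/* global or per-uid limit hit */
	crhold(cred);
	obj->cred = cred;	/* released by vm_object_destroy() */
	obj->charge = size;
	return (0);
}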
2017-08-16 08:49:11 +00:00
|
|
|
/*
|
|
|
|
* vm_object_terminate_pages removes any remaining pageable pages
|
|
|
|
* from the object and resets the object to an empty state.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
vm_object_terminate_pages(vm_object_t object)
|
|
|
|
{
|
|
|
|
vm_page_t p, p_next;
|
2018-04-24 21:15:54 +00:00
|
|
|
struct mtx *mtx;
|
2017-08-16 08:49:11 +00:00
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
|
|
|
|
2017-09-13 19:22:07 +00:00
|
|
|
mtx = NULL;
|
|
|
|
|
2017-08-16 08:49:11 +00:00
|
|
|
/*
|
|
|
|
* Free any remaining pageable pages. This also removes them from the
|
|
|
|
* paging queues. However, don't free wired pages, just remove them
|
|
|
|
* from the object. Rather than incrementally removing each page from
|
|
|
|
* the object, the page and object are reset to an empty state.
|
|
|
|
*/
|
|
|
|
TAILQ_FOREACH_SAFE(p, &object->memq, listq, p_next) {
|
|
|
|
vm_page_assert_unbusied(p);
|
2018-04-24 21:15:54 +00:00
|
|
|
if ((object->flags & OBJ_UNMANAGED) == 0)
|
2017-09-13 19:22:07 +00:00
|
|
|
/*
|
|
|
|
* vm_page_free_prep() only needs the page
|
|
|
|
* lock for managed pages.
|
|
|
|
*/
|
2018-04-24 21:15:54 +00:00
|
|
|
vm_page_change_lock(p, &mtx);
|
2017-08-16 08:49:11 +00:00
|
|
|
p->object = NULL;
|
2017-09-13 19:22:07 +00:00
|
|
|
if (p->wire_count != 0)
|
|
|
|
continue;
|
2018-04-24 21:15:54 +00:00
|
|
|
VM_CNT_INC(v_pfree);
|
|
|
|
vm_page_free(p);
|
2017-10-19 04:13:47 +00:00
|
|
|
}
|
2017-09-13 19:22:07 +00:00
|
|
|
if (mtx != NULL)
|
|
|
|
mtx_unlock(mtx);
|
|
|
|
|
2017-08-16 08:49:11 +00:00
|
|
|
/*
|
|
|
|
* If the object contained any pages, then reset it to an empty state.
|
|
|
|
* None of the object's fields, including "resident_page_count", were
|
|
|
|
* modified by the preceding loop.
|
|
|
|
*/
|
|
|
|
if (object->resident_page_count != 0) {
|
|
|
|
vm_radix_reclaim_allnodes(&object->rtree);
|
|
|
|
TAILQ_INIT(&object->memq);
|
|
|
|
object->resident_page_count = 0;
|
|
|
|
if (object->type == OBJT_VNODE)
|
|
|
|
vdrop(object->handle);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
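The page-freeing loop above depends on the _SAFE flavor of the list iterator, which caches the successor before the loop body runs, so the current element may be freed mid-walk. A self-contained userland illustration of the same idiom from <sys/queue.h>:

#include <sys/queue.h>
#include <stdlib.h>

struct item {
	TAILQ_ENTRY(item) link;
};
TAILQ_HEAD(itemq, item);

/* Drain a tail queue; "next" is saved before each body executes, so
 * freeing the current element cannot break the traversal. */
static void
drain(struct itemq *q)
{
	struct item *it, *next;

	TAILQ_FOREACH_SAFE(it, q, link, next) {
		TAILQ_REMOVE(q, it, link);
		free(it);
	}
}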
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_object_terminate actually destroys the specified object, freeing
|
|
|
|
* up all previously used resources.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
1999-01-21 08:29:12 +00:00
|
|
|
* This routine may block.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1998-01-06 05:26:17 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_object_terminate(vm_object_t object)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2001-07-04 16:20:28 +00:00
|
|
|
|
1998-01-06 05:26:17 +00:00
|
|
|
/*
|
|
|
|
* Make sure no one uses us.
|
|
|
|
*/
|
1998-08-24 08:39:39 +00:00
|
|
|
vm_object_set_flag(object, OBJ_DEAD);
|
1997-06-22 03:00:24 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-04-09 06:03:56 +00:00
|
|
|
* wait for the pageout daemon to be done with the object
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1998-02-25 03:56:15 +00:00
|
|
|
vm_object_pip_wait(object, "objtrm");
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1999-01-08 17:31:30 +00:00
|
|
|
KASSERT(!object->paging_in_progress,
|
|
|
|
("vm_object_terminate: pageout in progress"));
|
1994-05-25 09:21:21 +00:00
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Clean and free the pages, as appropriate. All references to the
|
|
|
|
* object are gone, so we don't need to lock it.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1995-07-13 08:48:48 +00:00
|
|
|
if (object->type == OBJT_VNODE) {
|
2003-05-04 19:23:40 +00:00
|
|
|
struct vnode *vp = (struct vnode *)object->handle;
|
1998-01-06 05:26:17 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Clean pages and flush buffers.
|
|
|
|
*/
|
This mega-commit is meant to fix numerous interrelated problems. There
has been some bitrot and incorrect assumptions in the vfs_bio code. These
problems have manifested themselves worse on NFS-type filesystems, but can
still affect local filesystems under certain circumstances. Most of
the problems have involved mmap consistency, and as a side-effect broke
the vfs.ioopt code. This code might have been committed separately, but
almost everything is interrelated.
1) Allow (pmap_object_init_pt) prefaulting of buffer-busy pages that
are fully valid.
2) Rather than deactivating erroneously read initial (header) pages in
kern_exec, we now free them.
3) Fix the rundown of non-VMIO buffers that are in an inconsistent
(missing vp) state.
4) Fix the disassociation of pages from buffers in brelse. The previous
code had rotted and was faulty in a couple of important circumstances.
5) Remove a gratuitous buffer wakeup in vfs_vmio_release.
6) Remove a crufty and currently unused cluster mechanism for VBLK
files in vfs_bio_awrite. When the code is functional, I'll add back
a cleaner version.
7) The page busy count wakeups associated with the buffer cache usage were
incorrectly cleaned up in a previous commit by me. Revert to the
original, correct version, but with a cleaner implementation.
8) The cluster read code now tries to keep data associated with buffers
more aggressively (without breaking the heuristics) when it is presumed
that the read data (buffers) will be soon needed.
9) Change to filesystem lockmgr locks so that they use LK_NOPAUSE. The
delay loop waiting is not useful for filesystem locks, due to the
length of the time intervals.
10) Correct and clean-up spec_getpages.
11) Implement a fully functional nfs_getpages, nfs_putpages.
12) Fix nfs_write so that modifications are coherent with the NFS data on
the server disk (at least as well as NFS seems to allow.)
13) Properly support MS_INVALIDATE on NFS.
14) Properly pass down MS_INVALIDATE to lower levels of the VM code from
vm_map_clean.
15) Better support the notion of pages being busy but valid, so that
fewer in-transit waits occur. (use p->busy more for pageouts instead
of PG_BUSY.) Since the page is fully valid, it is still usable for
reads.
16) It is possible (in error) for cached pages to be busy. Make the
page allocation code handle that case correctly. (It should probably
be a printf or panic, but I want the system to handle coding errors
robustly. I'll probably add a printf.)
17) Correct the design and usage of vm_page_sleep. It didn't handle
consistency problems very well, so make the design a little less
lofty. After vm_page_sleep, if it ever blocked, it is still important
to relookup the page (if the object generation count changed), and
verify it's status (always.)
18) In vm_pageout.c, vm_pageout_clean had rotted, so clean that up.
19) Push the page busy for writes and VM_PROT_READ into vm_pageout_flush.
20) Fix vm_pager_put_pages and its descendants to support an int flag
instead of a boolean, so that we can pass down the invalidate bit.
1998-03-07 21:37:31 +00:00
|
|
|
vm_object_page_clean(object, 0, 0, OBJPC_SYNC);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(object);
|
1998-01-06 05:26:17 +00:00
|
|
|
|
2008-10-10 21:23:50 +00:00
|
|
|
vinvalbuf(vp, V_SAVE, 0, 0);
|
2003-05-04 19:23:40 +00:00
|
|
|
|
2016-07-11 14:19:09 +00:00
|
|
|
BO_LOCK(&vp->v_bufobj);
|
|
|
|
vp->v_bufobj.bo_flag |= BO_DEAD;
|
|
|
|
BO_UNLOCK(&vp->v_bufobj);
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WLOCK(object);
|
Some VM improvements, including elimination of a lot of Sig-11
problems. Tor Egge and others have helped with various VM bugs
lately, but don't blame him -- blame me!!!
pmap.c:
1) Create an object for kernel page table allocations. This
fixes a bogus allocation method previously used for such, by
grabbing pages from the kernel object, using bogus pindexes.
(This was a code cleanup, and perhaps a minor system stability
issue.)
pmap.c:
2) Pre-set the modify and accessed bits when prudent. This will
decrease bus traffic under certain circumstances.
vfs_bio.c, vfs_cluster.c:
3) Rather than calculating the beginning virtual byte offset
multiple times, stick the offset into the buffer header, so
that the calculated offset can be reused. (Long long multiplies
are often expensive, and this is a probably unmeasurable performance
improvement, and code cleanup.)
vfs_bio.c:
4) Handle write recursion more intelligently (but not perfectly) so
that it is less likely to cause a system panic, and is also
much more robust.
vfs_bio.c:
5) getblk incorrectly wrote out blocks that are incorrectly sized.
The problem is fixed, and writes blocks out ONLY when B_DELWRI
is true.
vfs_bio.c:
6) Check that already constituted buffers have fully valid pages. If
not, then make sure that the B_CACHE bit is not set. (This was
a major source of Sig-11 type problems.)
vfs_bio.c:
7) Fix a potential system deadlock due to an incorrectly specified
sleep priority while waiting for a buffer write operation. The
change that I made opens the system up to serious problems, and
we need to examine the issue of process sleep priorities.
vfs_cluster.c, vfs_bio.c:
8) Make clustered reads work more correctly (and more completely)
when buffers are already constituted, but not fully valid.
(This was another system reliability issue.)
vfs_subr.c, ffs_inode.c:
9) Create a vtruncbuf function, which is used by filesystems that
can truncate files. The vinvalbuf forced a file sync type operation,
while vtruncbuf only invalidates the buffers past the new end of file,
and also invalidates the appropriate pages. (This was a system reliability
and performance issue.)
10) Modify FFS to use vtruncbuf.
vm_object.c:
11) Make the object rundown mechanism for OBJT_VNODE type objects work
more correctly. Included in that fix, create pager entries for
the OBJT_DEAD pager type, so that paging requests that might slip
in during race conditions are properly handled. (This was a system
reliability issue.)
vm_page.c:
12) Make some of the page validation routines be a little less picky
about arguments passed to them. Also, page invalidation now changes
the object generation count so that we handle generation counts a
little more robustly.
vm_pageout.c:
13) Further reduce pageout daemon activity when the system doesn't
need help from it. There should be no additional performance
decrease even when the pageout daemon is running. (This was
a significant performance issue.)
vnode_pager.c:
14) Teach the vnode pager to handle race conditions during vnode
deallocations.
1998-03-16 01:56:03 +00:00
|
|
|
}
|
|
|
|
|
2001-04-13 11:15:40 +00:00
|
|
|
KASSERT(object->ref_count == 0,
|
|
|
|
("vm_object_terminate: object with references, ref_count=%d",
|
|
|
|
object->ref_count));
|
1998-01-06 05:26:17 +00:00
|
|
|
|
2017-08-16 08:49:11 +00:00
|
|
|
if ((object->flags & OBJ_PG_DTOR) == 0)
|
|
|
|
vm_object_terminate_pages(object);
|
1998-03-16 01:56:03 +00:00
|
|
|
|
2007-12-29 19:53:04 +00:00
|
|
|
#if VM_NRESERVLEVEL > 0
|
|
|
|
if (__predict_false(!LIST_EMPTY(&object->rvq)))
|
|
|
|
vm_reserv_break_all(object);
|
|
|
|
#endif
|
Change the management of cached pages (PQ_CACHE) in two fundamental
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
|
|
|
|
2015-05-08 19:43:37 +00:00
|
|
|
KASSERT(object->cred == NULL || object->type == OBJT_DEFAULT ||
|
|
|
|
object->type == OBJT_SWAP,
|
|
|
|
("%s: non-swap obj %p has cred", __func__, object));
|
|
|
|
|
1998-10-23 05:43:13 +00:00
|
|
|
/*
|
|
|
|
* Let the pager know object is dead.
|
|
|
|
*/
|
|
|
|
vm_pager_deallocate(object);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(object);
|
1998-10-23 05:43:13 +00:00
|
|
|
|
2008-05-20 19:05:43 +00:00
|
|
|
vm_object_destroy(object);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
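To make the locking contract of vm_object_terminate() concrete, a hedged sketch of a call site: the write lock must be held on entry (the assert at the top of the function enforces this), and the lock is consumed rather than returned, since the object is freed via vm_object_destroy().

	VM_OBJECT_WLOCK(object);
	vm_object_terminate(object);	/* unlocks and frees "object" */
	/* "object" must not be unlocked or dereferenced here. */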
2011-01-01 17:39:38 +00:00
|
|
|
/*
|
|
|
|
* Make the page read-only so that we can clear the object flags. However, if
|
|
|
|
* this is a nosync mmap then the object is likely to stay dirty so do not
|
|
|
|
* mess with the page and do not clear the object flags. Returns TRUE if the
|
|
|
|
* page should be flushed, and FALSE otherwise.
|
|
|
|
*/
|
2010-12-29 12:53:53 +00:00
|
|
|
static boolean_t
|
2012-03-17 23:00:32 +00:00
|
|
|
vm_object_page_remove_write(vm_page_t p, int flags, boolean_t *clearobjflags)
|
2010-12-29 12:53:53 +00:00
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If we have been asked to skip nosync pages and this is a
|
|
|
|
* nosync page, skip it. Note that the object flags were not
|
|
|
|
* cleared in this case so we do not have to set them.
|
|
|
|
*/
|
|
|
|
if ((flags & OBJPC_NOSYNC) != 0 && (p->oflags & VPO_NOSYNC) != 0) {
|
2012-03-17 23:00:32 +00:00
|
|
|
*clearobjflags = FALSE;
|
2010-12-29 12:53:53 +00:00
|
|
|
return (FALSE);
|
|
|
|
} else {
|
|
|
|
pmap_remove_write(p);
|
|
|
|
return (p->dirty != 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
1994-05-25 09:21:21 +00:00
|
|
|
/*
|
|
|
|
* vm_object_page_clean
|
|
|
|
*
|
1999-12-12 03:19:33 +00:00
|
|
|
* Clean all dirty pages in the specified range of the object. Leaves each page
|
|
|
|
* on whatever queue it is currently on. If NOSYNC is set then do not
|
2006-08-13 00:11:09 +00:00
|
|
|
* write out pages with VPO_NOSYNC set (originally comes from MAP_NOSYNC),
|
1999-12-12 03:19:33 +00:00
|
|
|
* leaving the object dirty.
|
1994-05-25 09:21:21 +00:00
|
|
|
*
|
2002-12-28 21:03:42 +00:00
|
|
|
* When stuffing pages asynchronously, allow clustering. XXX we need a
|
|
|
|
* synchronous clustering mode implementation.
|
|
|
|
*
|
1994-05-25 09:21:21 +00:00
|
|
|
* Odd semantics: if start == end, we clean everything.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
2012-03-17 23:00:32 +00:00
|
|
|
*
|
|
|
|
* Returns FALSE if some page from the range was not written, as
|
|
|
|
* reported by the pager, and TRUE otherwise.
|
1994-05-25 09:21:21 +00:00
|
|
|
*/
|
2012-03-17 23:00:32 +00:00
|
|
|
boolean_t
|
2011-02-05 21:21:27 +00:00
|
|
|
vm_object_page_clean(vm_object_t object, vm_ooffset_t start, vm_ooffset_t end,
|
2010-07-04 11:26:56 +00:00
|
|
|
int flags)
|
1994-05-25 09:21:21 +00:00
|
|
|
{
|
2010-07-04 11:26:56 +00:00
|
|
|
vm_page_t np, p;
|
2011-02-05 21:21:27 +00:00
|
|
|
vm_pindex_t pi, tend, tstart;
|
2012-03-17 23:00:32 +00:00
|
|
|
int curgeneration, n, pagerflags;
|
|
|
|
boolean_t clearobjflags, eio, res;
|
1994-05-25 09:21:21 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2013-04-28 19:25:09 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The OBJ_MIGHTBEDIRTY flag is only set for OBJT_VNODE
|
|
|
|
* objects. The check below prevents the function from
|
|
|
|
* operating on non-vnode objects.
|
|
|
|
*/
|
2010-07-04 11:26:56 +00:00
|
|
|
if ((object->flags & OBJ_MIGHTBEDIRTY) == 0 ||
|
|
|
|
object->resident_page_count == 0)
|
2012-03-17 23:00:32 +00:00
|
|
|
return (TRUE);
|
1994-05-25 09:21:21 +00:00
|
|
|
|
2010-07-04 11:26:56 +00:00
|
|
|
pagerflags = (flags & (OBJPC_SYNC | OBJPC_INVAL)) != 0 ?
|
|
|
|
VM_PAGER_PUT_SYNC : VM_PAGER_CLUSTER_OK;
|
|
|
|
pagerflags |= (flags & OBJPC_INVAL) != 0 ? VM_PAGER_PUT_INVAL : 0;
|
2010-04-30 22:31:37 +00:00
|
|
|
|
2011-02-05 21:21:27 +00:00
|
|
|
tstart = OFF_TO_IDX(start);
|
|
|
|
tend = (end == 0) ? object->size : OFF_TO_IDX(end + PAGE_MASK);
|
|
|
|
clearobjflags = tstart == 0 && tend >= object->size;
|
2012-03-17 23:00:32 +00:00
|
|
|
res = TRUE;
|
1996-01-19 04:00:31 +00:00
|
|
|
|
|
|
|
rescan:
|
VM level code cleanups.
1) Start using TSM.
Struct procs continue to point to upages structure, after being freed.
Struct vmspace continues to point to pte object and kva space for kstack.
u_map is now superfluous.
2) vm_map's don't need to be reference counted. They always exist either
in the kernel or in a vmspace. The vmspaces are managed by reference
counts.
3) Remove the "wired" vm_map nonsense.
4) No need to keep a cache of kernel stack kva's.
5) Get rid of strange looking ++var, and change to var++.
6) Change more data structures to use our "zone" allocator. Added
struct proc, struct vmspace and struct vnode. This saves a significant
amount of kva space and physical memory. Additionally, this enables
TSM for the zone managed memory.
7) Keep ioopt disabled for now.
8) Remove the now bogus "single use" map concept.
9) Use generation counts or id's for data structures residing in TSM, where
it allows us to avoid unneeded restart overhead during traversals, where
blocking might occur.
10) Account better for memory deficits, so the pageout daemon will be able
to make enough memory available (experimental.)
11) Fix some vnode locking problems. (From Tor, I think.)
12) Add a check in ufs_lookup, to avoid lots of unneeded calls to bcmp.
(experimental.)
13) Significantly shrink, cleanup, and make slightly faster the vm_fault.c
code. Use generation counts, get rid of unneeded collapse operations,
and clean up the cluster code.
14) Make vm_zone more suitable for TSM.
This commit is partially as a result of discussions and contributions from
other people, including DG, Tor Egge, PHK, and probably others that I
have forgotten to attribute (so let me know, if I forgot.)
This is not the infamous, final cleanup of the vnode stuff, but a necessary
step. Vnode mgmt should be correct, but things might still change, and
there is still some missing stuff (like ioopt, and physical backing of
non-merged cache files, debugging of layering concepts.)
1998-01-22 17:30:44 +00:00
|
|
|
curgeneration = object->generation;
|
|
|
|
|
2011-02-05 21:21:27 +00:00
|
|
|
for (p = vm_page_find_least(object, tstart); p != NULL; p = np) {
|
1996-01-19 04:00:31 +00:00
|
|
|
pi = p->pindex;
|
2010-07-04 11:26:56 +00:00
|
|
|
if (pi >= tend)
|
|
|
|
break;
|
|
|
|
np = TAILQ_NEXT(p, listq);
|
|
|
|
if (p->valid == 0)
|
1995-11-05 20:46:03 +00:00
|
|
|
continue;
|
2013-08-09 11:11:11 +00:00
|
|
|
if (vm_page_sleep_if_busy(p, "vpcwai")) {
|
2012-01-04 16:04:20 +00:00
|
|
|
if (object->generation != curgeneration) {
|
|
|
|
if ((flags & OBJPC_SYNC) != 0)
|
|
|
|
goto rescan;
|
|
|
|
else
|
2012-03-17 23:00:32 +00:00
|
|
|
clearobjflags = FALSE;
|
2012-01-04 16:04:20 +00:00
|
|
|
}
|
2010-11-24 12:25:17 +00:00
|
|
|
np = vm_page_find_least(object, pi);
|
|
|
|
continue;
|
1995-04-09 06:03:56 +00:00
|
|
|
}
|
2010-12-29 12:53:53 +00:00
|
|
|
if (!vm_object_page_remove_write(p, flags, &clearobjflags))
|
1995-11-05 20:46:03 +00:00
|
|
|
continue;
|
2010-07-04 11:26:56 +00:00
|
|
|
|
2010-12-29 12:53:53 +00:00
|
|
|
n = vm_object_page_collect_flush(object, p, pagerflags,
|
2012-03-17 23:00:32 +00:00
|
|
|
flags, &clearobjflags, &eio);
|
|
|
|
if (eio) {
|
|
|
|
res = FALSE;
|
|
|
|
clearobjflags = FALSE;
|
|
|
|
}
|
2012-01-04 16:04:20 +00:00
|
|
|
if (object->generation != curgeneration) {
|
|
|
|
if ((flags & OBJPC_SYNC) != 0)
|
|
|
|
goto rescan;
|
|
|
|
else
|
2012-03-17 23:00:32 +00:00
|
|
|
clearobjflags = FALSE;
|
2012-01-04 16:04:20 +00:00
|
|
|
}
|
2011-06-01 21:00:28 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If the VOP_PUTPAGES() did a truncated write, so
|
|
|
|
* that even the first page of the run is not fully
|
|
|
|
* written, vm_pageout_flush() returns 0 as the run
|
|
|
|
* length. Since the condition that caused truncated
|
|
|
|
* write may be permanent, e.g. exhausted free space,
|
|
|
|
* accepting n == 0 would cause an infinite loop.
|
|
|
|
*
|
|
|
|
* Forwarding the iterator leaves the unwritten page
|
|
|
|
* behind, but there is not much we can do there if
|
|
|
|
* filesystem refuses to write it.
|
|
|
|
*/
|
2012-03-17 23:00:32 +00:00
|
|
|
if (n == 0) {
|
2011-06-01 21:00:28 +00:00
|
|
|
n = 1;
|
2012-03-17 23:00:32 +00:00
|
|
|
clearobjflags = FALSE;
|
|
|
|
}
|
2010-07-04 11:26:56 +00:00
|
|
|
np = vm_page_find_least(object, pi + n);
|
2002-03-06 02:42:56 +00:00
|
|
|
}
|
|
|
|
#if 0
|
2010-07-04 11:26:56 +00:00
|
|
|
VOP_FSYNC(vp, (pagerflags & VM_PAGER_PUT_SYNC) ? MNT_WAIT : 0);
|
2002-03-06 02:42:56 +00:00
|
|
|
#endif
|
|
|
|
|
2011-01-01 17:39:38 +00:00
|
|
|
if (clearobjflags)
|
2010-12-29 12:53:53 +00:00
|
|
|
vm_object_clear_flag(object, OBJ_MIGHTBEDIRTY);
|
2012-03-17 23:00:32 +00:00
|
|
|
return (res);
|
2002-03-06 02:42:56 +00:00
|
|
|
}
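
/*
 * Collect a contiguous run of pages eligible for a write around page "p"
 * and flush them to the pager in one request.  The run is first extended
 * forward from "p" and then backward, stopping at busy pages and at pages
 * that vm_object_page_remove_write() declines.  Returns the run length
 * reported by vm_pageout_flush().
 */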
static int
vm_object_page_collect_flush(vm_object_t object, vm_page_t p, int pagerflags,
    int flags, boolean_t *clearobjflags, boolean_t *eio)
{
	vm_page_t ma[vm_pageout_page_count], p_first, tp;
	int count, i, mreq, runlen;

	vm_page_lock_assert(p, MA_NOTOWNED);
	VM_OBJECT_ASSERT_WLOCKED(object);

	count = 1;
	mreq = 0;

	for (tp = p; count < vm_pageout_page_count; count++) {
		tp = vm_page_next(tp);
		if (tp == NULL || vm_page_busied(tp))
			break;
		if (!vm_object_page_remove_write(tp, flags, clearobjflags))
			break;
	}

	for (p_first = p; count < vm_pageout_page_count; count++) {
		tp = vm_page_prev(p_first);
		if (tp == NULL || vm_page_busied(tp))
			break;
		if (!vm_object_page_remove_write(tp, flags, clearobjflags))
			break;
		p_first = tp;
		mreq++;
	}

	for (tp = p_first, i = 0; i < count; tp = TAILQ_NEXT(tp, listq), i++)
		ma[i] = tp;

	vm_pageout_flush(ma, count, pagerflags, mreq, &runlen, eio);
	return (runlen);
}
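
/*
 * A usage note on vm_object_page_collect_flush() (inferred from the loops
 * above rather than stated in this file): "mreq" ends up as the index of
 * the originally requested page "p" within ma[], because the second loop
 * counts how many pages were prepended while extending the run backward,
 * so ma[mreq] == p when vm_pageout_flush() is called.
 */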

/*
 * vm_object_sync:
 *
 * There is no sense in writing out anonymous objects, so we track down
 * the vnode object to write out.  We invalidate (remove) all pages from
 * the address space for semantic correctness.
 *
 * If the backing object is a device object with unmanaged pages, then any
 * mappings to the specified range of pages must be removed before this
 * function is called.
 *
 * Note: certain anonymous maps, such as MAP_NOSYNC maps,
 * may start out with a NULL object.
 */
boolean_t
vm_object_sync(vm_object_t object, vm_ooffset_t offset, vm_size_t size,
    boolean_t syncio, boolean_t invalidate)
{
	vm_object_t backing_object;
	struct vnode *vp;
	struct mount *mp;
	int error, flags, fsync_after;
	boolean_t res;

	if (object == NULL)
		return (TRUE);
	res = TRUE;
	error = 0;
	VM_OBJECT_WLOCK(object);
	while ((backing_object = object->backing_object) != NULL) {
		VM_OBJECT_WLOCK(backing_object);
		offset += object->backing_object_offset;
		VM_OBJECT_WUNLOCK(object);
		object = backing_object;
		if (object->size < OFF_TO_IDX(offset + size))
			size = IDX_TO_OFF(object->size) - offset;
	}
	/*
	 * Flush pages if writing is allowed, invalidate them
	 * if invalidation requested.  Pages undergoing I/O
	 * will be ignored by vm_object_page_remove().
	 *
	 * We cannot lock the vnode and then wait for paging
	 * to complete without deadlocking against vm_fault.
	 * Instead we simply call vm_object_page_remove() and
	 * allow it to block internally on a page-by-page
	 * basis when it encounters pages undergoing async
	 * I/O.
	 */
	if (object->type == OBJT_VNODE &&
	    (object->flags & OBJ_MIGHTBEDIRTY) != 0 &&
	    ((vp = object->handle)->v_vflag & VV_NOSYNC) == 0) {
		VM_OBJECT_WUNLOCK(object);
		(void) vn_start_write(vp, &mp, V_WAIT);
		vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
		if (syncio && !invalidate && offset == 0 &&
		    atop(size) == object->size) {
			/*
			 * If syncing the whole mapping of the file,
			 * it is faster to schedule all the writes in
			 * async mode, also allowing the clustering,
			 * and then wait for I/O to complete.
			 */
			flags = 0;
			fsync_after = TRUE;
		} else {
			flags = (syncio || invalidate) ? OBJPC_SYNC : 0;
			flags |= invalidate ? (OBJPC_SYNC | OBJPC_INVAL) : 0;
			fsync_after = FALSE;
		}
		VM_OBJECT_WLOCK(object);
		res = vm_object_page_clean(object, offset, offset + size,
		    flags);
		VM_OBJECT_WUNLOCK(object);
		if (fsync_after)
			error = VOP_FSYNC(vp, MNT_WAIT, curthread);
		VOP_UNLOCK(vp, 0);
		vn_finished_write(mp);
		if (error != 0)
			res = FALSE;
		VM_OBJECT_WLOCK(object);
	}
	if ((object->type == OBJT_VNODE ||
	    object->type == OBJT_DEVICE) && invalidate) {
		if (object->type == OBJT_DEVICE)
			/*
			 * The option OBJPR_NOTMAPPED must be passed here
			 * because vm_object_page_remove() cannot remove
			 * unmanaged mappings.
			 */
			flags = OBJPR_NOTMAPPED;
		else if (old_msync)
			flags = 0;
		else
			flags = OBJPR_CLEANONLY;
		vm_object_page_remove(object, OFF_TO_IDX(offset),
		    OFF_TO_IDX(offset + size + PAGE_MASK), flags);
	}
	VM_OBJECT_WUNLOCK(object);
	return (res);
}
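
/*
 * Example (a minimal sketch, assuming the usual msync(2) path): callers
 * typically reach vm_object_sync() through vm_map_sync(), roughly as
 *
 *	(void)vm_object_sync(object, offset, size,
 *	    (flags & MS_ASYNC) == 0, (flags & MS_INVALIDATE) != 0);
 *
 * where "syncio" requests synchronous writes and "invalidate" discards
 * the cleaned pages afterwards.
 */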

/*
 * Determine whether the given advice can be applied to the object.  Advice is
 * not applied to unmanaged pages since they never belong to page queues, and
 * since MADV_FREE is destructive, it can apply only to anonymous pages that
 * have been mapped at most once.
 */
static bool
vm_object_advice_applies(vm_object_t object, int advice)
{

	if ((object->flags & OBJ_UNMANAGED) != 0)
		return (false);
	if (advice != MADV_FREE)
		return (true);
	return ((object->type == OBJT_DEFAULT || object->type == OBJT_SWAP) &&
	    (object->flags & OBJ_ONEMAPPING) != 0);
}
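
/*
 * Release any swap space used to back the given range of the object when
 * the advice is MADV_FREE; for all other advice values this is a no-op.
 */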
static void
vm_object_madvise_freespace(vm_object_t object, int advice, vm_pindex_t pindex,
    vm_size_t size)
{

	if (advice == MADV_FREE && object->type == OBJT_SWAP)
		swap_pager_freespace(object, pindex, size);
}

/*
 * vm_object_madvise:
 *
 *	Implements the madvise function at the object/page level.
 *
 *	MADV_WILLNEED	(any object)
 *
 *	    Activate the specified pages if they are resident.
 *
 *	MADV_DONTNEED	(any object)
 *
 *	    Deactivate the specified pages if they are resident.
 *
 *	MADV_FREE	(OBJT_DEFAULT/OBJT_SWAP objects,
 *			 OBJ_ONEMAPPING only)
 *
 *	    Deactivate and clean the specified pages if they are
 *	    resident.  This permits the process to reuse the pages
 *	    without faulting or the kernel to reclaim the pages
 *	    without I/O.
 */
void
vm_object_madvise(vm_object_t object, vm_pindex_t pindex, vm_pindex_t end,
    int advice)
{
	vm_pindex_t tpindex;
	vm_object_t backing_object, tobject;
	vm_page_t m, tm;

	if (object == NULL)
		return;

relookup:
	VM_OBJECT_WLOCK(object);
	if (!vm_object_advice_applies(object, advice)) {
		VM_OBJECT_WUNLOCK(object);
		return;
	}
	for (m = vm_page_find_least(object, pindex); pindex < end; pindex++) {
		tobject = object;

		/*
		 * If the next page isn't resident in the top-level object, we
		 * need to search the shadow chain.  When applying MADV_FREE, we
		 * take care to release any swap space used to store
		 * non-resident pages.
		 */
		if (m == NULL || pindex < m->pindex) {
			/*
			 * Optimize a common case: if the top-level object has
			 * no backing object, we can skip over the non-resident
			 * range in constant time.
			 */
			if (object->backing_object == NULL) {
				tpindex = (m != NULL && m->pindex < end) ?
				    m->pindex : end;
				vm_object_madvise_freespace(object, advice,
				    pindex, tpindex - pindex);
				if ((pindex = tpindex) == end)
					break;
				goto next_page;
			}

			tpindex = pindex;
			do {
				vm_object_madvise_freespace(tobject, advice,
				    tpindex, 1);
				/*
				 * Prepare to search the next object in the
				 * chain.
				 */
				backing_object = tobject->backing_object;
				if (backing_object == NULL)
					goto next_pindex;
				VM_OBJECT_WLOCK(backing_object);
				tpindex +=
				    OFF_TO_IDX(tobject->backing_object_offset);
				if (tobject != object)
					VM_OBJECT_WUNLOCK(tobject);
				tobject = backing_object;
				if (!vm_object_advice_applies(tobject, advice))
					goto next_pindex;
			} while ((tm = vm_page_lookup(tobject, tpindex)) ==
			    NULL);
		} else {
next_page:
			tm = m;
			m = TAILQ_NEXT(m, listq);
		}

		/*
		 * If the page is not in a normal state, skip it.
		 */
		if (tm->valid != VM_PAGE_BITS_ALL)
			goto next_pindex;
		vm_page_lock(tm);
		if (vm_page_held(tm)) {
			vm_page_unlock(tm);
			goto next_pindex;
		}
		KASSERT((tm->flags & PG_FICTITIOUS) == 0,
		    ("vm_object_madvise: page %p is fictitious", tm));
		KASSERT((tm->oflags & VPO_UNMANAGED) == 0,
		    ("vm_object_madvise: page %p is not managed", tm));
		if (vm_page_busied(tm)) {
			if (object != tobject)
				VM_OBJECT_WUNLOCK(tobject);
			VM_OBJECT_WUNLOCK(object);
			if (advice == MADV_WILLNEED) {
				/*
				 * Reference the page before unlocking and
				 * sleeping so that the page daemon is less
				 * likely to reclaim it.
				 */
				vm_page_aflag_set(tm, PGA_REFERENCED);
			}
			vm_page_busy_sleep(tm, "madvpo", false);
			goto relookup;
		}
		vm_page_advise(tm, advice);
		vm_page_unlock(tm);
		vm_object_madvise_freespace(tobject, advice, tm->pindex, 1);
next_pindex:
		if (tobject != object)
			VM_OBJECT_WUNLOCK(tobject);
	}
	VM_OBJECT_WUNLOCK(object);
}
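
/*
 * Example (a sketch, assuming the standard madvise(2) path):
 * vm_map_madvise() walks the affected map entries and applies the advice
 * object by object, roughly as
 *
 *	vm_object_madvise(entry->object.vm_object, pstart, pend, advice);
 */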

/*
 * vm_object_shadow:
 *
 *	Create a new object which is backed by the
 *	specified existing object range.  The source
 *	object reference is deallocated.
 *
 *	The new object and offset into that object
 *	are returned in the source parameters.
 */
void
vm_object_shadow(
	vm_object_t *object,	/* IN/OUT */
	vm_ooffset_t *offset,	/* IN/OUT */
	vm_size_t length)
{
	vm_object_t source;
	vm_object_t result;

	source = *object;

	/*
	 * Don't create the new object if the old object isn't shared.
	 */
	if (source != NULL) {
		VM_OBJECT_WLOCK(source);
		if (source->ref_count == 1 &&
		    source->handle == NULL &&
		    (source->type == OBJT_DEFAULT ||
		    source->type == OBJT_SWAP)) {
			VM_OBJECT_WUNLOCK(source);
			return;
		}
		VM_OBJECT_WUNLOCK(source);
	}

	/*
	 * Allocate a new object with the given length.
	 */
	result = vm_object_allocate(OBJT_DEFAULT, atop(length));

	/*
	 * The new object shadows the source object, adding a reference to it.
	 * Our caller changes its reference to point to the new object,
	 * removing a reference to the source object.  Net result: no change
	 * of reference count.
	 *
	 * Try to optimize the result object's page color when shadowing
	 * in order to maintain page coloring consistency in the combined
	 * shadowed object.
	 */
	result->backing_object = source;
	/*
	 * Store the offset into the source object, and fix up the offset into
	 * the new object.
	 */
	result->backing_object_offset = *offset;
	if (source != NULL) {
		VM_OBJECT_WLOCK(source);
		result->domain = source->domain;
		LIST_INSERT_HEAD(&source->shadow_head, result, shadow_list);
		source->shadow_count++;
#if VM_NRESERVLEVEL > 0
		result->flags |= source->flags & OBJ_COLORED;
		result->pg_color = (source->pg_color + OFF_TO_IDX(*offset)) &
		    ((1 << (VM_NFREEORDER - 1)) - 1);
#endif
		VM_OBJECT_WUNLOCK(source);
	}

	/*
	 * Return the new object and offset.
	 */
	*offset = 0;
	*object = result;
}
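
/*
 * Example (a sketch, assuming the usual copy-on-write setup in the map
 * code): an entry's object is replaced by a fresh shadow with
 *
 *	vm_object_shadow(&entry->object.vm_object, &entry->offset,
 *	    entry->end - entry->start);
 *
 * after which the entry points at the new top-level object and modified
 * pages are copied up from the old one on demand.
 */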

/*
 * vm_object_split:
 *
 * Split the pages in a map entry into a new object.  This affords
 * easier removal of unused pages, and keeps object inheritance from
 * having a negative impact on memory usage.
 */
void
vm_object_split(vm_map_entry_t entry)
{
	vm_page_t m, m_next;
	vm_object_t orig_object, new_object, source;
	vm_pindex_t idx, offidxstart;
	vm_size_t size;

	orig_object = entry->object.vm_object;
	if (orig_object->type != OBJT_DEFAULT && orig_object->type != OBJT_SWAP)
		return;
	if (orig_object->ref_count <= 1)
		return;
	VM_OBJECT_WUNLOCK(orig_object);

	offidxstart = OFF_TO_IDX(entry->offset);
	size = atop(entry->end - entry->start);

	/*
	 * If swap_pager_copy() is later called, it will convert new_object
	 * into a swap object.
	 */
	new_object = vm_object_allocate(OBJT_DEFAULT, size);

	/*
	 * At this point, the new object is still private, so the order in
	 * which the original and new objects are locked does not matter.
	 */
	VM_OBJECT_WLOCK(new_object);
	VM_OBJECT_WLOCK(orig_object);
	new_object->domain = orig_object->domain;
	source = orig_object->backing_object;
	if (source != NULL) {
		VM_OBJECT_WLOCK(source);
		if ((source->flags & OBJ_DEAD) != 0) {
			VM_OBJECT_WUNLOCK(source);
			VM_OBJECT_WUNLOCK(orig_object);
			VM_OBJECT_WUNLOCK(new_object);
			vm_object_deallocate(new_object);
			VM_OBJECT_WLOCK(orig_object);
			return;
		}
		LIST_INSERT_HEAD(&source->shadow_head,
		    new_object, shadow_list);
		source->shadow_count++;
		vm_object_reference_locked(source);	/* for new_object */
		vm_object_clear_flag(source, OBJ_ONEMAPPING);
		VM_OBJECT_WUNLOCK(source);
		new_object->backing_object_offset =
		    orig_object->backing_object_offset + entry->offset;
		new_object->backing_object = source;
	}
	if (orig_object->cred != NULL) {
		new_object->cred = orig_object->cred;
		crhold(orig_object->cred);
		new_object->charge = ptoa(size);
		KASSERT(orig_object->charge >= ptoa(size),
		    ("orig_object->charge < 0"));
		orig_object->charge -= ptoa(size);
	}
retry:
	m = vm_page_find_least(orig_object, offidxstart);
	for (; m != NULL && (idx = m->pindex - offidxstart) < size;
	    m = m_next) {
		m_next = TAILQ_NEXT(m, listq);

		/*
		 * We must wait for pending I/O to complete before we can
		 * rename the page.
		 *
		 * We do not have to VM_PROT_NONE the page as mappings should
		 * not be changed by this operation.
		 */
		if (vm_page_busied(m)) {
			VM_OBJECT_WUNLOCK(new_object);
			vm_page_lock(m);
			VM_OBJECT_WUNLOCK(orig_object);
			vm_page_busy_sleep(m, "spltwt", false);
			VM_OBJECT_WLOCK(orig_object);
			VM_OBJECT_WLOCK(new_object);
			goto retry;
		}

		/* vm_page_rename() will dirty the page. */
		if (vm_page_rename(m, new_object, idx)) {
			VM_OBJECT_WUNLOCK(new_object);
			VM_OBJECT_WUNLOCK(orig_object);
			vm_radix_wait();
			VM_OBJECT_WLOCK(orig_object);
			VM_OBJECT_WLOCK(new_object);
			goto retry;
		}
#if VM_NRESERVLEVEL > 0
		/*
		 * If some of the reservation's allocated pages remain with
		 * the original object, then transferring the reservation to
		 * the new object is neither particularly beneficial nor
		 * particularly harmful as compared to leaving the reservation
		 * with the original object.  If, however, all of the
		 * reservation's allocated pages are transferred to the new
		 * object, then transferring the reservation is typically
		 * beneficial.  Determining which of these two cases applies
		 * would be more costly than unconditionally renaming the
		 * reservation.
		 */
		vm_reserv_rename(m, new_object, orig_object, offidxstart);
#endif
		if (orig_object->type == OBJT_SWAP)
			vm_page_xbusy(m);
	}
	if (orig_object->type == OBJT_SWAP) {
		/*
		 * swap_pager_copy() can sleep, in which case the orig_object's
		 * and new_object's locks are released and reacquired.
		 */
		swap_pager_copy(orig_object, new_object, offidxstart, 0);
		TAILQ_FOREACH(m, &new_object->memq, listq)
			vm_page_xunbusy(m);
	}
	VM_OBJECT_WUNLOCK(orig_object);
	VM_OBJECT_WUNLOCK(new_object);
	entry->object.vm_object = new_object;
	entry->offset = 0LL;
	vm_object_deallocate(orig_object);
	VM_OBJECT_WLOCK(new_object);
}
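
/*
 * Example (an assumption about the caller, for illustration): the map code
 * splits a swap-backed object marked OBJ_ONEMAPPING before it becomes
 * shared, along the lines of
 *
 *	if ((src_object->flags & (OBJ_NOSPLIT | OBJ_ONEMAPPING)) ==
 *	    OBJ_ONEMAPPING)
 *		vm_object_split(src_entry);
 */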

#define	OBSC_COLLAPSE_NOWAIT	0x0002
#define	OBSC_COLLAPSE_WAIT	0x0004
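
/*
 * OBSC_COLLAPSE_NOWAIT never sleeps and is used by vm_object_qcollapse()
 * while paging may be in progress; OBSC_COLLAPSE_WAIT marks the backing
 * object OBJ_DEAD and may sleep waiting for busy pages.
 */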

static vm_page_t
vm_object_collapse_scan_wait(vm_object_t object, vm_page_t p, vm_page_t next,
    int op)
{
	vm_object_t backing_object;

	VM_OBJECT_ASSERT_WLOCKED(object);
	backing_object = object->backing_object;
	VM_OBJECT_ASSERT_WLOCKED(backing_object);

	KASSERT(p == NULL || vm_page_busied(p), ("unbusy page %p", p));
	KASSERT(p == NULL || p->object == object || p->object == backing_object,
	    ("invalid ownership %p %p %p", p, object, backing_object));
	if ((op & OBSC_COLLAPSE_NOWAIT) != 0)
		return (next);
	if (p != NULL)
		vm_page_lock(p);
	VM_OBJECT_WUNLOCK(object);
	VM_OBJECT_WUNLOCK(backing_object);
	/* The page is only NULL when rename fails. */
	if (p == NULL)
		vm_radix_wait();
	else
		vm_page_busy_sleep(p, "vmocol", false);
	VM_OBJECT_WLOCK(object);
	VM_OBJECT_WLOCK(backing_object);
	return (TAILQ_FIRST(&backing_object->memq));
}
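
/*
 * Check whether the parent completely shadows the backing object over the
 * parent's mapped range: every resident page or swap block of the backing
 * object in that range must be covered by a valid page in the parent or
 * by the parent's pager.  Returns false at the first page that is not.
 */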
static bool
vm_object_scan_all_shadowed(vm_object_t object)
{
	vm_object_t backing_object;
	vm_page_t p, pp;
	vm_pindex_t backing_offset_index, new_pindex, pi, ps;

	VM_OBJECT_ASSERT_WLOCKED(object);
	VM_OBJECT_ASSERT_WLOCKED(object->backing_object);

	backing_object = object->backing_object;

	if (backing_object->type != OBJT_DEFAULT &&
	    backing_object->type != OBJT_SWAP)
		return (false);

	pi = backing_offset_index = OFF_TO_IDX(object->backing_object_offset);
	p = vm_page_find_least(backing_object, pi);
	ps = swap_pager_find_least(backing_object, pi);

	/*
	 * Only check pages inside the parent object's range and
	 * inside the parent object's mapping of the backing object.
	 */
	for (;; pi++) {
		if (p != NULL && p->pindex < pi)
			p = TAILQ_NEXT(p, listq);
		if (ps < pi)
			ps = swap_pager_find_least(backing_object, pi);
		if (p == NULL && ps >= backing_object->size)
			break;
		else if (p == NULL)
			pi = ps;
		else
			pi = MIN(p->pindex, ps);

		new_pindex = pi - backing_offset_index;
		if (new_pindex >= object->size)
			break;

		/*
		 * See if the parent has the page or if the parent's object
		 * pager has the page.  If the parent has the page but the page
		 * is not valid, the parent's object pager must have the page.
		 *
		 * If this fails, the parent does not completely shadow the
		 * object and we might as well give up now.
		 */
		pp = vm_page_lookup(object, new_pindex);
		if ((pp == NULL || pp->valid == 0) &&
		    !vm_pager_has_page(object, new_pindex, NULL, NULL))
			return (false);
	}
	return (true);
}
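
/*
 * Scan the backing object's pages, freeing those that the parent already
 * shadows or that fall outside the parent's range, and renaming the rest
 * into the parent.  With OBSC_COLLAPSE_NOWAIT busy pages are skipped
 * rather than slept on, so the scan may leave pages behind.
 */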
|
|
|
|
|
|
|
|
static bool
|
|
|
|
vm_object_collapse_scan(vm_object_t object, int op)
|
|
|
|
{
|
|
|
|
vm_object_t backing_object;
|
|
|
|
vm_page_t next, p, pp;
|
|
|
|
vm_pindex_t backing_offset_index, new_pindex;
|
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object->backing_object);
|
|
|
|
|
|
|
|
backing_object = object->backing_object;
|
|
|
|
backing_offset_index = OFF_TO_IDX(object->backing_object_offset);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initial conditions
|
|
|
|
*/
|
|
|
|
if ((op & OBSC_COLLAPSE_WAIT) != 0)
|
1999-02-08 19:00:15 +00:00
|
|
|
vm_object_set_flag(backing_object, OBJ_DEAD);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Our scan
|
|
|
|
*/
|
2015-12-03 17:21:10 +00:00
|
|
|
for (p = TAILQ_FIRST(&backing_object->memq); p != NULL; p = next) {
|
2015-12-01 09:06:09 +00:00
|
|
|
next = TAILQ_NEXT(p, listq);
|
|
|
|
new_pindex = p->pindex - backing_offset_index;
|
1999-01-21 08:29:12 +00:00
|
|
|
|
|
|
|
/*
|
1999-02-08 19:00:15 +00:00
|
|
|
* Check for busy page
|
1999-01-21 08:29:12 +00:00
|
|
|
*/
|
2015-12-03 17:21:10 +00:00
|
|
|
if (vm_page_busied(p)) {
|
|
|
|
next = vm_object_collapse_scan_wait(object, p, next, op);
|
|
|
|
continue;
|
|
|
|
}
|
1999-02-08 19:00:15 +00:00
|
|
|
|
2015-12-03 17:21:10 +00:00
|
|
|
KASSERT(p->object == backing_object,
|
|
|
|
("vm_object_collapse_scan: object mismatch"));
|
2013-08-09 11:28:55 +00:00
|
|
|
|
2015-12-03 17:21:10 +00:00
|
|
|
if (p->pindex < backing_offset_index ||
|
|
|
|
new_pindex >= object->size) {
|
|
|
|
if (backing_object->type == OBJT_SWAP)
|
|
|
|
swap_pager_freespace(backing_object, p->pindex,
|
|
|
|
1);
|
2015-12-01 09:06:09 +00:00
|
|
|
|
2015-12-03 17:21:10 +00:00
|
|
|
/*
|
|
|
|
* Page is out of the parent object's range, we can
|
|
|
|
* simply destroy it.
|
|
|
|
*/
|
|
|
|
vm_page_lock(p);
|
|
|
|
KASSERT(!pmap_page_is_mapped(p),
|
|
|
|
("freeing mapped page %p", p));
|
|
|
|
if (p->wire_count == 0)
|
|
|
|
vm_page_free(p);
|
|
|
|
else
|
|
|
|
vm_page_remove(p);
|
|
|
|
vm_page_unlock(p);
|
|
|
|
continue;
|
|
|
|
}
|
1999-02-04 17:47:52 +00:00
|
|
|
|
2015-12-03 17:21:10 +00:00
|
|
|
pp = vm_page_lookup(object, new_pindex);
|
|
|
|
if (pp != NULL && vm_page_busied(pp)) {
|
1999-02-08 19:00:15 +00:00
|
|
|
/*
|
2015-12-03 17:21:10 +00:00
|
|
|
* The page in the parent is busy and possibly not
|
|
|
|
* (yet) valid. Until its state is finalized by the
|
|
|
|
* busy bit owner, we can't tell whether it shadows the
|
|
|
|
* original page. Therefore, we must either skip it
|
|
|
|
* and the original (backing_object) page or wait for
|
|
|
|
* its state to be finalized.
|
1999-02-24 21:26:26 +00:00
|
|
|
*
|
2015-12-03 17:21:10 +00:00
|
|
|
* This is due to a race with vm_fault() where we must
|
|
|
|
* unbusy the original (backing_obj) page before we can
|
|
|
|
* (re)lock the parent. Hence we can get here.
|
1999-02-08 19:00:15 +00:00
|
|
|
*/
|
2015-12-03 17:21:10 +00:00
|
|
|
next = vm_object_collapse_scan_wait(object, pp, next,
|
|
|
|
op);
|
|
|
|
continue;
|
|
|
|
}

		KASSERT(pp == NULL || pp->valid != 0,
		    ("unbusy invalid page %p", pp));

		if (pp != NULL || vm_pager_has_page(object, new_pindex, NULL,
		    NULL)) {
			/*
			 * The page already exists in the parent OR swap exists
			 * for this location in the parent.  Leave the parent's
			 * page alone.  Destroy the original page from the
			 * backing object.
			 */
			if (backing_object->type == OBJT_SWAP)
				swap_pager_freespace(backing_object, p->pindex,
				    1);
			vm_page_lock(p);
			KASSERT(!pmap_page_is_mapped(p),
			    ("freeing mapped page %p", p));
			if (p->wire_count == 0)
				vm_page_free(p);
			else
				vm_page_remove(p);
			vm_page_unlock(p);
			continue;
		}

		/*
		 * Page does not exist in parent, rename the page from the
		 * backing object to the main object.
		 *
		 * If the page was mapped to a process, it can remain mapped
		 * through the rename.  vm_page_rename() will dirty the page.
		 */
		if (vm_page_rename(p, object, new_pindex)) {
			next = vm_object_collapse_scan_wait(object, NULL, next,
			    op);
			continue;
		}

		/* Use the old pindex to free the right page. */
		if (backing_object->type == OBJT_SWAP)
			swap_pager_freespace(backing_object,
			    new_pindex + backing_offset_index, 1);

#if VM_NRESERVLEVEL > 0
		/*
		 * Rename the reservation.
		 */
		vm_reserv_rename(p, object, backing_object,
		    backing_offset_index);
#endif
	}
	return (true);
}
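
/*
 * Mode summary (explanatory note, using the OBSC_* flags defined earlier in
 * this file): OBSC_COLLAPSE_WAIT is the full collapse, which marks the
 * backing object dead and sleeps on busy pages so that every page is dealt
 * with before returning; OBSC_COLLAPSE_NOWAIT is the opportunistic variant
 * used by vm_object_qcollapse(), which never sleeps, skips busy pages, and
 * accepts a partial result.
 */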

/*
 * This version of collapse allows the operation to occur earlier, even
 * while paging_in_progress is true for an object.  This is not a complete
 * operation, but it should plug 99.9% of the rest of the leaks.
 */
static void
vm_object_qcollapse(vm_object_t object)
{
	vm_object_t backing_object = object->backing_object;

	VM_OBJECT_ASSERT_WLOCKED(object);
	VM_OBJECT_ASSERT_WLOCKED(backing_object);

	if (backing_object->ref_count != 1)
		return;

	vm_object_collapse_scan(object, OBSC_COLLAPSE_NOWAIT);
}
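
/*
 * Caller note: vm_object_collapse() below falls back to
 * vm_object_qcollapse() while either object still has paging in progress,
 * trading completeness for the ability to reclaim most of the backing
 * object's pages without sleeping.
 */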

/*
 *	vm_object_collapse:
 *
 *	Collapse an object with the object backing it.
 *	Pages in the backing object are moved into the
 *	parent, and the backing object is deallocated.
 */
void
vm_object_collapse(vm_object_t object)
{
	vm_object_t backing_object, new_backing_object;

	VM_OBJECT_ASSERT_WLOCKED(object);

	while (TRUE) {
		/*
		 * Verify that the conditions are right for collapse:
		 *
		 * The object exists and the backing object exists.
		 */
		if ((backing_object = object->backing_object) == NULL)
			break;

		/*
		 * We check the backing object first because it is most likely
		 * not collapsible.
		 */
		VM_OBJECT_WLOCK(backing_object);
		if (backing_object->handle != NULL ||
		    (backing_object->type != OBJT_DEFAULT &&
		    backing_object->type != OBJT_SWAP) ||
		    (backing_object->flags & OBJ_DEAD) ||
		    object->handle != NULL ||
		    (object->type != OBJT_DEFAULT &&
		    object->type != OBJT_SWAP) ||
		    (object->flags & OBJ_DEAD)) {
			VM_OBJECT_WUNLOCK(backing_object);
			break;
		}

		if (object->paging_in_progress != 0 ||
		    backing_object->paging_in_progress != 0) {
			vm_object_qcollapse(object);
			VM_OBJECT_WUNLOCK(backing_object);
			break;
		}
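
		/*
		 * At this point neither object has paging in progress, and
		 * both are anonymous (default or swap) objects without a
		 * handle and not already dead, so a full collapse or a
		 * bypass may be attempted.
		 */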

		/*
		 * We know that we can either collapse the backing object (if
		 * the parent is the only reference to it) or (perhaps) have
		 * the parent bypass the object if the parent happens to shadow
		 * all the resident pages in the entire backing object.
		 *
		 * This is ignoring pager-backed pages such as swap pages.
		 * vm_object_scan_all_shadowed() fails the shadowing test in
		 * this case.
		 */
		if (backing_object->ref_count == 1) {
			vm_object_pip_add(object, 1);
			vm_object_pip_add(backing_object, 1);

			/*
			 * If there is exactly one reference to the backing
			 * object, we can collapse it into the parent.
			 */
			vm_object_collapse_scan(object, OBSC_COLLAPSE_WAIT);

#if VM_NRESERVLEVEL > 0
			/*
			 * Break any reservations from backing_object.
			 */
			if (__predict_false(!LIST_EMPTY(&backing_object->rvq)))
				vm_reserv_break_all(backing_object);
#endif

			/*
			 * Move the pager from backing_object to object.
			 */
			if (backing_object->type == OBJT_SWAP) {
				/*
				 * swap_pager_copy() can sleep, in which case
				 * the backing_object's and object's locks are
				 * released and reacquired.
				 *
				 * Since swap_pager_copy() is being asked to
				 * destroy the source, it will change the
				 * backing_object's type to OBJT_DEFAULT.
				 */
				swap_pager_copy(backing_object, object,
				    OFF_TO_IDX(object->backing_object_offset),
				    TRUE);
			}
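
			/*
			 * Note on the TRUE argument above (an assumption
			 * about swap_pager_copy()'s contract): the source
			 * metadata is destroyed as it is copied, so swap
			 * blocks already shadowed by the parent are released
			 * rather than transferred.
			 */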

			/*
			 * Object now shadows whatever backing_object did.
			 * Note that the reference to
			 * backing_object->backing_object moves from within
			 * backing_object to within object.
			 */
			LIST_REMOVE(object, shadow_list);
			backing_object->shadow_count--;
			if (backing_object->backing_object) {
				VM_OBJECT_WLOCK(backing_object->backing_object);
				LIST_REMOVE(backing_object, shadow_list);
				LIST_INSERT_HEAD(
				    &backing_object->backing_object->shadow_head,
				    object, shadow_list);
				/*
				 * The shadow_count has not changed.
				 */
				VM_OBJECT_WUNLOCK(backing_object->backing_object);
			}
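
			/*
			 * Sketch of the remaining handoff (the code continues
			 * past this excerpt; this is an illustration, not a
			 * verbatim quote):
			 *
			 *	object->backing_object_offset +=
			 *	    backing_object->backing_object_offset;
			 *	object->backing_object =
			 *	    backing_object->backing_object;
			 *
			 * after which backing_object can be torn down and the
			 * loop retried against the new backing object.
			 */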
|
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!
Much needed overhaul of the VM system. Included in this first round of
changes:
1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
haspage, and sync operations are supported. The haspage interface now
provides information about clusterability. All pager routines now take
struct vm_object's instead of "pagers".
2) Improved data structures. In the previous paradigm, there is constant
confusion caused by pagers being both a data structure ("allocate a
pager") and a collection of routines. The idea of a pager structure has
escentially been eliminated. Objects now have types, and this type is
used to index the appropriate pager. In most cases, items in the pager
structure were duplicated in the object data structure and thus were
unnecessary. In the few cases that remained, an un_pager structure union
was created in the object to contain these items. (A sketch of such a
type-indexed pager operations table follows this message.)
3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
be removed. For instance, vm_object_enter(), vm_object_lookup(),
vm_object_remove(), and the associated object hash list were some of the
things that were removed.
4) simple_locks removed. Discussion with several people reveals that the
SMP locking primitives used in the VM system aren't likely the mechanism
that we'll be adopting. Even if it were, the locking that was in the code
was very inadequate and would have to be mostly re-done anyway. The
locking in a uni-processor kernel was a no-op but went a long way toward
making the code difficult to read and debug.
5) Places that attempted to kludge-up the fact that we don't have kernel
thread support have been fixed to reflect the reality that we are really
dealing with processes, not threads. The VM system didn't have complete
thread support, so the comments and mis-named routines were just wrong.
We now use tsleep and wakeup directly in the lock routines, for instance.
6) Where appropriate, the pagers have been improved, especially in the
pager_alloc routines. Most of the pager_allocs have been rewritten and
are now faster and easier to maintain.
7) The pagedaemon pageout clustering algorithm has been rewritten and
now tries harder to output an even number of pages before and after
the requested page. This is sort of the reverse of the ideal pagein
algorithm and should provide better overall performance.
8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
have been removed. Some other unnecessary casts have also been removed.
9) Some almost useless debugging code removed.
10) Terminology of shadow objects vs. backing objects straightened out.
The fact that the vm_object data structure essentially had this
backwards really confused things. The use of "shadow" and "backing
object" throughout the code is now internally consistent and correct
in the Mach terminology.
11) Several minor bug fixes, including one in the vm daemon that caused
0 RSS objects to not get purged as intended.
12) A "default pager" has now been created which cleans up the transition
of objects to the "swap" type. The previous checks throughout the code
for swp->pg_data != NULL were really ugly. This change also provides
the rudiments for future backing of "anonymous" memory by something
other than the swap pager (via the vnode pager, for example), and it
allows the decision about which of these pagers to use to be made
dynamically (although it will need some additional decision code to do
this, of course).
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
object" code has been removed. MAP_COPY was undocumented and non-
standard. It was furthermore broken in several ways which caused its
behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
continue to work correctly, but via the slightly different semantics
of MAP_PRIVATE.
14) (dyson) Sharing maps have been removed. Its marginal usefulness in a
threads design can be worked around in other ways. Both #12 and #13
were done to simplify the code and improve readability and maintain-
ability. (As were almost all of these changes.)
TODO:
1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
this will reduce the vnode pager to a mere fraction of its current size.
2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
information provided by the new haspage pager interface. This will
substantially reduce the overhead by eliminating a large number of
VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
improved to provide both a "behind" and "ahead" indication of
contiguousness.
3) Implement the extended features of pager_haspage in swap_pager_haspage().
It currently just says 0 pages ahead/behind.
4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
via a much more general mechanism that could also be used for disk
striping of regular filesystems.
5) Do something to improve the architecture of vm_object_collapse(). The
fact that it makes calls into the swap pager and knows too much about
how the swap pager operates really bothers me. It also doesn't allow
for collapsing of non-swap pager objects ("unnamed" objects backed by
other pagers).
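To make items 1 and 2 above concrete, a type-indexed pager operations table
along these lines might look like the following sketch (the member names and
signatures here are illustrative, not the exact definitions that were
committed):

	struct pagerops {
		void		(*pgo_init)(void);	/* one-time pager setup */
		vm_object_t	(*pgo_alloc)(void *handle, vm_ooffset_t size,
				    vm_prot_t prot, vm_ooffset_t offset);
		void		(*pgo_dealloc)(vm_object_t object);
		int		(*pgo_getpages)(vm_object_t object, vm_page_t *m,
				    int count);
		void		(*pgo_putpages)(vm_object_t object, vm_page_t *m,
				    int count, int sync, int *rtvals);
		boolean_t	(*pgo_haspage)(vm_object_t object, vm_pindex_t pindex,
				    int *before, int *after);	/* clusterability */
	};

	/* The object's type indexes the table; objects no longer carry pagers. */
	extern struct pagerops *pagertab[];
	#define	PAGER_OPS(object)	(pagertab[(object)->type])

Here PAGER_OPS() is a hypothetical accessor used only for illustration.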
1995-07-13 08:48:48 +00:00
|
|
|
object->backing_object = backing_object->backing_object;
|
1999-02-08 19:00:15 +00:00
|
|
|
object->backing_object_offset +=
|
|
|
|
backing_object->backing_object_offset;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it; it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size; now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of the old vfs_bio cruft that supported the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Discard backing_object.
|
1995-05-30 08:16:23 +00:00
|
|
|
*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Since the backing object has no pages, no pager left,
|
|
|
|
* and no object references within it, all that is
|
|
|
|
* necessary is to dispose of it.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2009-06-28 08:42:17 +00:00
|
|
|
KASSERT(backing_object->ref_count == 1, (
|
|
|
|
"backing_object %p was somehow re-referenced during collapse!",
|
|
|
|
backing_object));
|
2016-05-26 16:59:29 +00:00
|
|
|
vm_object_pip_wakeup(backing_object);
|
2015-05-08 19:43:37 +00:00
|
|
|
backing_object->type = OBJT_DEAD;
|
|
|
|
backing_object->ref_count = 0;
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(backing_object);
|
2009-06-28 08:42:17 +00:00
|
|
|
vm_object_destroy(backing_object);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2016-05-26 16:59:29 +00:00
|
|
|
vm_object_pip_wakeup(object);
|
2017-12-25 19:36:04 +00:00
|
|
|
counter_u64_add(object_collapses, 1);
|
1995-01-09 16:06:02 +00:00
|
|
|
} else {
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1999-02-08 19:00:15 +00:00
|
|
|
* If we do not entirely shadow the backing object,
|
|
|
|
* there is nothing we can do, so we give up.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2006-01-25 08:42:58 +00:00
|
|
|
if (object->resident_page_count != object->size &&
|
2015-12-03 17:21:10 +00:00
|
|
|
!vm_object_scan_all_shadowed(object)) {
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(backing_object);
|
1999-02-08 19:00:15 +00:00
|
|
|
break;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Make the parent shadow the next object in the
|
|
|
|
* chain. Deallocating backing_object will not remove
|
|
|
|
* it, since its reference count is at least 2.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2003-05-18 04:10:16 +00:00
|
|
|
LIST_REMOVE(object, shadow_list);
|
1998-01-31 11:56:53 +00:00
|
|
|
backing_object->shadow_count--;
|
Make our v_usecount vnode reference count work identically to the
original BSD code. The association between the vnode and the vm_object
no longer includes reference counts. The major difference is that
vm_object's are no longer freed gratuitously from the vnode, and so
once an object is created for the vnode, it will last as long as the
vnode does.
When a vnode object reference count is incremented, then the underlying
vnode reference count is incremented also. The two "objects" are now
more intimately related, and so the interactions are now much less
complex.
Vnodes are now normally placed onto the free queue with an object still
attached. The rundown of the object happens at vnode rundown time, and
happens with exactly the same filesystem semantics of the original VFS
code. There is absolutely no need for vnode_pager_uncache and other
travesties like that anymore.
A side-effect of these changes is that SMP locking should be much simpler,
the I/O copyin/copyout optimizations work, NFS should be more ponderable,
and further work on layered filesystems should be less frustrating, because
of the totally coherent management of the vnode objects and vnodes.
Please be careful with your system while running this code, but I would
greatly appreciate feedback as soon as reasonably possible.
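A minimal sketch of the coupled reference counting described above, assuming
a vnode-backed object whose handle is the vnode (illustrative only, not the
committed code):

	static void
	object_reference_sketch(vm_object_t object)
	{

		object->ref_count++;
		/* Referencing a vnode-backed object also references its vnode. */
		if (object->type == OBJT_VNODE)
			vref((struct vnode *)object->handle);
	}

The symmetric release would vrele() the vnode when an object reference is
dropped, so the object persists exactly as long as the vnode does.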
1998-01-06 05:26:17 +00:00
|
|
|
|
|
|
|
new_backing_object = backing_object->backing_object;
|
1999-01-28 00:57:57 +00:00
|
|
|
if ((object->backing_object = new_backing_object) != NULL) {
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WLOCK(new_backing_object);
|
2003-05-18 04:10:16 +00:00
|
|
|
LIST_INSERT_HEAD(
|
1999-02-08 19:00:15 +00:00
|
|
|
&new_backing_object->shadow_head,
|
|
|
|
object,
|
|
|
|
shadow_list
|
|
|
|
);
|
1998-01-31 11:56:53 +00:00
|
|
|
new_backing_object->shadow_count++;
|
2003-11-02 21:30:10 +00:00
|
|
|
vm_object_reference_locked(new_backing_object);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(new_backing_object);
|
1998-01-06 05:26:17 +00:00
|
|
|
object->backing_object_offset +=
|
|
|
|
backing_object->backing_object_offset;
|
1996-03-02 02:54:24 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1995-01-09 16:06:02 +00:00
|
|
|
/*
|
|
|
|
* Drop the reference count on backing_object. Since
|
2003-11-01 23:06:41 +00:00
|
|
|
* its ref_count was at least 2, it will not vanish.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2003-11-01 23:06:41 +00:00
|
|
|
backing_object->ref_count--;
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(backing_object);
|
2017-12-25 19:36:04 +00:00
|
|
|
counter_u64_add(object_bypasses, 1);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Try again with this object's new backing object.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2003-05-03 08:09:24 +00:00
|
|
|
* vm_object_page_remove:
|
1994-05-24 10:09:53 +00:00
|
|
|
*
|
2008-02-26 17:16:48 +00:00
|
|
|
* For the given object, either frees or invalidates each of the
|
2011-06-29 16:40:41 +00:00
|
|
|
* specified pages. In general, a page is freed. However, if a page is
|
|
|
|
* wired for any reason other than the existence of a managed, wired
|
|
|
|
* mapping, then it may be invalidated but not removed from the object.
|
|
|
|
* Pages are specified by the given range ["start", "end") and the option
|
|
|
|
* OBJPR_CLEANONLY. As a special case, if "end" is zero, then the range
|
|
|
|
* extends from "start" to the end of the object. If the option
|
|
|
|
* OBJPR_CLEANONLY is specified, then only the non-dirty pages within the
|
|
|
|
* specified range are affected. If the option OBJPR_NOTMAPPED is
|
|
|
|
* specified, then the pages within the specified range must have no
|
|
|
|
* mappings. Otherwise, if this option is not specified, any mappings to
|
|
|
|
* the specified pages are removed before the pages are freed or
|
|
|
|
* invalidated.
|
2008-02-26 17:16:48 +00:00
|
|
|
*
|
2011-06-29 16:40:41 +00:00
|
|
|
* In general, this operation should only be performed on objects that
|
|
|
|
* contain managed pages. There are, however, two exceptions. First, it
|
|
|
|
* is performed on the kernel and kmem objects by vm_map_entry_delete().
|
|
|
|
* Second, it is used by msync(..., MS_INVALIDATE) to invalidate device-
|
|
|
|
* backed pages. In both of these cases, the option OBJPR_CLEANONLY must
|
|
|
|
* not be specified and the option OBJPR_NOTMAPPED must be specified.
|
1994-05-24 10:09:53 +00:00
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
1994-05-25 09:21:21 +00:00
|
|
|
void
|
2003-04-26 23:41:30 +00:00
|
|
|
vm_object_page_remove(vm_object_t object, vm_pindex_t start, vm_pindex_t end,
|
2011-06-29 16:40:41 +00:00
|
|
|
int options)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1999-02-08 05:15:54 +00:00
|
|
|
vm_page_t p, next;
|
2017-09-09 17:35:19 +00:00
|
|
|
struct mtx *mtx;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
In the past four years, we've added two new vm object types. Each time,
similar changes had to be made in various places throughout the machine-
independent virtual memory layer to support the new vm object type.
However, in most of these places, it's actually not the type of the vm
object that matters to us but instead certain attributes of its pages.
For example, OBJT_DEVICE, OBJT_MGTDEVICE, and OBJT_SG objects contain
fictitious pages. In other words, in most of these places, we were
testing the vm object's type to determine if it contained fictitious (or
unmanaged) pages.
To both simplify the code in these places and make the addition of future
vm object types easier, this change introduces two new vm object flags
that describe attributes of the vm object's pages, specifically, whether
they are fictitious or unmanaged.
Reviewed and tested by: kib
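As an illustration of the flag-based test this change enables, code that
previously enumerated the object types carrying fictitious pages can now
check a page-attribute flag directly (a sketch):

	/* Before: enumerate every type whose pages are fictitious. */
	if (object->type == OBJT_DEVICE || object->type == OBJT_SG ||
	    object->type == OBJT_MGTDEVICE)
		return;

	/* After: test the attribute the code actually cares about. */
	if ((object->flags & OBJ_FICTITIOUS) != 0)
		return;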
2012-12-09 00:32:38 +00:00
|
|
|
KASSERT((object->flags & OBJ_UNMANAGED) == 0 ||
|
2011-06-29 16:40:41 +00:00
|
|
|
(options & (OBJPR_CLEANONLY | OBJPR_NOTMAPPED)) == OBJPR_NOTMAPPED,
|
|
|
|
("vm_object_page_remove: illegal options for object %p", object));
|
2003-04-26 23:41:30 +00:00
|
|
|
if (object->resident_page_count == 0)
|
2016-11-15 18:22:50 +00:00
|
|
|
return;
|
1998-08-06 08:33:19 +00:00
|
|
|
vm_object_pip_add(object, 1);
|
1994-05-25 09:21:21 +00:00
|
|
|
again:
|
2010-07-04 11:13:33 +00:00
|
|
|
p = vm_page_find_least(object, start);
|
2017-09-09 17:35:19 +00:00
|
|
|
mtx = NULL;
|
2010-04-30 00:46:43 +00:00
|
|
|
|
2003-01-27 01:12:35 +00:00
|
|
|
/*
|
2011-06-29 16:40:41 +00:00
|
|
|
* Here, the variable "p" is either (1) the page with the least pindex
|
|
|
|
* greater than or equal to the parameter "start" or (2) NULL.
|
2003-01-27 01:12:35 +00:00
|
|
|
*/
|
2011-06-29 16:40:41 +00:00
|
|
|
for (; p != NULL && (p->pindex < end || end == 0); p = next) {
|
2003-01-27 01:12:35 +00:00
|
|
|
next = TAILQ_NEXT(p, listq);
|
|
|
|
|
Prevent the leakage of wired pages in the following circumstances:
First, a file is mmap(2)ed and then mlock(2)ed. Later, it is truncated.
Under "normal" circumstances, i.e., when the file is not mlock(2)ed, the
pages beyond the EOF are unmapped and freed. However, when the file is
mlock(2)ed, the pages beyond the EOF are unmapped but not freed because
they have a non-zero wire count. This can be a mistake. Specifically,
it is a mistake if the sole reason why the pages are wired is because of
wired, managed mappings. Previously, unmapping the pages destroyed these
wired, managed mappings but did not reduce the pages' wire count.
Consequently, when the file is unmapped, the pages are not unwired
because the wired mapping has been destroyed. Moreover, when the vm
object is finally destroyed, the pages are leaked because they are still
wired. The fix is to reduce the pages' wire count by the number of
wired, managed mappings destroyed. To do this, I introduce a new pmap
function pmap_page_wired_mappings() that returns the number of managed
mappings to the given physical page that are wired, and I use this
function in vm_object_page_remove().
Reviewed by: tegge
MFC after: 6 weeks
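A sketch of the accounting the fix performs when such a page is unmapped
(illustrative; the committed vm_object_page_remove() differs in detail):

	int wirings;

	/* Count the wired, managed mappings before destroying them. */
	wirings = pmap_page_wired_mappings(p);
	pmap_remove_all(p);
	/* If every wiring came from those mappings, fully unwire the page. */
	if (p->wire_count == wirings)
		p->wire_count = 0;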
2007-11-17 22:52:29 +00:00
|
|
|
/*
|
2011-06-29 16:40:41 +00:00
|
|
|
* If the page is wired for any reason besides the existence
|
|
|
|
* of managed, wired mappings, then it cannot be freed. For
|
|
|
|
* example, fictitious pages, which represent device memory,
|
|
|
|
* are inherently wired and cannot be freed. They can,
|
|
|
|
* however, be invalidated if the option OBJPR_CLEANONLY is
|
|
|
|
* not specified.
|
2007-11-17 22:52:29 +00:00
|
|
|
*/
|
2017-09-09 17:35:19 +00:00
|
|
|
vm_page_change_lock(p, &mtx);
|
2013-09-08 17:51:22 +00:00
|
|
|
if (vm_page_xbusied(p)) {
|
|
|
|
VM_OBJECT_WUNLOCK(object);
|
Fix a race in vm_page_busy_sleep(9).
Suppose that we have an exclusively busy page, and a thread which can
accept a shared-busy page. In this case, typical code waiting for the
page xbusy state to pass is
again:
VM_OBJECT_WLOCK(object);
...
if (vm_page_xbusied(m)) {
vm_page_lock(m);
VM_OBJECT_WUNLOCK(object); <---1
vm_page_busy_sleep(p, "vmopax");
goto again;
}
Suppose that the xbusy state owner locked the object, unbusied the
page and unlocked the object after we are at the line [1], but before we
executed the load of the busy_lock word in vm_page_busy_sleep(). If it
happens that there are still no waiters recorded for the busy state,
the xbusy owner did not acquire the page lock, so it proceeded.
Moreover, suppose that some other thread happens to share-busy the page
after xbusy state was relinquished but before the m->busy_lock is read
in vm_page_busy_sleep(). Again, that thread only needs vm_object lock
to proceed. Then, vm_page_busy_sleep() reads busy_lock value equal to
the VPB_SHARERS_WORD(1).
In this case, all tests in vm_page_busy_sleep(9) pass and we are going
to sleep, despite the page being share-busied.
Update check for m->busy_lock == VPB_UNBUSIED in vm_page_busy_sleep(9)
to also accept shared-busy state if we only wait for the xbusy state to
pass.
Merge sequential if()s with the same 'then' clause in
vm_page_busy_sleep().
Note that the current code does not share-busy pages from parallel
threads; the only way to have more than one sbusy owner right now
is to recurse.
Reported and tested by: pho (previous version)
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D8196
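With the fix, the caller passes a new boolean argument telling
vm_page_busy_sleep() that it only waits for the exclusive-busy state to
pass, so a concurrent share-busy no longer causes a bogus sleep. A sketch
of the updated call site:

again:
	VM_OBJECT_WLOCK(object);
	...
	if (vm_page_xbusied(m)) {
		vm_page_lock(m);
		VM_OBJECT_WUNLOCK(object);
		/* true: sleep only while the page remains xbusy */
		vm_page_busy_sleep(m, "vmopax", true);
		goto again;
	}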
2016-10-13 14:41:05 +00:00
|
|
|
vm_page_busy_sleep(p, "vmopax", true);
|
2013-09-08 17:51:22 +00:00
|
|
|
VM_OBJECT_WLOCK(object);
|
|
|
|
goto again;
|
|
|
|
}
|
Revert r173708's modifications to vm_object_page_remove().
Assume that a vnode is mapped shared and mlocked(), and then the vnode
is truncated, or truncated and then again extended past the mapping
point EOF. Truncation removes the pages past the truncation point,
and if pages are later created in this range, they are not properly
mapped into the mlocked region, and their wiring count is wrong.
The revert leaves the invalidated but wired pages on the object queue,
which means that the pages are found by vm_object_unwire() when the
mapped range is munlock()ed, and reused by the buffer cache when the
vnode is extended again.
The changes in r173708 were required because, at the time, vm_map_unwire()
looked at the page tables to find the page to unwire. This is no longer
needed with the vm_object_unwire() introduction, which follows the
objects shadow chain.
Also eliminate the OBJPR_NOTWIRED flag for vm_object_page_remove(), which
is now redundant; we do not remove wired pages.
Reported by: trasz, Dmitry Sivachenko <trtrmitya@gmail.com>
Suggested and reviewed by: alc
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
2015-07-25 18:29:06 +00:00
|
|
|
if (p->wire_count != 0) {
|
2017-09-28 17:55:41 +00:00
|
|
|
if ((options & OBJPR_NOTMAPPED) == 0 &&
|
|
|
|
object->ref_count != 0)
|
2008-02-26 17:16:48 +00:00
|
|
|
pmap_remove_all(p);
|
2011-06-29 16:40:41 +00:00
|
|
|
if ((options & OBJPR_CLEANONLY) == 0) {
|
2003-01-27 01:12:35 +00:00
|
|
|
p->valid = 0;
|
2009-05-28 07:26:36 +00:00
|
|
|
vm_page_undirty(p);
|
|
|
|
}
|
2017-09-09 17:35:19 +00:00
|
|
|
continue;
|
2003-01-27 01:12:35 +00:00
|
|
|
}
|
2013-08-09 11:11:11 +00:00
|
|
|
if (vm_page_busied(p)) {
|
|
|
|
VM_OBJECT_WUNLOCK(object);
|
2016-10-13 14:41:05 +00:00
|
|
|
vm_page_busy_sleep(p, "vmopar", false);
|
2013-08-09 11:11:11 +00:00
|
|
|
VM_OBJECT_WLOCK(object);
|
2003-01-27 01:12:35 +00:00
|
|
|
goto again;
|
2013-08-09 11:11:11 +00:00
|
|
|
}
|
2008-02-26 17:16:48 +00:00
|
|
|
KASSERT((p->flags & PG_FICTITIOUS) == 0,
|
|
|
|
("vm_object_page_remove: page %p is fictitious", p));
|
2011-06-29 16:40:41 +00:00
|
|
|
if ((options & OBJPR_CLEANONLY) != 0 && p->valid != 0) {
|
2017-09-28 17:55:41 +00:00
|
|
|
if ((options & OBJPR_NOTMAPPED) == 0 &&
|
|
|
|
object->ref_count != 0)
|
2011-06-29 16:40:41 +00:00
|
|
|
pmap_remove_write(p);
|
2017-09-28 17:55:41 +00:00
|
|
|
if (p->dirty != 0)
|
2017-09-09 17:35:19 +00:00
|
|
|
continue;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2017-09-28 17:55:41 +00:00
|
|
|
if ((options & OBJPR_NOTMAPPED) == 0 && object->ref_count != 0)
|
2011-06-29 16:40:41 +00:00
|
|
|
pmap_remove_all(p);
|
2018-04-24 21:15:54 +00:00
|
|
|
vm_page_free(p);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2017-09-09 17:35:19 +00:00
|
|
|
if (mtx != NULL)
|
|
|
|
mtx_unlock(mtx);
|
1995-03-01 23:30:04 +00:00
|
|
|
vm_object_pip_wakeup(object);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
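For example, a caller truncating a vnode-backed object could free every page
from a new end-of-file pindex onward as follows (a sketch; "nobjsize" is a
hypothetical new object size in pages):

	VM_OBJECT_WLOCK(object);
	/* end == 0: the range extends to the end of the object. */
	vm_object_page_remove(object, nobjsize, 0, 0);
	VM_OBJECT_WUNLOCK(object);

Passing 0 for the options removes any mappings and frees the pages; adding
OBJPR_CLEANONLY would instead preserve the dirty pages in the range.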
|
|
|
|
|
2011-11-04 04:02:50 +00:00
|
|
|
/*
|
2015-09-30 23:06:29 +00:00
|
|
|
* vm_object_page_noreuse:
|
2011-11-04 04:02:50 +00:00
|
|
|
*
|
2015-09-30 23:06:29 +00:00
|
|
|
* For the given object, attempt to move the specified pages to
|
|
|
|
* the head of the inactive queue. This bypasses regular LRU
|
|
|
|
* operation and allows the pages to be reused quickly under memory
|
|
|
|
* pressure. If a page is wired for any reason, then it will not
|
|
|
|
* be queued. Pages are specified by the range ["start", "end").
|
|
|
|
* As a special case, if "end" is zero, then the range extends from
|
|
|
|
* "start" to the end of the object.
|
2011-11-04 04:02:50 +00:00
|
|
|
*
|
|
|
|
* This operation should only be performed on objects that
|
2012-12-09 00:32:38 +00:00
|
|
|
* contain non-fictitious, managed pages.
|
2011-11-04 04:02:50 +00:00
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
|
|
|
void
|
2015-09-30 23:06:29 +00:00
|
|
|
vm_object_page_noreuse(vm_object_t object, vm_pindex_t start, vm_pindex_t end)
|
2011-11-04 04:02:50 +00:00
|
|
|
{
|
2017-09-09 17:35:19 +00:00
|
|
|
struct mtx *mtx;
|
2011-11-04 04:02:50 +00:00
|
|
|
vm_page_t p, next;
|
|
|
|
|
2017-03-15 17:43:45 +00:00
|
|
|
VM_OBJECT_ASSERT_LOCKED(object);
|
2012-12-09 00:32:38 +00:00
|
|
|
KASSERT((object->flags & (OBJ_FICTITIOUS | OBJ_UNMANAGED)) == 0,
|
2015-09-30 23:06:29 +00:00
|
|
|
("vm_object_page_noreuse: illegal object %p", object));
|
2011-11-04 04:02:50 +00:00
|
|
|
if (object->resident_page_count == 0)
|
|
|
|
return;
|
|
|
|
p = vm_page_find_least(object, start);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Here, the variable "p" is either (1) the page with the least pindex
|
|
|
|
* greater than or equal to the parameter "start" or (2) NULL.
|
|
|
|
*/
|
|
|
|
mtx = NULL;
|
|
|
|
for (; p != NULL && (p->pindex < end || end == 0); p = next) {
|
|
|
|
next = TAILQ_NEXT(p, listq);
|
2017-09-09 17:35:19 +00:00
|
|
|
vm_page_change_lock(p, &mtx);
|
2015-09-30 23:06:29 +00:00
|
|
|
vm_page_deactivate_noreuse(p);
|
2011-11-04 04:02:50 +00:00
|
|
|
}
|
|
|
|
if (mtx != NULL)
|
|
|
|
mtx_unlock(mtx);
|
|
|
|
}
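A typical use is to hint that pages just consumed by one-shot sequential I/O
will not be needed again, for example (a sketch; "start" and "end" are
hypothetical pindex bounds):

	VM_OBJECT_RLOCK(object);
	/* Move the range to the head of the inactive queue. */
	vm_object_page_noreuse(object, start, end);
	VM_OBJECT_RUNLOCK(object);

A read lock suffices here because the function only asserts that the object
is locked.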
|
|
|
|
|
Long, long ago in r27464 special case code for mapping device-backed
memory with 4MB pages was added to pmap_object_init_pt(). This code
assumes that the pages of an OBJT_DEVICE object are always physically
contiguous. Unfortunately, this is not always the case. For example,
jhb@ informs me that the recently introduced /dev/ksyms driver creates
an OBJT_DEVICE object that violates this assumption. Thus, this
revision modifies pmap_object_init_pt() to abort the mapping if the
OBJT_DEVICE object's pages are not physically contiguous. This
revision also changes some inconsistent if not buggy behavior. For
example, the i386 version aborts if the first 4MB virtual page that
would be mapped is already valid. However, it incorrectly replaces
any subsequent 4MB virtual page mappings that it encounters,
potentially leaking a page table page. The amd64 version has a bug of
my own creation. It potentially busies the wrong page, and always an
insufficient number of pages, if it blocks allocating a page table page.
To my knowledge, there have been no reports of these bugs, hence,
their persistence. I suspect that the existing restrictions that
pmap_object_init_pt() placed on the OBJT_DEVICE objects that it would
choose to map, for example, that the first page must be aligned on a 2
or 4MB physical boundary and that the size of the mapping must be a
multiple of the large page size, were enough to avoid triggering the
bug for drivers like ksyms. However, one side effect of testing the
OBJT_DEVICE object's pages for physical contiguity is that a dubious
difference between pmap_object_init_pt() and the standard path for
mapping device pages, i.e., vm_fault(), has been eliminated.
Previously, pmap_object_init_pt() would only instantiate the first
PG_FICTITIOUS page being mapped because it never examined the rest.
Now, however, pmap_object_init_pt() uses the new function
vm_object_populate() to instantiate them all (in order to support
testing their physical contiguity). These pages need to be
instantiated for the mechanism that I have prototyped for
automatically maintaining the consistency of the PAT settings across
multiple mappings, particularly, amd64's direct mapping, to work.
(Translation: This change is also being made to support jhb@'s work on
the Nvidia feature requests.)
Discussed with: jhb@
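The contiguity test described above amounts to a walk of the object's
resident pages that aborts when two neighbors are not physically adjacent.
A sketch (illustrative; the committed pmap code differs in detail):

	vm_page_t p, prev;

	prev = NULL;
	TAILQ_FOREACH(p, &object->memq, listq) {
		if (prev != NULL &&
		    VM_PAGE_TO_PHYS(p) != VM_PAGE_TO_PHYS(prev) + PAGE_SIZE)
			return (FALSE);	/* not physically contiguous */
		prev = p;
	}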
2009-06-14 19:51:43 +00:00
|
|
|
/*
|
|
|
|
* Populate the specified range of the object with valid pages. Returns
|
|
|
|
* TRUE if the range is successfully populated and FALSE otherwise.
|
|
|
|
*
|
|
|
|
* Note: This function should be optimized to pass a larger array of
|
|
|
|
* pages to vm_pager_get_pages() before it is applied to a non-
|
|
|
|
* OBJT_DEVICE object.
|
|
|
|
*
|
|
|
|
* The object must be locked.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
vm_object_populate(vm_object_t object, vm_pindex_t start, vm_pindex_t end)
|
|
|
|
{
|
2015-06-12 11:32:20 +00:00
|
|
|
vm_page_t m;
|
2009-06-14 19:51:43 +00:00
|
|
|
vm_pindex_t pindex;
|
|
|
|
int rv;
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2009-06-14 19:51:43 +00:00
|
|
|
for (pindex = start; pindex < end; pindex++) {
|
2013-08-22 07:39:53 +00:00
|
|
|
m = vm_page_grab(object, pindex, VM_ALLOC_NORMAL);
|
2009-06-14 19:51:43 +00:00
|
|
|
if (m->valid != VM_PAGE_BITS_ALL) {
|
A change to KPI of vm_pager_get_pages() and underlying VOP_GETPAGES().
o With the new KPI, consumers can request contiguous ranges of pages, and
unlike before, all pages will be kept busied on return, like it was
done before with the 'reqpage' only. Now the reqpage goes away. With
the new interface it is easier to implement code protected from race
conditions.
Such arrayed requests for now should be preceded by a call to
vm_pager_haspage() to make sure that request is possible. This
could be improved later, making vm_pager_haspage() obsolete.
Strengthening the promises on the busy state of the array of pages
allows us to remove such hacks as swp_pager_free_nrpage() and
vm_pager_free_nonreq().
o New KPI accepts two integer pointers that may optionally point at
values for read ahead and read behind, that a pager may do, if it
can. These pages are completely owned by the pager, and not controlled
by the caller.
This shifts the UFS-specific readahead logic from vm_fault.c, which
should be file system agnostic, into vnode_pager.c. It also removes
one VOP_BMAP() request per hard fault.
Discussed with: kib, alc, jeff, scottl
Sponsored by: Nginx, Inc.
Sponsored by: Netflix
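Under the new KPI, a caller that wants clustering first asks the pager what
is possible and then lets the pager own whatever read-behind and read-ahead
it performs. A hedged sketch of such a call site (the exact interplay of
vm_pager_has_page() and the rbehind/rahead arguments varies by pager):

	int rahead, rbehind, rv;

	/* Ask the pager whether the request and any clustering is possible. */
	if (!vm_pager_has_page(object, m->pindex, &rbehind, &rahead))
		return (VM_PAGER_FAIL);
	/* The pager updates rbehind/rahead with what it actually read. */
	rv = vm_pager_get_pages(object, &m, 1, &rbehind, &rahead);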
2015-12-16 21:30:45 +00:00
|
|
|
rv = vm_pager_get_pages(object, &m, 1, NULL, NULL);
|
2009-06-14 19:51:43 +00:00
|
|
|
if (rv != VM_PAGER_OK) {
|
2010-04-30 00:46:43 +00:00
|
|
|
vm_page_lock(m);
|
2009-06-14 19:51:43 +00:00
|
|
|
vm_page_free(m);
|
2010-04-30 00:46:43 +00:00
|
|
|
vm_page_unlock(m);
|
2009-06-14 19:51:43 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* Keep "m" busy because a subsequent iteration may unlock
|
|
|
|
* the object.
|
|
|
|
*/
|
|
|
|
}
|
|
|
|
if (pindex > start) {
|
|
|
|
m = vm_page_lookup(object, start);
|
|
|
|
while (m != NULL && m->pindex < pindex) {
|
2013-08-09 11:11:11 +00:00
|
|
|
vm_page_xunbusy(m);
|
2009-06-14 19:51:43 +00:00
|
|
|
m = TAILQ_NEXT(m, listq);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return (pindex == end);
|
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Routine: vm_object_coalesce
|
|
|
|
* Function: Coalesces two objects backing up adjoining
|
|
|
|
* regions of memory into a single object.
|
|
|
|
*
|
|
|
|
* returns TRUE if objects were combined.
|
|
|
|
*
|
|
|
|
* NOTE: Only works at the moment if the second object is NULL -
|
|
|
|
* if it's not, which object do we lock first?
|
|
|
|
*
|
|
|
|
* Parameters:
|
|
|
|
* prev_object First object to coalesce
|
|
|
|
* prev_offset Offset into prev_object
|
|
|
|
* prev_size Size of reference to prev_object
|
2004-07-25 07:48:47 +00:00
|
|
|
* next_size Size of reference to the second object
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either the map
entry or the vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding a struct ucred *,
which is used to charge the appropriate uid when the allocation is performed
by the kernel, e.g. md(4).
Several syscalls, among them fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
* reserved Indicator that extension region has
|
|
|
|
* swap accounted for
|
1994-05-24 10:09:53 +00:00
|
|
|
*
|
|
|
|
* Conditions:
|
|
|
|
* The object must *not* be locked.
|
|
|
|
*/
|
1995-05-30 08:16:23 +00:00
|
|
|
boolean_t
|
2004-07-25 07:48:47 +00:00
|
|
|
vm_object_coalesce(vm_object_t prev_object, vm_ooffset_t prev_offset,
|
2009-06-23 20:45:22 +00:00
|
|
|
vm_size_t prev_size, vm_size_t next_size, boolean_t reserved)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1999-05-16 05:07:34 +00:00
|
|
|
vm_pindex_t next_pindex;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2002-06-19 06:02:03 +00:00
|
|
|
if (prev_object == NULL)
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we don't have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (TRUE);
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WLOCK(prev_object);
|
2013-11-05 06:18:50 +00:00
|
|
|
if ((prev_object->type != OBJT_DEFAULT &&
|
|
|
|
prev_object->type != OBJT_SWAP) ||
|
The OBJ_TMPFS flag of vm_object means that there is unreclaimed tmpfs
vnode for the tmpfs node owning this object. The flag is currently
used for two purposes. First, it allows correct handling of VV_TEXT
for tmpfs vnode when the ref count on the object is decremented to 1,
similar to vnode_pager_dealloc() for regular filesystems. Second, it
prevents some operations, which are done on OBJT_SWAP vm objects
backing user anonymous memory, but are incorrect for the object owned
by tmpfs node.
The second kind of use of the OBJ_TMPFS flag is incorrect, since the
vnode might be reclaimed, which clears the flag, but vm object
operations must still be disallowed.
Introduce one more flag, OBJ_TMPFS_NODE, which is permanently set on
the object for VREG tmpfs node, and used instead of OBJ_TMPFS to test
whether vm object collapse and similar actions should be disabled.
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
2014-07-14 09:30:37 +00:00
|
|
|
(prev_object->flags & OBJ_TMPFS_NODE) != 0) {
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(prev_object);
|
1996-03-28 04:53:28 +00:00
|
|
|
return (FALSE);
|
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Try to collapse the object first
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
vm_object_collapse(prev_object);
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Can't coalesce if: more than one reference, paged out, shadows
|
|
|
|
* another object, or has a copy elsewhere (any of which mean that the
|
|
|
|
* pages not mapped to prev_entry may be in use anyway)
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1996-12-31 16:23:38 +00:00
|
|
|
if (prev_object->backing_object != NULL) {
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(prev_object);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (FALSE);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1995-12-11 04:58:34 +00:00
|
|
|
|
|
|
|
prev_size >>= PAGE_SHIFT;
|
|
|
|
next_size >>= PAGE_SHIFT;
|
2004-07-25 07:48:47 +00:00
|
|
|
next_pindex = OFF_TO_IDX(prev_offset) + prev_size;
|
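The conversions just above are simple shift arithmetic: byte sizes
become page counts, and OFF_TO_IDX() turns a byte offset into a page
index, so next_pindex is the first page index of the region being
appended. A small standalone model, assuming 4 KB pages:

#include <stdio.h>

#define PAGE_SHIFT	12				/* assumed 4 KB pages */
#define OFF_TO_IDX(off)	((unsigned long long)(off) >> PAGE_SHIFT)

int
main(void)
{
	unsigned long long prev_offset = 0x20000;	/* 128 KB into the object */
	unsigned long long prev_size = 0x10000;		/* 64 KB existing run */
	unsigned long long next_pindex;

	prev_size >>= PAGE_SHIFT;			/* bytes -> pages: 16 */
	next_pindex = OFF_TO_IDX(prev_offset) + prev_size;
	printf("next_pindex = %llu\n", next_pindex);	/* 32 + 16 = 48 */
	return (0);
}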
1996-12-31 16:23:38 +00:00
|
|
|
|
|
|
|
if ((prev_object->ref_count > 1) &&
|
1999-05-16 05:07:34 +00:00
|
|
|
(prev_object->size != next_pindex)) {
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(prev_object);
|
1996-12-31 16:23:38 +00:00
|
|
|
return (FALSE);
|
|
|
|
}
|
|
|
|
|
2009-06-23 20:45:22 +00:00
|
|
|
/*
|
|
|
|
* Account for the charge.
|
|
|
|
*/
|
2010-12-02 17:37:16 +00:00
|
|
|
if (prev_object->cred != NULL) {
|
2009-06-23 20:45:22 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If prev_object was charged, then this mapping,
|
2016-05-02 20:16:29 +00:00
|
|
|
* although not charged now, may become writable
|
2010-12-02 17:37:16 +00:00
|
|
|
* later. Non-NULL cred in the object would prevent
|
2009-06-23 20:45:22 +00:00
|
|
|
* swap reservation when write access is later
|
|
|
|
* enabled, so reserve swap now. A failed reservation
|
|
|
|
* causes allocation of a separate object for the map
|
|
|
|
* entry, and swap reservation for that entry is
|
|
|
|
* managed at the appropriate time.
|
|
|
|
*/
|
2010-12-02 17:37:16 +00:00
|
|
|
if (!reserved && !swap_reserve_by_cred(ptoa(next_size),
|
|
|
|
prev_object->cred)) {
|
2016-05-29 15:46:19 +00:00
|
|
|
VM_OBJECT_WUNLOCK(prev_object);
|
2009-06-23 20:45:22 +00:00
|
|
|
return (FALSE);
|
|
|
|
}
|
|
|
|
prev_object->charge += ptoa(next_size);
|
|
|
|
}
|
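The charge reserved above is what the RLIMIT_SWAP limit introduced by
this work polices. From userland the limit is driven through the
ordinary rlimit interface; a minimal sketch (FreeBSD-specific, the
64 MB figure is an arbitrary example, and enforcement also depends on
the vm.overcommit setting):

#include <sys/types.h>
#include <sys/resource.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;

	/* Read the current per-process swap reservation limit. */
	if (getrlimit(RLIMIT_SWAP, &rl) != 0) {
		perror("getrlimit");
		return (1);
	}
	printf("swap soft %ju hard %ju\n",
	    (uintmax_t)rl.rlim_cur, (uintmax_t)rl.rlim_max);

	/* Tighten the soft limit; reservations beyond it fail with ENOMEM. */
	rl.rlim_cur = 64UL * 1024 * 1024;
	if (setrlimit(RLIMIT_SWAP, &rl) != 0)
		perror("setrlimit");
	return (0);
}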
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Remove any pages that may still be in the object from a previous
|
|
|
|
* deallocation.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1999-05-16 05:07:34 +00:00
|
|
|
if (next_pindex < prev_object->size) {
|
2011-06-29 16:40:41 +00:00
|
|
|
vm_object_page_remove(prev_object, next_pindex, next_pindex +
|
|
|
|
next_size, 0);
|
1999-05-16 05:07:34 +00:00
|
|
|
if (prev_object->type == OBJT_SWAP)
|
|
|
|
swap_pager_freespace(prev_object,
|
|
|
|
next_pindex, next_size);
|
2009-06-23 20:45:22 +00:00
|
|
|
#if 0
|
2010-12-02 17:37:16 +00:00
|
|
|
if (prev_object->cred != NULL) {
|
2009-06-23 20:45:22 +00:00
|
|
|
KASSERT(prev_object->charge >=
|
|
|
|
ptoa(prev_object->size - next_pindex),
|
|
|
|
("object %p overcharged 1 %jx %jx", prev_object,
|
|
|
|
(uintmax_t)next_pindex, (uintmax_t)next_size));
|
|
|
|
prev_object->charge -= ptoa(prev_object->size -
|
|
|
|
next_pindex);
|
|
|
|
}
|
|
|
|
#endif
|
1999-05-16 05:07:34 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Extend the object if necessary.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1999-05-16 05:07:34 +00:00
|
|
|
if (next_pindex + next_size > prev_object->size)
|
|
|
|
prev_object->size = next_pindex + next_size;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_WUNLOCK(prev_object);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (TRUE);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
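Stripped of locking and swap accounting, the decision logic of
vm_object_coalesce() reduces to three checks: refuse when the object
shadows another, refuse when it is shared and the new run does not
start at its current end, otherwise grow the object in place. A toy
model of just that logic, with hypothetical names (model_obj,
model_coalesce) standing in for the real structures:

#include <stdbool.h>
#include <stdio.h>

struct model_obj {
	unsigned long	size;		/* object size in pages */
	int		ref_count;
	bool		has_backing;	/* models backing_object != NULL */
};

static bool
model_coalesce(struct model_obj *prev, unsigned long next_pindex,
    unsigned long next_size)
{
	/* Can't coalesce a shadow object. */
	if (prev->has_backing)
		return (false);
	/* A shared object may only be extended at its exact end. */
	if (prev->ref_count > 1 && prev->size != next_pindex)
		return (false);
	/* Extend the object if necessary. */
	if (next_pindex + next_size > prev->size)
		prev->size = next_pindex + next_size;
	return (true);
}

int
main(void)
{
	struct model_obj o = { .size = 48, .ref_count = 1 };

	printf("%d size %lu\n", model_coalesce(&o, 48, 16), o.size);
	return (0);	/* prints "1 size 64" */
}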
|
|
|
|
2001-10-26 16:27:54 +00:00
|
|
|
void
|
|
|
|
vm_object_set_writeable_dirty(vm_object_t object)
|
|
|
|
{
|
|
|
|
|
2013-03-09 02:32:23 +00:00
|
|
|
VM_OBJECT_ASSERT_WLOCKED(object);
|
2015-01-28 10:37:23 +00:00
|
|
|
if (object->type != OBJT_VNODE) {
|
|
|
|
if ((object->flags & OBJ_TMPFS_NODE) != 0) {
|
|
|
|
KASSERT(object->type == OBJT_SWAP, ("non-swap tmpfs"));
|
|
|
|
vm_object_set_flag(object, OBJ_TMPFS_DIRTY);
|
|
|
|
}
|
2010-12-29 12:53:53 +00:00
|
|
|
return;
|
2015-01-28 10:37:23 +00:00
|
|
|
}
|
2010-12-29 12:53:53 +00:00
|
|
|
object->generation++;
|
|
|
|
if ((object->flags & OBJ_MIGHTBEDIRTY) != 0)
|
2005-03-17 12:03:42 +00:00
|
|
|
return;
|
2006-07-21 06:40:29 +00:00
|
|
|
vm_object_set_flag(object, OBJ_MIGHTBEDIRTY);
|
2001-10-26 16:27:54 +00:00
|
|
|
}
|
|
|
|
|
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap. If, for example, the
application has performed an mprotect(..., PROT_NONE) on any part of the
wired region, then those pages will no longer be mapped by the pmap.
So, using the pmap to look up the wired pages in order to unwire them
doesn't always work, and when it doesn't work wired pages are leaked.
To avoid the leak, introduce and use a new function vm_object_unwire()
that locates the wired pages by traversing the object and its backing
objects.
At the same time, switch from using pmap_change_wiring() to the recently
introduced function pmap_unwire() for unwiring the region's mappings.
pmap_unwire() is faster, because it operates on a range of virtual addresses
rather than a single virtual page at a time. Moreover, by operating on
a range, it is superpage friendly. It doesn't waste time performing
unnecessary demotions.
Reported by: markj
Reviewed by: kib
Tested by: pho, jmg (arm)
Sponsored by: EMC / Isilon Storage Division
2014-07-26 18:10:18 +00:00
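vm_object_unwire() locates each wired page by descending the shadow
chain: when the top object has no page at the current index, the index
is offset by backing_object_offset and the search drops to the backing
object. A simplified standalone model of that descent, omitting the
locking and OBJ_FICTITIOUS handling (the toy page_present array is
purely illustrative):

#include <stdio.h>

#define NSLOTS	4

struct obj {
	struct obj	*backing_object;
	unsigned long	 backing_offset;	/* in pages, for brevity */
	int		 page_present[NSLOTS];	/* toy resident-page set */
};

/*
 * While the current object lacks a page at the index, translate the
 * index into the backing object and retry, as the loop in
 * vm_object_unwire() does.
 */
static struct obj *
shadow_chain_lookup(struct obj *object, unsigned long *pindex)
{
	while (object != NULL &&
	    (*pindex >= NSLOTS || !object->page_present[*pindex])) {
		*pindex += object->backing_offset;
		object = object->backing_object;
	}
	return (object);	/* the object holding the page, or NULL */
}

int
main(void)
{
	struct obj backing = { .page_present = { 0, 0, 1, 0 } };
	struct obj top = { .backing_object = &backing, .backing_offset = 1 };
	unsigned long pindex = 1;	/* absent in top; slot 2 in backing */

	printf("found in %s\n",
	    shadow_chain_lookup(&top, &pindex) == &backing ?
	    "backing" : "top");
	return (0);
}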
|
|
|
/*
|
|
|
|
* vm_object_unwire:
|
|
|
|
*
|
|
|
|
* For each page offset within the specified range of the given object,
|
|
|
|
* find the highest-level page in the shadow chain and unwire it. A page
|
|
|
|
* must exist at every page offset, and the highest-level page must be
|
|
|
|
* wired.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
vm_object_unwire(vm_object_t object, vm_ooffset_t offset, vm_size_t length,
|
|
|
|
uint8_t queue)
|
|
|
|
{
|
On munlock(), unwire correct page.
It is possible, for complex fork()/collapse situations, to have
sibling address spaces to partially share shadow chains. If one
sibling performs wiring, it can happen that a transient page, invalid
and busy, is installed into a shadow object which is visible to the other
sibling for the duration of vm_fault_hold(). When the backing object
contains the valid page, and the wiring is performed on read-only
entry, the transient page is eventually removed.
But the sibling which observed the transient page might perform the
unwire, executing vm_object_unwire(). There, the first page found in
the shadow chain is considered as the page that was wired for the
mapping. It is really the page below it which is wired. So we unwire
the wrong page, either triggering the asserts or breaking the page's
wire counter.
As the fix, wait for the busy state to finish if we find such page
during unwire, and restart the shadow chain walk after the sleep.
Reported and tested by: pho
Reviewed by: markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D14184
2018-02-05 12:49:20 +00:00
|
|
|
vm_object_t tobject, t1object;
|
2014-07-26 18:10:18 +00:00
|
|
|
vm_page_t m, tm;
|
|
|
|
vm_pindex_t end_pindex, pindex, tpindex;
|
|
|
|
int depth, locked_depth;
|
|
|
|
|
|
|
|
KASSERT((offset & PAGE_MASK) == 0,
|
|
|
|
("vm_object_unwire: offset is not page aligned"));
|
|
|
|
KASSERT((length & PAGE_MASK) == 0,
|
|
|
|
("vm_object_unwire: length is not a multiple of PAGE_SIZE"));
|
|
|
|
/* The wired count of a fictitious page never changes. */
|
|
|
|
if ((object->flags & OBJ_FICTITIOUS) != 0)
|
|
|
|
return;
|
|
|
|
pindex = OFF_TO_IDX(offset);
|
|
|
|
end_pindex = pindex + atop(length);
|
2018-02-05 12:49:20 +00:00
|
|
|
again:
|
2014-07-26 18:10:18 +00:00
|
|
|
locked_depth = 1;
|
|
|
|
VM_OBJECT_RLOCK(object);
|
|
|
|
m = vm_page_find_least(object, pindex);
|
|
|
|
while (pindex < end_pindex) {
|
|
|
|
if (m == NULL || pindex < m->pindex) {
|
|
|
|
/*
|
|
|
|
* The first object in the shadow chain doesn't
|
|
|
|
* contain a page at the current index. Therefore,
|
|
|
|
* the page must exist in a backing object.
|
|
|
|
*/
|
|
|
|
tobject = object;
|
|
|
|
tpindex = pindex;
|
|
|
|
depth = 0;
|
|
|
|
do {
|
|
|
|
tpindex +=
|
|
|
|
OFF_TO_IDX(tobject->backing_object_offset);
|
|
|
|
tobject = tobject->backing_object;
|
|
|
|
KASSERT(tobject != NULL,
|
|
|
|
("vm_object_unwire: missing page"));
|
|
|
|
if ((tobject->flags & OBJ_FICTITIOUS) != 0)
|
|
|
|
goto next_page;
|
|
|
|
depth++;
|
|
|
|
if (depth == locked_depth) {
|
|
|
|
locked_depth++;
|
|
|
|
VM_OBJECT_RLOCK(tobject);
|
|
|
|
}
|
|
|
|
} while ((tm = vm_page_lookup(tobject, tpindex)) ==
|
|
|
|
NULL);
|
|
|
|
} else {
|
|
|
|
tm = m;
|
|
|
|
m = TAILQ_NEXT(m, listq);
|
|
|
|
}
|
|
|
|
vm_page_lock(tm);
|
2018-02-05 12:49:20 +00:00
|
|
|
if (vm_page_xbusied(tm)) {
|
|
|
|
for (tobject = object; locked_depth >= 1;
|
|
|
|
locked_depth--) {
|
|
|
|
t1object = tobject->backing_object;
|
|
|
|
VM_OBJECT_RUNLOCK(tobject);
|
|
|
|
tobject = t1object;
|
|
|
|
}
|
|
|
|
vm_page_busy_sleep(tm, "unwbo", true);
|
|
|
|
goto again;
|
|
|
|
}
|
2014-07-26 18:10:18 +00:00
|
|
|
vm_page_unwire(tm, queue);
|
|
|
|
vm_page_unlock(tm);
|
|
|
|
next_page:
|
|
|
|
pindex++;
|
|
|
|
}
|
|
|
|
/* Release the accumulated object locks. */
|
2018-02-05 12:49:20 +00:00
|
|
|
for (tobject = object; locked_depth >= 1; locked_depth--) {
|
|
|
|
t1object = tobject->backing_object;
|
|
|
|
VM_OBJECT_RUNLOCK(tobject);
|
|
|
|
tobject = t1object;
|
2014-07-26 18:10:18 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-06-02 18:37:04 +00:00
|
|
|
struct vnode *
|
|
|
|
vm_object_vnode(vm_object_t object)
|
|
|
|
{
|
|
|
|
|
|
|
|
VM_OBJECT_ASSERT_LOCKED(object);
|
|
|
|
if (object->type == OBJT_VNODE)
|
|
|
|
return (object->handle);
|
|
|
|
if (object->type == OBJT_SWAP && (object->flags & OBJ_TMPFS) != 0)
|
|
|
|
return (object->un_pager.swp.swp_tmpfs);
|
|
|
|
return (NULL);
|
|
|
|
}
|
|
|
|
|
2015-05-27 18:11:05 +00:00
|
|
|
static int
|
|
|
|
sysctl_vm_object_list(SYSCTL_HANDLER_ARGS)
|
|
|
|
{
|
2017-07-22 13:33:06 +00:00
|
|
|
struct kinfo_vmobject *kvo;
|
2015-05-27 18:11:05 +00:00
|
|
|
char *fullpath, *freepath;
|
|
|
|
struct vnode *vp;
|
|
|
|
struct vattr va;
|
|
|
|
vm_object_t obj;
|
|
|
|
vm_page_t m;
|
|
|
|
int count, error;
|
|
|
|
|
|
|
|
if (req->oldptr == NULL) {
|
|
|
|
/*
|
|
|
|
* If an old buffer has not been provided, generate an
|
|
|
|
* estimate of the space needed for a subsequent call.
|
|
|
|
*/
|
|
|
|
mtx_lock(&vm_object_list_mtx);
|
|
|
|
count = 0;
|
|
|
|
TAILQ_FOREACH(obj, &vm_object_list, object_list) {
|
|
|
|
if (obj->type == OBJT_DEAD)
|
|
|
|
continue;
|
|
|
|
count++;
|
|
|
|
}
|
|
|
|
mtx_unlock(&vm_object_list_mtx);
|
|
|
|
return (SYSCTL_OUT(req, NULL, sizeof(struct kinfo_vmobject) *
|
|
|
|
count * 11 / 10));
|
|
|
|
}
|
|
|
|
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo = malloc(sizeof(*kvo), M_TEMP, M_WAITOK);
|
2015-05-27 18:11:05 +00:00
|
|
|
error = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* VM objects are type stable and are never removed from the
|
|
|
|
* list once added. This allows us to safely read obj->object_list
|
|
|
|
* after reacquiring the VM object lock.
|
|
|
|
*/
|
|
|
|
mtx_lock(&vm_object_list_mtx);
|
|
|
|
TAILQ_FOREACH(obj, &vm_object_list, object_list) {
|
|
|
|
if (obj->type == OBJT_DEAD)
|
|
|
|
continue;
|
|
|
|
VM_OBJECT_RLOCK(obj);
|
|
|
|
if (obj->type == OBJT_DEAD) {
|
|
|
|
VM_OBJECT_RUNLOCK(obj);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
mtx_unlock(&vm_object_list_mtx);
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_size = ptoa(obj->size);
|
|
|
|
kvo->kvo_resident = obj->resident_page_count;
|
|
|
|
kvo->kvo_ref_count = obj->ref_count;
|
|
|
|
kvo->kvo_shadow_count = obj->shadow_count;
|
|
|
|
kvo->kvo_memattr = obj->memattr;
|
|
|
|
kvo->kvo_active = 0;
|
|
|
|
kvo->kvo_inactive = 0;
|
2015-05-27 18:11:05 +00:00
|
|
|
TAILQ_FOREACH(m, &obj->memq, listq) {
|
|
|
|
/*
|
|
|
|
* A page may belong to the object but be
|
|
|
|
* dequeued and set to PQ_NONE while the
|
|
|
|
* object lock is not held. This makes the
|
|
|
|
* reads of m->queue below racy, and we do not
|
|
|
|
* count pages set to PQ_NONE. However, this
|
|
|
|
* sysctl is only meant to give an
|
|
|
|
* approximation of the system anyway.
|
|
|
|
*/
|
2018-05-04 17:17:30 +00:00
|
|
|
if (m->queue == PQ_ACTIVE)
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_active++;
|
2018-05-04 17:17:30 +00:00
|
|
|
else if (m->queue == PQ_INACTIVE)
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_inactive++;
|
2015-05-27 18:11:05 +00:00
|
|
|
}
|
|
|
|
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_vn_fileid = 0;
|
|
|
|
kvo->kvo_vn_fsid = 0;
|
|
|
|
kvo->kvo_vn_fsid_freebsd11 = 0;
|
2015-05-27 18:11:05 +00:00
|
|
|
freepath = NULL;
|
|
|
|
fullpath = "";
|
|
|
|
vp = NULL;
|
|
|
|
switch (obj->type) {
|
|
|
|
case OBJT_DEFAULT:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_DEFAULT;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
case OBJT_VNODE:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_VNODE;
|
2015-05-27 18:11:05 +00:00
|
|
|
vp = obj->handle;
|
|
|
|
vref(vp);
|
|
|
|
break;
|
|
|
|
case OBJT_SWAP:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_SWAP;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
case OBJT_DEVICE:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_DEVICE;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
case OBJT_PHYS:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_PHYS;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
case OBJT_DEAD:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_DEAD;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
case OBJT_SG:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_SG;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
case OBJT_MGTDEVICE:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_MGTDEVICE;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
default:
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_type = KVME_TYPE_UNKNOWN;
|
2015-05-27 18:11:05 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
VM_OBJECT_RUNLOCK(obj);
|
|
|
|
if (vp != NULL) {
|
|
|
|
vn_fullpath(curthread, vp, &fullpath, &freepath);
|
|
|
|
vn_lock(vp, LK_SHARED | LK_RETRY);
|
|
|
|
if (VOP_GETATTR(vp, &va, curthread->td_ucred) == 0) {
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_vn_fileid = va.va_fileid;
|
|
|
|
kvo->kvo_vn_fsid = va.va_fsid;
|
|
|
|
kvo->kvo_vn_fsid_freebsd11 = va.va_fsid;
|
Commit the 64-bit inode project.
Extend the ino_t, dev_t, nlink_t types to 64-bit ints. Modify
struct dirent layout to add d_off, increase the size of d_fileno
to 64-bits, increase the size of d_namlen to 16-bits, and change
the required alignment. Increase struct statfs f_mntfromname[] and
f_mntonname[] array length MNAMELEN to 1024.
ABI breakage is mitigated by providing compatibility using versioned
symbols, ingenious use of the existing padding in structures, and
by employing other tricks. Unfortunately, not everything can be
fixed, especially outside the base system. For instance, third-party
APIs which pass struct stat around are broken in backward and
forward incompatible ways.
The kinfo sysctl MIB ABI is changed in a backward-compatible way, but
there is no general mechanism to handle other sysctl MIBs which
return structures where the layout has changed. It was considered
that the breakage is either in the management interfaces, where we
usually allow ABI slip, or is not important.
Struct xvnode changed layout, no compat shims are provided.
For struct xtty, dev_t tty device member was reduced to uint32_t.
It was decided that keeping ABI compat in this case is more useful
than reporting 64-bit dev_t, for the sake of pstat.
Update note: strictly follow the instructions in UPDATING. Build
and install the new kernel with COMPAT_FREEBSD11 option enabled,
then reboot, and only then install new world.
Credits: The 64-bit inode project, also known as ino64, started life
many years ago as a project by Gleb Kurtsou (gleb). Kirk McKusick
(mckusick) then picked up and updated the patch, and acted as a
flag-waver. Feedback, suggestions, and discussions were carried
by Ed Maste (emaste), John Baldwin (jhb), Jilles Tjoelker (jilles),
and Rick Macklem (rmacklem). Kris Moore (kris) performed an initial
ports investigation followed by an exp-run by Antoine Brodin (antoine).
Essential and all-embracing testing was done by Peter Holm (pho).
The heavy lifting of coordinating all these efforts and bringing the
project to completion were done by Konstantin Belousov (kib).
Sponsored by: The FreeBSD Foundation (emaste, kib)
Differential revision: https://reviews.freebsd.org/D10439
2017-05-23 09:29:05 +00:00
|
|
|
/* truncate */
|
2015-05-27 18:11:05 +00:00
|
|
|
}
|
|
|
|
vput(vp);
|
|
|
|
}
|
|
|
|
|
2017-07-22 13:33:06 +00:00
|
|
|
strlcpy(kvo->kvo_path, fullpath, sizeof(kvo->kvo_path));
|
2015-05-27 18:11:05 +00:00
|
|
|
if (freepath != NULL)
|
|
|
|
free(freepath, M_TEMP);
|
|
|
|
|
|
|
|
/* Pack the record size down to the bytes actually used. */
|
2017-07-22 13:33:06 +00:00
|
|
|
kvo->kvo_structsize = offsetof(struct kinfo_vmobject, kvo_path)
|
|
|
|
+ strlen(kvo->kvo_path) + 1;
|
|
|
|
kvo->kvo_structsize = roundup(kvo->kvo_structsize,
|
2015-05-27 18:11:05 +00:00
|
|
|
sizeof(uint64_t));
|
2017-07-22 13:33:06 +00:00
|
|
|
error = SYSCTL_OUT(req, kvo, kvo->kvo_structsize);
|
2015-05-27 18:11:05 +00:00
|
|
|
mtx_lock(&vm_object_list_mtx);
|
|
|
|
if (error)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
mtx_unlock(&vm_object_list_mtx);
|
2017-07-22 13:33:06 +00:00
|
|
|
free(kvo, M_TEMP);
|
2015-05-27 18:11:05 +00:00
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
SYSCTL_PROC(_vm, OID_AUTO, objects, CTLTYPE_STRUCT | CTLFLAG_RW | CTLFLAG_SKIP |
|
|
|
|
CTLFLAG_MPSAFE, NULL, 0, sysctl_vm_object_list, "S,kinfo_vmobject",
|
|
|
|
"List of VM objects");
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
#include "opt_ddb.h"
|
1995-04-16 12:56:22 +00:00
|
|
|
#ifdef DDB
|
1996-09-14 11:54:59 +00:00
|
|
|
#include <sys/kernel.h>
|
|
|
|
|
1999-08-09 10:35:05 +00:00
|
|
|
#include <sys/cons.h>
|
1996-09-14 11:54:59 +00:00
|
|
|
|
|
|
|
#include <ddb/ddb.h>
|
|
|
|
|
1995-12-03 12:18:39 +00:00
|
|
|
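/*
 * Helper for the DDB consistency check below: returns 1 if the given
 * object is reachable from the map entry (or, when entry is NULL, from
 * any entry of the map), descending into submaps and following
 * backing-object chains.
 */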
static int
|
2001-07-04 20:15:18 +00:00
|
|
|
_vm_object_in_map(vm_map_t map, vm_object_t object, vm_map_entry_t entry)
|
1995-02-02 09:09:15 +00:00
|
|
|
{
|
|
|
|
vm_map_t tmpm;
|
|
|
|
vm_map_entry_t tmpe;
|
|
|
|
vm_object_t obj;
|
|
|
|
int entcount;
|
|
|
|
|
|
|
|
if (map == NULL)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (entry == NULL) {
|
|
|
|
tmpe = map->header.next;
|
|
|
|
entcount = map->nentries;
|
|
|
|
while (entcount-- && (tmpe != &map->header)) {
|
2002-03-10 21:52:48 +00:00
|
|
|
if (_vm_object_in_map(map, object, tmpe)) {
|
1995-02-02 09:09:15 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
tmpe = tmpe->next;
|
|
|
|
}
|
1999-02-07 21:48:23 +00:00
|
|
|
} else if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
|
|
|
tmpm = entry->object.sub_map;
|
1995-02-02 09:09:15 +00:00
|
|
|
tmpe = tmpm->header.next;
|
|
|
|
entcount = tmpm->nentries;
|
|
|
|
while (entcount-- && tmpe != &tmpm->header) {
|
2002-03-10 21:52:48 +00:00
|
|
|
if (_vm_object_in_map(tmpm, object, tmpe)) {
|
1995-02-02 09:09:15 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
tmpe = tmpe->next;
|
|
|
|
}
|
1999-01-28 00:57:57 +00:00
|
|
|
} else if ((obj = entry->object.vm_object) != NULL) {
|
2001-07-04 19:00:13 +00:00
|
|
|
for (; obj; obj = obj->backing_object)
|
2002-03-10 21:52:48 +00:00
|
|
|
if (obj == object) {
|
1995-02-02 09:09:15 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
1995-12-03 12:18:39 +00:00
|
|
|
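/*
 * Returns 1 if the object is mapped in the vmspace of any process or
 * in kernel_map.  Runs without the allproc lock, so it is only safe
 * to call from the debugger.
 */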
static int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_object_in_map(vm_object_t object)
|
1995-02-02 09:09:15 +00:00
|
|
|
{
|
|
|
|
struct proc *p;
|
2001-03-28 11:52:56 +00:00
|
|
|
|
2001-05-23 22:42:10 +00:00
|
|
|
/* sx_slock(&allproc_lock); */
|
2007-01-17 15:05:52 +00:00
|
|
|
FOREACH_PROC_IN_SYSTEM(p) {
|
2002-03-10 21:52:48 +00:00
|
|
|
if (p->p_vmspace == NULL /* || (p->p_flag & (P_SYSTEM|P_WEXIT)) */)
|
1995-02-02 09:09:15 +00:00
|
|
|
continue;
|
2002-03-10 21:52:48 +00:00
|
|
|
if (_vm_object_in_map(&p->p_vmspace->vm_map, object, NULL)) {
|
2001-05-23 22:42:10 +00:00
|
|
|
/* sx_sunlock(&allproc_lock); */
|
1995-02-02 09:09:15 +00:00
|
|
|
return 1;
|
2000-11-22 07:42:04 +00:00
|
|
|
}
|
1995-02-02 09:09:15 +00:00
|
|
|
}
|
2001-05-23 22:42:10 +00:00
|
|
|
/* sx_sunlock(&allproc_lock); */
|
2002-03-10 21:52:48 +00:00
|
|
|
if (_vm_object_in_map(kernel_map, object, NULL))
|
1995-02-02 09:09:15 +00:00
|
|
|
return 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
DB_SHOW_COMMAND(vmochk, vm_object_check)
|
1995-12-14 09:55:16 +00:00
|
|
|
{
|
1995-02-02 09:09:15 +00:00
|
|
|
vm_object_t object;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* make sure that internal objs are in a map somewhere
|
|
|
|
* and none have zero ref counts.
|
|
|
|
*/
|
2001-04-15 10:22:04 +00:00
|
|
|
TAILQ_FOREACH(object, &vm_object_list, object_list) {
|
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!
A much-needed overhaul of the VM system. Included in this first round of
changes:
1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
haspage, and sync operations are supported. The haspage interface now
provides information about clusterability. All pager routines now take
struct vm_object's instead of "pagers".
2) Improved data structures. In the previous paradigm, there is constant
confusion caused by pagers being both a data structure ("allocate a
pager") and a collection of routines. The idea of a pager structure has
essentially been eliminated. Objects now have types, and this type is
used to index the appropriate pager. In most cases, items in the pager
structure were duplicated in the object data structure and thus were
unnecessary. In the few cases that remained, an un_pager structure union
was created in the object to contain these items.
3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
be removed. For instance, vm_object_enter(), vm_object_lookup(),
vm_object_remove(), and the associated object hash list were some of the
things that were removed.
4) simple_lock's removed. Discussion with several people reveals that the
SMP locking primitives used in the VM system aren't likely the mechanism
that we'll be adopting. Even if it were, the locking that was in the code
was very inadequate and would have to be mostly re-done anyway. The
locking in a uni-processor kernel was a no-op but went a long way toward
making the code difficult to read and debug.
5) Places that attempted to kludge-up the fact that we don't have kernel
thread support have been fixed to reflect the reality that we are really
dealing with processes, not threads. The VM system didn't have complete
thread support, so the comments and mis-named routines were just wrong.
We now use tsleep and wakeup directly in the lock routines, for instance.
6) Where appropriate, the pagers have been improved, especially in the
pager_alloc routines. Most of the pager_allocs have been rewritten and
are now faster and easier to maintain.
7) The pagedaemon pageout clustering algorithm has been rewritten and
now tries harder to output an even number of pages before and after
the requested page. This is sort of the reverse of the ideal pagein
algorithm and should provide better overall performance.
8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
have been removed. Some other unnecessary casts have also been removed.
9) Some almost useless debugging code removed.
10) Terminology of shadow objects vs. backing objects straightened out.
The fact that the vm_object data structure essentially had this
backwards really confused things. The use of "shadow" and "backing
object" throughout the code is now internally consistent and correct
in the Mach terminology.
11) Several minor bug fixes, including one in the vm daemon that caused
0 RSS objects to not get purged as intended.
12) A "default pager" has now been created which cleans up the transition
of objects to the "swap" type. The previous checks throughout the code
for swp->pg_data != NULL were really ugly. This change also provides
the rudiments for future backing of "anonymous" memory by something
other than the swap pager (via the vnode pager, for example), and it
allows the decision about which of these pagers to use to be made
dynamically (although will need some additional decision code to do
this, of course).
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
object" code has been removed. MAP_COPY was undocumented and non-
standard. It was furthermore broken in several ways which caused its
behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
continue to work correctly, but via the slightly different semantics
of MAP_PRIVATE.
14) (dyson) Sharing maps have been removed. Their marginal usefulness in a
threads design can be worked around in other ways. Both #12 and #13
were done to simplify the code and improve readability and
maintainability (as were almost all of these changes).
TODO:
1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
this will reduce the vnode pager to a mere fraction of its current size.
2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
information provided by the new haspage pager interface. This will
substantially reduce the overhead by eliminating a large number of
VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
improved to provide both a "behind" and "ahead" indication of
contiguousness.
3) Implement the extended features of pager_haspage in swap_pager_haspage().
It currently just says 0 pages ahead/behind.
4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
via a much more general mechanism that could also be used for disk
striping of regular filesystems.
5) Do something to improve the architecture of vm_object_collapse(). The
fact that it makes calls into the swap pager and knows too much about
how the swap pager operates really bothers me. It also doesn't allow
for collapsing of non-swap pager objects ("unnamed" objects backed by
other pagers).
1995-07-13 08:48:48 +00:00
|
|
|
if (object->handle == NULL &&
|
|
|
|
(object->type == OBJT_DEFAULT || object->type == OBJT_SWAP)) {
|
1995-05-02 05:57:11 +00:00
|
|
|
if (object->ref_count == 0) {
|
1999-07-01 19:53:43 +00:00
|
|
|
db_printf("vmochk: internal obj has zero ref count: %ld\n",
|
|
|
|
(long)object->size);
|
1995-02-02 09:09:15 +00:00
|
|
|
}
|
1995-05-02 05:57:11 +00:00
|
|
|
if (!vm_object_in_map(object)) {
|
1998-07-11 11:30:46 +00:00
|
|
|
db_printf(
|
|
|
|
"vmochk: internal obj is not in a map: "
|
|
|
|
"ref: %d, size: %lu: 0x%lx, backing_object: %p\n",
|
|
|
|
object->ref_count, (u_long)object->size,
|
|
|
|
(u_long)object->size,
|
|
|
|
(void *)object->backing_object);
|
1995-02-02 09:09:15 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
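For reference, the consistency check above is invoked from the
in-kernel debugger; an illustrative session (the reported size is made
up) might look like:

	db> show vmochk
	vmochk: internal obj has zero ref count, size: 42

Each line of output corresponds to one of the db_printf() calls above.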
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_object_print: [ debug ]
|
|
|
|
*/
|
1996-09-14 11:54:59 +00:00
|
|
|
DB_SHOW_COMMAND(object, vm_object_print_static)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1996-09-14 11:54:59 +00:00
|
|
|
/* XXX convert args. */
|
|
|
|
vm_object_t object = (vm_object_t)addr;
|
|
|
|
boolean_t full = have_addr;
|
|
|
|
|
1999-02-08 05:15:54 +00:00
|
|
|
vm_page_t p;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
/* XXX count is an (unused) arg. Avoid shadowing it. */
|
|
|
|
#define count was_count
|
|
|
|
|
1999-02-08 05:15:54 +00:00
|
|
|
int count;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
if (object == NULL)
|
|
|
|
return;
|
|
|
|
|
1998-07-14 12:26:15 +00:00
|
|
|
db_iprintf(
|
2010-12-02 17:37:16 +00:00
|
|
|
"Object %p: type=%d, size=0x%jx, res=%d, ref=%d, flags=0x%x ruid %d charge %jx\n",
|
2002-11-07 23:03:04 +00:00
|
|
|
object, (int)object->type, (uintmax_t)object->size,
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either the map
entry or the vm object backing the entry, assuming the object is the
first one in the shadow chain and the entry does not require COW.
Charge is moved from the entry to the object on allocation of the
object, e.g. during mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during their lifetime, and decrements the
charge for the proper uid when the region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge the appropriate uid when the allocation is
performed by the kernel, e.g. md(4).
Several syscalls, among them fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
object->resident_page_count, object->ref_count, object->flags,
|
2010-12-02 17:37:16 +00:00
|
|
|
object->cred ? object->cred->cr_ruid : -1, (uintmax_t)object->charge);
|
2002-11-07 23:03:04 +00:00
|
|
|
db_iprintf(" sref=%d, backing_object(%d)=(%p)+0x%jx\n",
|
1999-01-21 08:29:12 +00:00
|
|
|
object->shadow_count,
|
1998-07-14 12:26:15 +00:00
|
|
|
object->backing_object ? object->backing_object->ref_count : 0,
|
2002-11-07 23:03:04 +00:00
|
|
|
object->backing_object, (uintmax_t)object->backing_object_offset);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
if (!full)
|
|
|
|
return;
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
count = 0;
|
2001-02-04 13:13:25 +00:00
|
|
|
TAILQ_FOREACH(p, &object->memq, listq) {
|
1994-05-24 10:09:53 +00:00
|
|
|
if (count == 0)
|
1996-09-14 11:54:59 +00:00
|
|
|
db_iprintf("memory:=");
|
1994-05-24 10:09:53 +00:00
|
|
|
else if (count == 6) {
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf("\n");
|
|
|
|
db_iprintf(" ...");
|
1994-05-24 10:09:53 +00:00
|
|
|
count = 0;
|
|
|
|
} else
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf(",");
|
1994-05-24 10:09:53 +00:00
|
|
|
count++;
|
|
|
|
|
2002-11-07 23:03:04 +00:00
|
|
|
db_printf("(off=0x%jx,page=0x%jx)",
|
|
|
|
(uintmax_t)p->pindex, (uintmax_t)VM_PAGE_TO_PHYS(p));
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
if (count != 0)
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf("\n");
|
|
|
|
db_indent -= 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1996-09-08 20:44:49 +00:00
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
/* XXX. */
|
|
|
|
#undef count
|
|
|
|
|
|
|
|
/* XXX need this non-static entry for calling from vm_map_print. */
|
1996-09-08 20:44:49 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_object_print(
|
|
|
|
/* db_expr_t */ long addr,
|
|
|
|
boolean_t have_addr,
|
|
|
|
/* db_expr_t */ long count,
|
|
|
|
char *modif)
|
1996-09-14 11:54:59 +00:00
|
|
|
{
|
|
|
|
vm_object_print_static(addr, have_addr, count, modif);
|
|
|
|
}
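For reference, the command registered above is invoked from DDB with
an object address; an illustrative session (address and values made up):

	db> show object 0xfffff80002a3b700
	Object 0xfffff80002a3b700: type=2, size=0x80, res=17, ref=3, flags=0x0 ruid -1 charge 0
	 sref=1, backing_object(0)=(0)+0x0

When an address is supplied, have_addr is true and the resident page
list is dumped as well; otherwise only the summary lines are printed.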
|
|
|
|
|
|
|
|
DB_SHOW_COMMAND(vmopag, vm_object_print_pages)
|
1996-09-08 20:44:49 +00:00
|
|
|
{
|
|
|
|
vm_object_t object;
|
2009-04-23 21:09:47 +00:00
|
|
|
vm_pindex_t fidx;
|
|
|
|
vm_paddr_t pa;
|
|
|
|
vm_page_t m, prev_m;
|
|
|
|
int rcount, nl, c;
|
2001-04-15 10:22:04 +00:00
|
|
|
|
2009-04-23 21:09:47 +00:00
|
|
|
nl = 0;
|
2001-04-15 10:22:04 +00:00
|
|
|
TAILQ_FOREACH(object, &vm_object_list, object_list) {
|
1998-07-11 11:30:46 +00:00
|
|
|
db_printf("new object: %p\n", (void *)object);
|
2002-03-10 21:52:48 +00:00
|
|
|
if (nl > 18) {
|
1996-09-08 20:44:49 +00:00
|
|
|
c = cngetc();
|
|
|
|
if (c != ' ')
|
|
|
|
return;
|
|
|
|
nl = 0;
|
|
|
|
}
|
|
|
|
nl++;
|
|
|
|
rcount = 0;
|
|
|
|
fidx = 0;
|
2009-04-23 21:09:47 +00:00
|
|
|
pa = -1;
|
|
|
|
TAILQ_FOREACH(m, &object->memq, listq) {
|
|
|
|
if (m->pindex > 128)
|
|
|
|
break;
|
|
|
|
if ((prev_m = TAILQ_PREV(m, pglist, listq)) != NULL &&
|
|
|
|
prev_m->pindex + 1 != m->pindex) {
|
1996-09-08 20:44:49 +00:00
|
|
|
if (rcount) {
|
1999-07-01 19:53:43 +00:00
|
|
|
db_printf(" index(%ld)run(%d)pa(0x%lx)\n",
|
|
|
|
(long)fidx, rcount, (long)pa);
|
2002-03-10 21:52:48 +00:00
|
|
|
if (nl > 18) {
|
1996-09-08 20:44:49 +00:00
|
|
|
c = cngetc();
|
|
|
|
if (c != ' ')
|
|
|
|
return;
|
|
|
|
nl = 0;
|
|
|
|
}
|
|
|
|
nl++;
|
|
|
|
rcount = 0;
|
|
|
|
}
|
2009-04-23 21:09:47 +00:00
|
|
|
}
|
1996-09-08 20:44:49 +00:00
|
|
|
if (rcount &&
|
|
|
|
(VM_PAGE_TO_PHYS(m) == pa + rcount * PAGE_SIZE)) {
|
|
|
|
++rcount;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (rcount) {
|
Enable the new physical memory allocator.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
Approved by: re
2007-06-16 04:57:06 +00:00
|
|
|
db_printf(" index(%ld)run(%d)pa(0x%lx)\n",
|
1999-07-01 19:53:43 +00:00
|
|
|
(long)fidx, rcount, (long)pa);
|
2002-03-10 21:52:48 +00:00
|
|
|
if (nl > 18) {
|
1996-09-08 20:44:49 +00:00
|
|
|
c = cngetc();
|
|
|
|
if (c != ' ')
|
|
|
|
return;
|
|
|
|
nl = 0;
|
|
|
|
}
|
|
|
|
nl++;
|
|
|
|
}
|
2009-04-23 21:09:47 +00:00
|
|
|
fidx = m->pindex;
|
1996-09-08 20:44:49 +00:00
|
|
|
pa = VM_PAGE_TO_PHYS(m);
|
|
|
|
rcount = 1;
|
|
|
|
}
|
|
|
|
if (rcount) {
|
1999-07-01 19:53:43 +00:00
|
|
|
db_printf(" index(%ld)run(%d)pa(0x%lx)\n",
|
|
|
|
(long)fidx, rcount, (long)pa);
|
2002-03-10 21:52:48 +00:00
|
|
|
if (nl > 18) {
|
1996-09-08 20:44:49 +00:00
|
|
|
c = cngetc();
|
|
|
|
if (c != ' ')
|
|
|
|
return;
|
|
|
|
nl = 0;
|
|
|
|
}
|
|
|
|
nl++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
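Similarly, `show vmopag' walks vm_object_list and prints runs of
physically contiguous resident pages per object; illustrative (made-up)
output:

	db> show vmopag
	new object: 0xfffff80002a3b700
	 index(0)run(16)pa(0x12340000)

The listing pauses for a keypress roughly every 18 lines (cngetc());
any key other than a space aborts it.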
|
1995-04-16 12:56:22 +00:00
|
|
|
#endif /* DDB */
|