These changes add support for the fully coherent merged VM/buffer cache,
much higher filesystem I/O performance, and much better paging performance.
They represent the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc.)
to support the new VM/buffer scheme.

vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM/buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.

vfs_cluster.c, vfs_subr.c:
Upgrade and performance enhancements in the vfs layer code to support the
merged VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove
code.

vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
could cause list corruption.

vm_pageout.c:
Fixed it; it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground up.

vm_fault.c, vm_page.c, pmap.c, vm_object.c:
Support for small-block filesystems with the merged VM/buffer cache scheme.

pmap.c, vm_map.c:
Dynamic kernel VM size; we no longer have to pre-allocate excessive numbers
of kernel PTs.

vm_glue.c:
Much simpler and more effective swapping code. No more gratuitous swapping.

proc.h:
Fixed the problem that the p_lock flag was not being cleared on a fork.

swap_pager.c, vnode_pager.c:
Removal of old vfs_bio cruft that supported the past pseudo-coherency; the
code no longer needs it.

machdep.c:
Changes to better support the parameter values for the merged VM/buffer cache
scheme.

machdep.c, kern_exec.c, vm_glue.c:
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.

ffs_inode.c, ufs_inode.c, ufs_readwrite.c:
Changes for the merged VM/buffer cache. Add "bypass" support for sneaking in
on busy buffers.

Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00

/*
 * Copyright (c) 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * The Mach Operating System project at Carnegie-Mellon University.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	from: @(#)vm_map.c	8.3 (Berkeley) 1/12/94
 *
 *
 * Copyright (c) 1987, 1990 Carnegie-Mellon University.
 * All rights reserved.
 *
 * Authors: Avadis Tevanian, Jr., Michael Wayne Young
 *
 * Permission to use, copy, modify and distribute this software and
 * its documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
 * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie the
 * rights to redistribute these changes.
 *
 * $FreeBSD$
 */

/*
 *	Virtual memory mapping module.
 */

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/vmmeter.h>
#include <sys/mman.h>
#include <sys/vnode.h>
#include <sys/resourcevar.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_page.h>
#include <vm/vm_object.h>
#include <vm/vm_pager.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/swap_pager.h>
#include <vm/uma.h>

/*
 *	Virtual memory maps provide for the mapping, protection,
 *	and sharing of virtual memory objects.  In addition,
 *	this module provides for an efficient virtual copy of
 *	memory from one map to another.
 *
 *	Synchronization is required prior to most operations.
 *
 *	Maps consist of an ordered doubly-linked list of simple
 *	entries; a single hint is used to speed up lookups.
 *
 *	Since portions of maps are specified by start/end addresses,
 *	which may not align with existing map entries, all
 *	routines merely "clip" entries to these start/end values.
 *	[That is, an entry is split into two, bordering at a
 *	start or end value.]  Note that these clippings may not
 *	always be necessary (as the two resulting entries are then
 *	not changed); however, the clipping is done for convenience.
 *
 *	As mentioned above, virtual copy operations are performed
 *	by copying VM object references from one map to
 *	another, and then marking both regions as copy-on-write.
 */

/*
 *	vm_map_startup:
 *
 *	Initialize the vm_map module.  Must be called before
 *	any other vm_map routines.
 *
 *	Map and entry structures are allocated from the general
 *	purpose memory pool with some exceptions:
 *
 *	- The kernel map and kmem submap are allocated statically.
 *	- Kernel map entries are allocated out of a static pool.
 *
 *	These restrictions are necessary since malloc() uses the
 *	maps and requires map entries.
 */

static uma_zone_t mapentzone;
static uma_zone_t kmapentzone;
static uma_zone_t mapzone;
static uma_zone_t vmspace_zone;
static struct vm_object kmapentobj;
static void vmspace_zinit(void *mem, int size);
static void vmspace_zfini(void *mem, int size);
static void vm_map_zinit(void *mem, int size);
static void vm_map_zfini(void *mem, int size);
static void _vm_map_init(vm_map_t map, vm_offset_t min, vm_offset_t max);

#ifdef INVARIANTS
static void vm_map_zdtor(void *mem, int size, void *arg);
static void vmspace_zdtor(void *mem, int size, void *arg);
#endif

void
vm_map_startup(void)
{
        mapzone = uma_zcreate("MAP", sizeof(struct vm_map), NULL,
#ifdef INVARIANTS
            vm_map_zdtor,
#else
            NULL,
#endif
            vm_map_zinit, vm_map_zfini, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);
        uma_prealloc(mapzone, MAX_KMAP);
        kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
            NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_MTXCLASS);
        uma_prealloc(kmapentzone, MAX_KMAPENT);
        mapentzone = uma_zcreate("MAP ENTRY", sizeof(struct vm_map_entry),
            NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
        uma_prealloc(mapentzone, MAX_MAPENT);
}

static void
vmspace_zfini(void *mem, int size)
{
        struct vmspace *vm;

        vm = (struct vmspace *)mem;

        vm_map_zfini(&vm->vm_map, sizeof(vm->vm_map));
}

static void
vmspace_zinit(void *mem, int size)
{
        struct vmspace *vm;

        vm = (struct vmspace *)mem;

        vm_map_zinit(&vm->vm_map, sizeof(vm->vm_map));
}

static void
vm_map_zfini(void *mem, int size)
{
        vm_map_t map;

        map = (vm_map_t)mem;

        lockdestroy(&map->lock);
}

static void
vm_map_zinit(void *mem, int size)
{
        vm_map_t map;

        map = (vm_map_t)mem;
        map->nentries = 0;
        map->size = 0;
        map->infork = 0;
        lockinit(&map->lock, PVM, "thrd_sleep", 0, LK_CANRECURSE | LK_NOPAUSE);
}

#ifdef INVARIANTS
static void
vmspace_zdtor(void *mem, int size, void *arg)
{
        struct vmspace *vm;

        vm = (struct vmspace *)mem;

        vm_map_zdtor(&vm->vm_map, sizeof(vm->vm_map), arg);
}

static void
vm_map_zdtor(void *mem, int size, void *arg)
{
        vm_map_t map;

        map = (vm_map_t)mem;
        KASSERT(map->nentries == 0,
            ("map %p nentries == %d on free.",
            map, map->nentries));
        KASSERT(map->size == 0,
            ("map %p size == %lu on free.",
            map, (unsigned long)map->size));
        KASSERT(map->infork == 0,
            ("map %p infork == %d on free.",
            map, map->infork));
}
#endif /* INVARIANTS */

/*
 * Allocate a vmspace structure, including a vm_map and pmap,
 * and initialize those structures.  The refcnt is set to 1.
 * The remaining fields must be initialized by the caller.
 */
struct vmspace *
vmspace_alloc(min, max)
        vm_offset_t min, max;
{
        struct vmspace *vm;

        GIANT_REQUIRED;
        vm = uma_zalloc(vmspace_zone, M_WAITOK);
        CTR1(KTR_VM, "vmspace_alloc: %p", vm);
        _vm_map_init(&vm->vm_map, min, max);
        pmap_pinit(vmspace_pmap(vm));
        vm->vm_map.pmap = vmspace_pmap(vm);     /* XXX */
        vm->vm_refcnt = 1;
        vm->vm_shm = NULL;
        vm->vm_freer = NULL;
        return (vm);
}

void
vm_init2(void)
{
        uma_zone_set_obj(kmapentzone, &kmapentobj, cnt.v_page_count / 4);
        vmspace_zone = uma_zcreate("VMSPACE", sizeof(struct vmspace), NULL,
#ifdef INVARIANTS
            vmspace_zdtor,
#else
            NULL,
#endif
            vmspace_zinit, vmspace_zfini, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);
        pmap_init2();
        vm_object_init2();
}

static __inline void
vmspace_dofree(struct vmspace *vm)
{
        CTR1(KTR_VM, "vmspace_free: %p", vm);
        /*
         * Lock the map, to wait out all other references to it.
         * Delete all of the mappings and pages they hold, then call
         * the pmap module to reclaim anything left.
         */
        vm_map_lock(&vm->vm_map);
        (void) vm_map_delete(&vm->vm_map, vm->vm_map.min_offset,
            vm->vm_map.max_offset);
        vm_map_unlock(&vm->vm_map);

        pmap_release(vmspace_pmap(vm));
        uma_zfree(vmspace_zone, vm);
}

void
vmspace_free(struct vmspace *vm)
{
        GIANT_REQUIRED;

        if (vm->vm_refcnt == 0)
                panic("vmspace_free: attempt to free already freed vmspace");

        if (--vm->vm_refcnt == 0)
                vmspace_dofree(vm);
}

void
vmspace_exitfree(struct proc *p)
{
        struct vmspace *vm;

        GIANT_REQUIRED;
        if (p == p->p_vmspace->vm_freer) {
                vm = p->p_vmspace;
                p->p_vmspace = NULL;
                vmspace_dofree(vm);
        }
}

/*
 * vmspace_swap_count() - count the approximate swap usage in pages for a
 *                        vmspace.
 *
 *      Swap usage is determined by taking the proportional swap used by
 *      VM objects backing the VM map.  To make up for fractional losses,
 *      if the VM object has any swap use at all the associated map entries
 *      count for at least 1 swap page.
 */
int
vmspace_swap_count(struct vmspace *vmspace)
{
        vm_map_t map = &vmspace->vm_map;
        vm_map_entry_t cur;
        int count = 0;

        vm_map_lock_read(map);
        for (cur = map->header.next; cur != &map->header; cur = cur->next) {
                vm_object_t object;

                if ((cur->eflags & MAP_ENTRY_IS_SUB_MAP) == 0 &&
                    (object = cur->object.vm_object) != NULL &&
                    object->type == OBJT_SWAP
                ) {
                        int n = (cur->end - cur->start) / PAGE_SIZE;

                        if (object->un_pager.swp.swp_bcount) {
                                count += object->un_pager.swp.swp_bcount *
                                    SWAP_META_PAGES * n / object->size + 1;
                        }
                }
        }
        vm_map_unlock_read(map);
        return (count);
}
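
/*
 * Illustrative example of the estimate above (editorial note, not part of
 * the original source; the numbers are hypothetical and SWAP_META_PAGES is
 * assumed to be 16 pages per swblock).  For a map entry spanning n = 32
 * pages of a 64-page OBJT_SWAP object that has swp_bcount = 2 swblocks
 * allocated, the entry is charged
 *
 *      2 * 16 * 32 / 64 + 1 = 17 swap pages,
 *
 * i.e. the object's worst-case swap footprint (swp_bcount * SWAP_META_PAGES)
 * scaled by the fraction of the object this entry maps, plus one page so
 * that any object with swap use counts for at least one.
 */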

void
_vm_map_lock(vm_map_t map, const char *file, int line)
{
        int error;

        error = lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
        KASSERT(error == 0, ("%s: failed to get lock", __func__));
        map->timestamp++;
}

void
_vm_map_unlock(vm_map_t map, const char *file, int line)
{

        lockmgr(&map->lock, LK_RELEASE, NULL, curthread);
}

void
_vm_map_lock_read(vm_map_t map, const char *file, int line)
{
        int error;

        error = lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
        KASSERT(error == 0, ("%s: failed to get lock", __func__));
}

void
_vm_map_unlock_read(vm_map_t map, const char *file, int line)
{

        lockmgr(&map->lock, LK_RELEASE, NULL, curthread);
}

int
_vm_map_trylock(vm_map_t map, const char *file, int line)
{
        int error;

        error = lockmgr(&map->lock, LK_EXCLUSIVE | LK_NOWAIT, NULL, curthread);
        return (error == 0);
}

int
_vm_map_lock_upgrade(vm_map_t map, const char *file, int line)
{

        KASSERT(lockstatus(&map->lock, curthread) == LK_EXCLUSIVE,
            ("%s: lock not held", __func__));
        map->timestamp++;
        return (0);
}

void
_vm_map_lock_downgrade(vm_map_t map, const char *file, int line)
{

        KASSERT(lockstatus(&map->lock, curthread) == LK_EXCLUSIVE,
            ("%s: lock not held", __func__));
}

void
_vm_map_set_recursive(vm_map_t map, const char *file, int line)
{
}

void
_vm_map_clear_recursive(vm_map_t map, const char *file, int line)
{
}

long
vmspace_resident_count(struct vmspace *vmspace)
{
        return pmap_resident_count(vmspace_pmap(vmspace));
}

/*
 *	vm_map_create:
 *
 *	Creates and returns a new empty VM map with
 *	the given physical map structure, and having
 *	the given lower and upper address bounds.
 */
vm_map_t
vm_map_create(pmap_t pmap, vm_offset_t min, vm_offset_t max)
{
        vm_map_t result;

        result = uma_zalloc(mapzone, M_WAITOK);
        CTR1(KTR_VM, "vm_map_create: %p", result);
        _vm_map_init(result, min, max);
        result->pmap = pmap;
        return (result);
}

/*
 * Initialize an existing vm_map structure
 * such as that in the vmspace structure.
 * The pmap is set elsewhere.
 */
static void
_vm_map_init(vm_map_t map, vm_offset_t min, vm_offset_t max)
{

        map->header.next = map->header.prev = &map->header;
        map->system_map = 0;
        map->min_offset = min;
        map->max_offset = max;
        map->first_free = &map->header;
        map->root = NULL;
        map->timestamp = 0;
}

void
vm_map_init(vm_map_t map, vm_offset_t min, vm_offset_t max)
{
        _vm_map_init(map, min, max);
        lockinit(&map->lock, PVM, "thrd_sleep", 0, LK_CANRECURSE | LK_NOPAUSE);
}

/*
 *	vm_map_entry_dispose:	[ internal use only ]
 *
 *	Inverse of vm_map_entry_create.
 */
static void
vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry)
{
        uma_zfree((map->system_map || !mapentzone)
            ? kmapentzone : mapentzone, entry);
}

/*
 *	vm_map_entry_create:	[ internal use only ]
 *
 *	Allocates a VM map entry for insertion.
 *	No entry fields are filled in.
 */
static vm_map_entry_t
vm_map_entry_create(vm_map_t map)
{
        vm_map_entry_t new_entry;

        new_entry = uma_zalloc((map->system_map || !mapentzone) ?
            kmapentzone : mapentzone, M_WAITOK);
        if (new_entry == NULL)
                panic("vm_map_entry_create: kernel resources exhausted");
        return (new_entry);
}

/*
 *	vm_map_entry_set_behavior:
 *
 *	Set the expected access behavior, either normal, random, or
 *	sequential.
 */
static __inline void
vm_map_entry_set_behavior(vm_map_entry_t entry, u_char behavior)
{
        entry->eflags = (entry->eflags & ~MAP_ENTRY_BEHAV_MASK) |
            (behavior & MAP_ENTRY_BEHAV_MASK);
}

/*
 *	vm_map_entry_splay:
 *
 *	Implements Sleator and Tarjan's top-down splay algorithm.  Returns
 *	the vm_map_entry containing the given address.  If, however, that
 *	address is not found in the vm_map, returns a vm_map_entry that is
 *	adjacent to the address, coming before or after it.
 */
static vm_map_entry_t
vm_map_entry_splay(vm_offset_t address, vm_map_entry_t root)
{
        struct vm_map_entry dummy;
        vm_map_entry_t lefttreemax, righttreemin, y;

        if (root == NULL)
                return (root);
        lefttreemax = righttreemin = &dummy;
        for (;; root = y) {
                if (address < root->start) {
                        if ((y = root->left) == NULL)
                                break;
                        if (address < y->start) {
                                /* Rotate right. */
                                root->left = y->right;
                                y->right = root;
                                root = y;
                                if ((y = root->left) == NULL)
                                        break;
                        }
                        /* Link into the new root's right tree. */
                        righttreemin->left = root;
                        righttreemin = root;
                } else if (address >= root->end) {
                        if ((y = root->right) == NULL)
                                break;
                        if (address >= y->end) {
                                /* Rotate left. */
                                root->right = y->left;
                                y->left = root;
                                root = y;
                                if ((y = root->right) == NULL)
                                        break;
                        }
                        /* Link into the new root's left tree. */
                        lefttreemax->right = root;
                        lefttreemax = root;
                } else
                        break;
        }
        /* Assemble the new root. */
        lefttreemax->right = root->left;
        righttreemin->left = root->right;
        root->left = dummy.right;
        root->right = dummy.left;
        return (root);
}
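
/*
 * Small worked example (editorial note, not in the original source; the
 * addresses are hypothetical): suppose the tree currently holds entries
 * A = [0x1000, 0x2000), B = [0x3000, 0x4000) and C = [0x5000, 0x6000),
 * with C at the root.  A call such as
 *
 *      root = vm_map_entry_splay(0x3800, root);
 *
 * returns B, the entry containing 0x3800, and leaves B at the root with the
 * remaining entries hung off its left and right subtrees.  Splaying an
 * address that falls in a gap (say 0x2800) brings a neighboring entry (A or
 * B) to the root instead.  Because the most recently used entry ends up at
 * the root, repeated lookups near the same address stay cheap.
 */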

/*
 *	vm_map_entry_{un,}link:
 *
 *	Insert/remove entries from maps.
 */
static void
vm_map_entry_link(vm_map_t map,
                  vm_map_entry_t after_where,
                  vm_map_entry_t entry)
{

        CTR4(KTR_VM,
            "vm_map_entry_link: map %p, nentries %d, entry %p, after %p", map,
            map->nentries, entry, after_where);
        map->nentries++;
        entry->prev = after_where;
        entry->next = after_where->next;
        entry->next->prev = entry;
        after_where->next = entry;

        if (after_where != &map->header) {
                if (after_where != map->root)
                        vm_map_entry_splay(after_where->start, map->root);
                entry->right = after_where->right;
                entry->left = after_where;
                after_where->right = NULL;
        } else {
                entry->right = map->root;
                entry->left = NULL;
        }
        map->root = entry;
}

static void
vm_map_entry_unlink(vm_map_t map,
                    vm_map_entry_t entry)
{
        vm_map_entry_t next, prev, root;

        if (entry != map->root)
                vm_map_entry_splay(entry->start, map->root);
        if (entry->left == NULL)
                root = entry->right;
        else {
                root = vm_map_entry_splay(entry->start, entry->left);
                root->right = entry->right;
        }
        map->root = root;

        prev = entry->prev;
        next = entry->next;
        next->prev = prev;
        prev->next = next;
        map->nentries--;
        CTR3(KTR_VM, "vm_map_entry_unlink: map %p, nentries %d, entry %p", map,
            map->nentries, entry);
}

/*
 *	vm_map_lookup_entry:	[ internal use only ]
 *
 *	Finds the map entry containing (or
 *	immediately preceding) the specified address
 *	in the given map; the entry is returned
 *	in the "entry" parameter.  The boolean
 *	result indicates whether the address is
 *	actually contained in the map.
 */
boolean_t
vm_map_lookup_entry(
        vm_map_t map,
        vm_offset_t address,
        vm_map_entry_t *entry)  /* OUT */
{
        vm_map_entry_t cur;

        cur = vm_map_entry_splay(address, map->root);
        if (cur == NULL)
                *entry = &map->header;
        else {
                map->root = cur;

                if (address >= cur->start) {
                        *entry = cur;
                        if (cur->end > address)
                                return (TRUE);
                } else
                        *entry = cur->prev;
        }
        return (FALSE);
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_map_insert:
|
|
|
|
*
|
|
|
|
* Inserts the given whole VM object into the target
|
|
|
|
* map at the specified address range. The object's
|
|
|
|
* size should match that of the address range.
|
|
|
|
*
|
|
|
|
* Requires that the map be locked, and leaves it so.
|
1999-02-12 09:51:43 +00:00
|
|
|
*
|
|
|
|
* If object is non-NULL, ref count must be bumped by caller
|
|
|
|
* prior to making call to account for the new entry.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
int
|
1997-08-25 22:15:31 +00:00
|
|
|
vm_map_insert(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
|
|
|
|
vm_offset_t start, vm_offset_t end, vm_prot_t prot, vm_prot_t max,
|
|
|
|
int cow)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t new_entry;
|
|
|
|
vm_map_entry_t prev_entry;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t temp_entry;
|
2000-02-28 04:10:35 +00:00
|
|
|
vm_eflags_t protoeflags;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Check that the start and end points are not bogus.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if ((start < map->min_offset) || (end > map->max_offset) ||
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
(start >= end))
|
|
|
|
return (KERN_INVALID_ADDRESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Find the entry prior to the proposed starting address; if it's part
|
|
|
|
* of an existing entry, this range is bogus.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if (vm_map_lookup_entry(map, start, &temp_entry))
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_NO_SPACE);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
prev_entry = temp_entry;
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Assert that the next entry doesn't overlap the end point.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if ((prev_entry->next != &map->header) &&
|
1995-01-09 16:06:02 +00:00
|
|
|
(prev_entry->next->start < end))
|
|
|
|
return (KERN_NO_SPACE);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1997-01-16 04:16:22 +00:00
|
|
|
protoeflags = 0;
|
|
|
|
|
|
|
|
if (cow & MAP_COPY_ON_WRITE)
|
1999-05-14 23:09:34 +00:00
|
|
|
protoeflags |= MAP_ENTRY_COW|MAP_ENTRY_NEEDS_COPY;
|
1997-01-16 04:16:22 +00:00
|
|
|
|
1999-05-18 05:38:48 +00:00
|
|
|
if (cow & MAP_NOFAULT) {
|
1997-01-16 04:16:22 +00:00
|
|
|
protoeflags |= MAP_ENTRY_NOFAULT;
|
|
|
|
|
1999-05-18 05:38:48 +00:00
|
|
|
KASSERT(object == NULL,
|
|
|
|
("vm_map_insert: paradoxical MAP_NOFAULT request"));
|
|
|
|
}
|
1999-12-12 03:19:33 +00:00
|
|
|
if (cow & MAP_DISABLE_SYNCER)
|
|
|
|
protoeflags |= MAP_ENTRY_NOSYNC;
|
2000-02-28 04:10:35 +00:00
|
|
|
if (cow & MAP_DISABLE_COREDUMP)
|
|
|
|
protoeflags |= MAP_ENTRY_NOCOREDUMP;
|
1999-12-12 03:19:33 +00:00
|
|
|
|
1999-02-12 09:51:43 +00:00
|
|
|
if (object) {
|
|
|
|
/*
|
|
|
|
* When object is non-NULL, it could be shared with another
|
|
|
|
* process. We have to set or clear OBJ_ONEMAPPING
|
|
|
|
* appropriately.
|
|
|
|
*/
|
|
|
|
if ((object->ref_count > 1) || (object->shadow_count != 0)) {
|
|
|
|
vm_object_clear_flag(object, OBJ_ONEMAPPING);
|
|
|
|
}
|
1999-05-18 05:38:48 +00:00
|
|
|
}
|
|
|
|
else if ((prev_entry != &map->header) &&
|
|
|
|
(prev_entry->eflags == protoeflags) &&
|
|
|
|
(prev_entry->end == start) &&
|
|
|
|
(prev_entry->wired_count == 0) &&
|
|
|
|
((prev_entry->object.vm_object == NULL) ||
|
|
|
|
vm_object_coalesce(prev_entry->object.vm_object,
|
|
|
|
OFF_TO_IDX(prev_entry->offset),
|
|
|
|
(vm_size_t)(prev_entry->end - prev_entry->start),
|
|
|
|
(vm_size_t)(end - prev_entry->end)))) {
|
|
|
|
/*
|
|
|
|
* We were able to extend the object. Determine if we
|
|
|
|
* can extend the previous map entry to include the
|
|
|
|
* new range as well.
|
|
|
|
*/
|
|
|
|
if ((prev_entry->inheritance == VM_INHERIT_DEFAULT) &&
|
|
|
|
(prev_entry->protection == prot) &&
|
|
|
|
(prev_entry->max_protection == max)) {
|
|
|
|
map->size += (end - prev_entry->end);
|
|
|
|
prev_entry->end = end;
|
2001-02-04 06:19:28 +00:00
|
|
|
vm_map_simplify_entry(map, prev_entry);
|
1999-05-18 05:38:48 +00:00
|
|
|
return (KERN_SUCCESS);
|
1996-05-18 03:38:05 +00:00
|
|
|
}
|
1999-05-18 05:38:48 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If we can extend the object but cannot extend the
|
|
|
|
* map entry, we have to create a new map entry. We
|
|
|
|
* must bump the ref count on the extended object to
|
2001-02-04 06:19:28 +00:00
|
|
|
* account for it. object may be NULL.
|
1999-05-18 05:38:48 +00:00
|
|
|
*/
|
|
|
|
object = prev_entry->object.vm_object;
|
|
|
|
offset = prev_entry->offset +
|
|
|
|
(prev_entry->end - prev_entry->start);
|
|
|
|
vm_object_reference(object);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1996-12-31 16:23:38 +00:00
|
|
|
|
1999-02-12 09:51:43 +00:00
|
|
|
/*
|
|
|
|
* NOTE: if conditionals fail, object can be NULL here. This occurs
|
|
|
|
* in things like the buffer map where we manage kva but do not manage
|
|
|
|
* backing objects.
|
|
|
|
*/
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Create a new entry
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
new_entry = vm_map_entry_create(map);
|
|
|
|
new_entry->start = start;
|
|
|
|
new_entry->end = end;
|
|
|
|
|
1997-01-16 04:16:22 +00:00
|
|
|
new_entry->eflags = protoeflags;
|
1994-05-24 10:09:53 +00:00
|
|
|
new_entry->object.vm_object = object;
|
|
|
|
new_entry->offset = offset;
|
1999-01-06 23:05:42 +00:00
|
|
|
new_entry->avail_ssize = 0;
|
|
|
|
|
1999-03-02 05:43:18 +00:00
|
|
|
new_entry->inheritance = VM_INHERIT_DEFAULT;
|
|
|
|
new_entry->protection = prot;
|
|
|
|
new_entry->max_protection = max;
|
|
|
|
new_entry->wired_count = 0;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Insert the new entry into the list
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
vm_map_entry_link(map, prev_entry, new_entry);
|
|
|
|
map->size += new_entry->end - new_entry->start;
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Update the free space hint
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1996-07-30 03:08:57 +00:00
|
|
|
if ((map->first_free == prev_entry) &&
|
1999-12-12 03:19:33 +00:00
|
|
|
(prev_entry->end >= new_entry->start)) {
|
1996-07-30 03:08:57 +00:00
|
|
|
map->first_free = new_entry;
|
1999-12-12 03:19:33 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-03-14 06:09:42 +00:00
|
|
|
#if 0
|
|
|
|
/*
|
|
|
|
* Temporarily removed to avoid MAP_STACK panic, due to
|
|
|
|
* MAP_STACK being a huge hack. Will be added back in
|
|
|
|
* when MAP_STACK (and the user stack mapping) is fixed.
|
|
|
|
*/
|
2001-02-04 06:19:28 +00:00
|
|
|
/*
|
|
|
|
* It may be possible to simplify the entry
|
|
|
|
*/
|
|
|
|
vm_map_simplify_entry(map, new_entry);
|
2001-03-14 06:09:42 +00:00
|
|
|
#endif
|
2001-02-04 06:19:28 +00:00
|
|
|
|
1999-12-12 03:19:33 +00:00
|
|
|
if (cow & (MAP_PREFAULT|MAP_PREFAULT_PARTIAL)) {
|
1999-05-17 00:53:56 +00:00
|
|
|
pmap_object_init_pt(map->pmap, start,
|
|
|
|
object, OFF_TO_IDX(offset), end - start,
|
|
|
|
cow & MAP_PREFAULT_PARTIAL);
|
1999-12-12 03:19:33 +00:00
|
|
|
}
|
1999-05-17 00:53:56 +00:00
|
|
|
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
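/*
 * Illustrative sketch: a minimal, hypothetical caller of vm_map_insert().
 * The map must be locked by the caller and, as the vm_map_find() comment
 * below notes, a non-NULL object must have its reference count bumped
 * before the call.  The names "demo_insert", "demo_map" and "demo_object"
 * are assumptions made purely for illustration.
 */
#if 0
static int
demo_insert(vm_map_t demo_map, vm_object_t demo_object,
    vm_offset_t start, vm_size_t size)
{
	int rv;

	vm_map_lock(demo_map);
	if (demo_object != NULL)
		vm_object_reference(demo_object);
	rv = vm_map_insert(demo_map, demo_object, 0, start, start + size,
	    VM_PROT_ALL, VM_PROT_ALL, 0);
	vm_map_unlock(demo_map);
	return (rv);
}
#endif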
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Find sufficient space for `length' bytes in the given map, starting at
|
|
|
|
* `start'. The map must be locked. Returns 0 on success, 1 on no space.
|
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_findspace(
|
|
|
|
vm_map_t map,
|
|
|
|
vm_offset_t start,
|
|
|
|
vm_size_t length,
|
|
|
|
vm_offset_t *addr)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry, next;
|
|
|
|
vm_offset_t end;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
1994-05-24 10:09:53 +00:00
|
|
|
if (start < map->min_offset)
|
|
|
|
start = map->min_offset;
|
|
|
|
if (start > map->max_offset)
|
|
|
|
return (1);
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Look for the first possible address; if there's already something
|
|
|
|
* at this address, we have to start after it.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if (start == map->min_offset) {
|
1996-07-30 03:08:57 +00:00
|
|
|
if ((entry = map->first_free) != &map->header)
|
1994-05-24 10:09:53 +00:00
|
|
|
start = entry->end;
|
|
|
|
} else {
|
|
|
|
vm_map_entry_t tmp;
|
1995-01-09 16:06:02 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
if (vm_map_lookup_entry(map, start, &tmp))
|
|
|
|
start = tmp->end;
|
|
|
|
entry = tmp;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Look through the rest of the map, trying to fit a new region in the
|
|
|
|
* gap between existing regions, or after the very last region.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
for (;; start = (entry = next)->end) {
|
|
|
|
/*
|
|
|
|
* Find the end of the proposed new region. Be sure we didn't
|
|
|
|
* go beyond the end of the map, or wrap around the address;
|
|
|
|
* if so, we lose. Otherwise, if this is the last entry, or
|
|
|
|
* if the proposed new region fits before the next entry, we
|
|
|
|
* win.
|
|
|
|
*/
|
|
|
|
end = start + length;
|
|
|
|
if (end > map->max_offset || end < start)
|
|
|
|
return (1);
|
|
|
|
next = entry->next;
|
|
|
|
if (next == &map->header || next->start >= end)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
*addr = start;
|
1997-09-21 04:24:27 +00:00
|
|
|
if (map == kernel_map) {
|
|
|
|
vm_offset_t ksize;
|
|
|
|
if ((ksize = round_page(start + length)) > kernel_vm_end) {
|
|
|
|
pmap_growkernel(ksize);
|
|
|
|
}
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
return (0);
|
|
|
|
}
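/*
 * Illustrative sketch: direct use of vm_map_findspace() by a caller that
 * handles its own locking; this is essentially the sequence vm_map_find()
 * below performs.  "demo_findspace" and "demo_map" are hypothetical names.
 */
#if 0
static int
demo_findspace(vm_map_t demo_map, vm_size_t size, vm_offset_t *addrp)
{
	int rv;

	vm_map_lock(demo_map);
	/* Returns 0 and fills in *addrp on success, 1 if no space exists. */
	rv = vm_map_findspace(demo_map, vm_map_min(demo_map), size, addrp);
	vm_map_unlock(demo_map);
	return (rv);
}
#endif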
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_find finds an unallocated region in the target address
|
|
|
|
* map with the given length. The search is defined to be
|
|
|
|
* first-fit from the specified address; the region found is
|
|
|
|
* returned in the same parameter.
|
|
|
|
*
|
1999-02-12 09:51:43 +00:00
|
|
|
* If object is non-NULL, ref count must be bumped by caller
|
|
|
|
* prior to making call to account for the new entry.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
int
|
1997-08-25 22:15:31 +00:00
|
|
|
vm_map_find(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
|
|
|
|
vm_offset_t *addr, /* IN/OUT */
|
|
|
|
vm_size_t length, boolean_t find_space, vm_prot_t prot,
|
|
|
|
vm_prot_t max, int cow)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_offset_t start;
|
1995-01-09 16:06:02 +00:00
|
|
|
int result, s = 0;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
start = *addr;
|
1994-12-15 22:47:11 +00:00
|
|
|
|
2001-06-22 06:35:32 +00:00
|
|
|
if (map == kmem_map)
|
1996-05-18 03:38:05 +00:00
|
|
|
s = splvm();
|
1994-12-15 22:47:11 +00:00
|
|
|
|
1995-11-12 08:58:58 +00:00
|
|
|
vm_map_lock(map);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (find_space) {
|
|
|
|
if (vm_map_findspace(map, start, length, addr)) {
|
|
|
|
vm_map_unlock(map);
|
2001-06-22 06:35:32 +00:00
|
|
|
if (map == kmem_map)
|
1994-12-15 22:47:11 +00:00
|
|
|
splx(s);
|
1994-05-24 10:09:53 +00:00
|
|
|
return (KERN_NO_SPACE);
|
|
|
|
}
|
|
|
|
start = *addr;
|
|
|
|
}
|
1996-01-19 04:00:31 +00:00
|
|
|
result = vm_map_insert(map, object, offset,
|
|
|
|
start, start + length, prot, max, cow);
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_unlock(map);
|
1994-12-15 22:47:11 +00:00
|
|
|
|
2001-06-22 06:35:32 +00:00
|
|
|
if (map == kmem_map)
|
1994-12-15 22:47:11 +00:00
|
|
|
splx(s);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
return (result);
|
|
|
|
}
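/*
 * Illustrative sketch: a hypothetical helper that carves an anonymous,
 * page-aligned region out of a map, letting vm_map_find() choose the
 * address (find_space == TRUE).  "demo_carve" and "demo_map" are made-up
 * names for illustration only.
 */
#if 0
static int
demo_carve(vm_map_t demo_map, vm_size_t size, vm_offset_t *addrp)
{
	*addrp = vm_map_min(demo_map);
	return (vm_map_find(demo_map, NULL, 0, addrp, round_page(size),
	    TRUE, VM_PROT_ALL, VM_PROT_ALL, 0));
}
#endif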
|
|
|
|
|
|
|
|
/*
|
1996-12-28 23:07:49 +00:00
|
|
|
* vm_map_simplify_entry:
|
1996-07-30 03:08:57 +00:00
|
|
|
*
|
2001-02-04 06:19:28 +00:00
|
|
|
* Simplify the given map entry by merging with either neighbor. This
|
|
|
|
* routine also has the ability to merge with both neighbors.
|
|
|
|
*
|
|
|
|
* The map must be locked.
|
|
|
|
*
|
|
|
|
* This routine guarantees that the passed entry remains valid (though
|
|
|
|
* possibly extended). When merging, this routine may delete one or
|
|
|
|
* both neighbors.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1996-12-28 23:07:49 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_simplify_entry(vm_map_t map, vm_map_entry_t entry)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1996-03-13 01:18:14 +00:00
|
|
|
vm_map_entry_t next, prev;
|
1996-12-28 23:07:49 +00:00
|
|
|
vm_size_t prevsize, esize;
|
1996-07-30 03:08:57 +00:00
|
|
|
|
1999-02-07 21:48:23 +00:00
|
|
|
if (entry->eflags & MAP_ENTRY_IS_SUB_MAP)
|
1994-05-24 10:09:53 +00:00
|
|
|
return;
|
|
|
|
|
1996-03-13 01:18:14 +00:00
|
|
|
prev = entry->prev;
|
|
|
|
if (prev != &map->header) {
|
1996-07-30 03:08:57 +00:00
|
|
|
prevsize = prev->end - prev->start;
|
|
|
|
if ( (prev->end == entry->start) &&
|
|
|
|
(prev->object.vm_object == entry->object.vm_object) &&
|
|
|
|
(!prev->object.vm_object ||
|
|
|
|
(prev->offset + prevsize == entry->offset)) &&
|
1997-01-16 04:16:22 +00:00
|
|
|
(prev->eflags == entry->eflags) &&
|
1996-07-30 03:08:57 +00:00
|
|
|
(prev->protection == entry->protection) &&
|
|
|
|
(prev->max_protection == entry->max_protection) &&
|
|
|
|
(prev->inheritance == entry->inheritance) &&
|
1996-12-28 23:07:49 +00:00
|
|
|
(prev->wired_count == entry->wired_count)) {
|
1996-03-13 01:18:14 +00:00
|
|
|
if (map->first_free == prev)
|
|
|
|
map->first_free = entry;
|
|
|
|
vm_map_entry_unlink(map, prev);
|
|
|
|
entry->start = prev->start;
|
|
|
|
entry->offset = prev->offset;
|
1996-05-18 03:38:05 +00:00
|
|
|
if (prev->object.vm_object)
|
|
|
|
vm_object_deallocate(prev->object.vm_object);
|
1996-03-13 01:18:14 +00:00
|
|
|
vm_map_entry_dispose(map, prev);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
next = entry->next;
|
|
|
|
if (next != &map->header) {
|
1996-07-30 03:08:57 +00:00
|
|
|
esize = entry->end - entry->start;
|
|
|
|
if ((entry->end == next->start) &&
|
|
|
|
(next->object.vm_object == entry->object.vm_object) &&
|
|
|
|
(!entry->object.vm_object ||
|
|
|
|
(entry->offset + esize == next->offset)) &&
|
1997-01-16 04:16:22 +00:00
|
|
|
(next->eflags == entry->eflags) &&
|
1996-07-30 03:08:57 +00:00
|
|
|
(next->protection == entry->protection) &&
|
|
|
|
(next->max_protection == entry->max_protection) &&
|
|
|
|
(next->inheritance == entry->inheritance) &&
|
1996-12-28 23:07:49 +00:00
|
|
|
(next->wired_count == entry->wired_count)) {
|
1996-03-13 01:18:14 +00:00
|
|
|
if (map->first_free == next)
|
|
|
|
map->first_free = entry;
|
|
|
|
vm_map_entry_unlink(map, next);
|
|
|
|
entry->end = next->end;
|
1996-05-18 03:38:05 +00:00
|
|
|
if (next->object.vm_object)
|
|
|
|
vm_object_deallocate(next->object.vm_object);
|
1996-03-13 01:18:14 +00:00
|
|
|
vm_map_entry_dispose(map, next);
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
}
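/*
 * For example: if the previous entry covers [A, B) with object O at
 * offset 0, the passed entry covers [B, C) with the same object at
 * offset (B - A), and both have identical eflags, protections,
 * inheritance and wire counts, then the previous entry is unlinked and
 * disposed of, one object reference is dropped, and the passed entry is
 * extended to cover [A, C).  The symmetric check then attempts the same
 * merge with the next entry.
 */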
|
|
|
|
/*
|
|
|
|
* vm_map_clip_start: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Asserts that the given entry begins at or after
|
|
|
|
* the specified address; if necessary,
|
|
|
|
* it splits the entry into two.
|
|
|
|
*/
|
|
|
|
#define vm_map_clip_start(map, entry, startaddr) \
|
|
|
|
{ \
|
|
|
|
if (startaddr > entry->start) \
|
|
|
|
_vm_map_clip_start(map, entry, startaddr); \
|
|
|
|
}
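/*
 * For example: clipping an entry that covers [0x2000, 0x6000) at address
 * 0x3000 creates a new front entry for [0x2000, 0x3000) and leaves the
 * original entry covering [0x3000, 0x6000) with its offset advanced by
 * 0x1000, so the original entry now begins at the requested address.
 */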
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This routine is called only when it is known that
|
|
|
|
* the entry must be split.
|
|
|
|
*/
|
1995-05-30 08:16:23 +00:00
|
|
|
static void
|
2001-07-04 20:15:18 +00:00
|
|
|
_vm_map_clip_start(vm_map_t map, vm_map_entry_t entry, vm_offset_t start)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t new_entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Split off the front portion -- note that we must insert the new
|
|
|
|
* entry BEFORE this one, so that this entry has the specified
|
|
|
|
* starting address.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1996-03-28 04:22:17 +00:00
|
|
|
vm_map_simplify_entry(map, entry);
|
|
|
|
|
1997-07-27 04:44:12 +00:00
|
|
|
/*
|
|
|
|
* If there is no object backing this entry, we might as well create
|
|
|
|
* one now. If we defer it, an object can get created after the map
|
|
|
|
* is clipped, and individual objects will be created for the split-up
|
|
|
|
* map. This is a bit of a hack, but is also about the best place to
|
|
|
|
* put this improvement.
|
|
|
|
*/
|
2001-02-04 06:19:28 +00:00
|
|
|
if (entry->object.vm_object == NULL && !map->system_map) {
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_object_t object;
|
|
|
|
object = vm_object_allocate(OBJT_DEFAULT,
|
|
|
|
atop(entry->end - entry->start));
|
|
|
|
entry->object.vm_object = object;
|
|
|
|
entry->offset = 0;
|
1997-07-27 04:44:12 +00:00
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
new_entry = vm_map_entry_create(map);
|
|
|
|
*new_entry = *entry;
|
|
|
|
|
|
|
|
new_entry->end = start;
|
|
|
|
entry->offset += (start - entry->start);
|
|
|
|
entry->start = start;
|
|
|
|
|
|
|
|
vm_map_entry_link(map, entry->prev, new_entry);
|
|
|
|
|
1999-02-07 21:48:23 +00:00
|
|
|
if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_object_reference(new_entry->object.vm_object);
|
1998-04-29 04:28:22 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_clip_end: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Asserts that the given entry ends at or before
|
|
|
|
* the specified address; if necessary,
|
|
|
|
* it splits the entry into two.
|
|
|
|
*/
|
|
|
|
#define vm_map_clip_end(map, entry, endaddr) \
|
|
|
|
{ \
|
|
|
|
if (endaddr < entry->end) \
|
|
|
|
_vm_map_clip_end(map, entry, endaddr); \
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This routine is called only when it is known that
|
|
|
|
* the entry must be split.
|
|
|
|
*/
|
1995-05-30 08:16:23 +00:00
|
|
|
static void
|
2001-07-04 20:15:18 +00:00
|
|
|
_vm_map_clip_end(vm_map_t map, vm_map_entry_t entry, vm_offset_t end)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t new_entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1997-07-27 04:44:12 +00:00
|
|
|
/*
|
|
|
|
* If there is no object backing this entry, we might as well create
|
|
|
|
* one now. If we defer it, an object can get created after the map
|
|
|
|
* is clipped, and individual objects will be created for the split-up
|
|
|
|
* map. This is a bit of a hack, but is also about the best place to
|
|
|
|
* put this improvement.
|
|
|
|
*/
|
2001-02-04 06:19:28 +00:00
|
|
|
if (entry->object.vm_object == NULL && !map->system_map) {
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_object_t object;
|
|
|
|
object = vm_object_allocate(OBJT_DEFAULT,
|
|
|
|
atop(entry->end - entry->start));
|
|
|
|
entry->object.vm_object = object;
|
|
|
|
entry->offset = 0;
|
1997-07-27 04:44:12 +00:00
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Create a new entry and insert it AFTER the specified entry
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
new_entry = vm_map_entry_create(map);
|
|
|
|
*new_entry = *entry;
|
|
|
|
|
|
|
|
new_entry->start = entry->end = end;
|
|
|
|
new_entry->offset += (end - entry->start);
|
|
|
|
|
|
|
|
vm_map_entry_link(map, entry, new_entry);
|
|
|
|
|
1999-02-07 21:48:23 +00:00
|
|
|
if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_object_reference(new_entry->object.vm_object);
|
1998-04-29 04:28:22 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* VM_MAP_RANGE_CHECK: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Asserts that the starting and ending region
|
|
|
|
* addresses fall within the valid range of the map.
|
|
|
|
*/
|
|
|
|
#define VM_MAP_RANGE_CHECK(map, start, end) \
|
|
|
|
{ \
|
|
|
|
if (start < vm_map_min(map)) \
|
|
|
|
start = vm_map_min(map); \
|
|
|
|
if (end > vm_map_max(map)) \
|
|
|
|
end = vm_map_max(map); \
|
|
|
|
if (start > end) \
|
|
|
|
start = end; \
|
|
|
|
}
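/*
 * For example: if the map's valid range runs from 0x1000 to 0x8000 and a
 * caller passes start = 0 and end = 0x9000, the macro clamps the pair to
 * that range; if clamping would leave start above end, start is pulled
 * down to end so the resulting range is empty rather than inverted.
 */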
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_submap: [ kernel use only ]
|
|
|
|
*
|
|
|
|
* Mark the given range as handled by a subordinate map.
|
|
|
|
*
|
|
|
|
* This range must have been created with vm_map_find,
|
|
|
|
* and no other operations may have been performed on this
|
|
|
|
* range prior to calling vm_map_submap.
|
|
|
|
*
|
|
|
|
* Only a limited number of operations can be performed
|
|
|
|
* within this range after calling vm_map_submap:
|
|
|
|
* vm_fault
|
|
|
|
* [Don't try vm_map_copy!]
|
|
|
|
*
|
|
|
|
* To remove a submapping, one must first remove the
|
|
|
|
* range from the superior map, and then destroy the
|
|
|
|
* submap (if desired). [Better yet, don't try it.]
|
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_submap(
|
|
|
|
vm_map_t map,
|
|
|
|
vm_offset_t start,
|
|
|
|
vm_offset_t end,
|
|
|
|
vm_map_t submap)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t entry;
|
1998-04-29 04:28:22 +00:00
|
|
|
int result = KERN_INVALID_ARGUMENT;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
vm_map_lock(map);
|
|
|
|
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
|
|
|
|
if (vm_map_lookup_entry(map, start, &entry)) {
|
|
|
|
vm_map_clip_start(map, entry, start);
|
1995-01-09 16:06:02 +00:00
|
|
|
} else
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = entry->next;
|
|
|
|
|
|
|
|
vm_map_clip_end(map, entry, end);
|
|
|
|
|
|
|
|
if ((entry->start == start) && (entry->end == end) &&
|
1999-02-07 21:48:23 +00:00
|
|
|
((entry->eflags & MAP_ENTRY_COW) == 0) &&
|
1997-01-16 04:16:22 +00:00
|
|
|
(entry->object.vm_object == NULL)) {
|
VM level code cleanups.
1) Start using TSM.
Struct procs continue to point to upages structure, after being freed.
Struct vmspace continues to point to pte object and kva space for kstack.
u_map is now superfluous.
2) vm_map's don't need to be reference counted. They always exist either
in the kernel or in a vmspace. The vmspaces are managed by reference
counts.
3) Remove the "wired" vm_map nonsense.
4) No need to keep a cache of kernel stack kva's.
5) Get rid of strange looking ++var, and change to var++.
6) Change more data structures to use our "zone" allocator. Added
struct proc, struct vmspace and struct vnode. This saves a significant
amount of kva space and physical memory. Additionally, this enables
TSM for the zone managed memory.
7) Keep ioopt disabled for now.
8) Remove the now bogus "single use" map concept.
9) Use generation counts or id's for data structures residing in TSM, where
it allows us to avoid unneeded restart overhead during traversals, where
blocking might occur.
10) Account better for memory deficits, so the pageout daemon will be able
to make enough memory available (experimental.)
11) Fix some vnode locking problems. (From Tor, I think.)
12) Add a check in ufs_lookup, to avoid lots of unneeded calls to bcmp.
(experimental.)
13) Significantly shrink, clean up, and slightly speed up the vm_fault.c
code. Use generation counts, get rid of unneeded collapse operations,
and clean up the cluster code.
14) Make vm_zone more suitable for TSM.
This commit is partially as a result of discussions and contributions from
other people, including DG, Tor Egge, PHK, and probably others that I
have forgotten to attribute (so let me know, if I forgot.)
This is not the infamous, final cleanup of the vnode stuff, but a necessary
step. Vnode mgmt should be correct, but things might still change, and
there is still some missing stuff (like ioopt, and physical backing of
non-merged cache files, debugging of layering concepts.)
1998-01-22 17:30:44 +00:00
|
|
|
entry->object.sub_map = submap;
|
1997-01-16 04:16:22 +00:00
|
|
|
entry->eflags |= MAP_ENTRY_IS_SUB_MAP;
|
1994-05-24 10:09:53 +00:00
|
|
|
result = KERN_SUCCESS;
|
|
|
|
}
|
|
|
|
vm_map_unlock(map);
|
|
|
|
|
1995-01-09 16:06:02 +00:00
|
|
|
return (result);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
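/*
 * Illustrative sketch: the call sequence the comment above requires --
 * create the range in the parent map with vm_map_find() first, and only
 * then mark it as handled by the subordinate map.  "demo_install_submap",
 * "demo_parent" and "demo_sub" are hypothetical names.
 */
#if 0
static int
demo_install_submap(vm_map_t demo_parent, vm_map_t demo_sub,
    vm_offset_t *addrp, vm_size_t size)
{
	int rv;

	rv = vm_map_find(demo_parent, NULL, 0, addrp, size, TRUE,
	    VM_PROT_ALL, VM_PROT_ALL, 0);
	if (rv != KERN_SUCCESS)
		return (rv);
	return (vm_map_submap(demo_parent, *addrp, *addrp + size, demo_sub));
}
#endif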
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_protect:
|
|
|
|
*
|
|
|
|
* Sets the protection of the specified address
|
|
|
|
* region in the target map. If "set_max" is
|
|
|
|
* specified, the maximum protection is to be set;
|
|
|
|
* otherwise, only the current protection is affected.
|
|
|
|
*/
|
|
|
|
int
|
1997-08-25 22:15:31 +00:00
|
|
|
vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end,
|
|
|
|
vm_prot_t new_prot, boolean_t set_max)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t current;
|
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
vm_map_lock(map);
|
|
|
|
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
|
|
|
|
if (vm_map_lookup_entry(map, start, &entry)) {
|
|
|
|
vm_map_clip_start(map, entry, start);
|
1996-12-28 23:07:49 +00:00
|
|
|
} else {
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = entry->next;
|
1996-12-28 23:07:49 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Make a first pass to check for protection violations.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
current = entry;
|
|
|
|
while ((current != &map->header) && (current->start < end)) {
|
1997-01-16 04:16:22 +00:00
|
|
|
if (current->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
1995-02-02 09:09:15 +00:00
|
|
|
vm_map_unlock(map);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ARGUMENT);
|
1995-02-02 09:09:15 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
if ((new_prot & current->max_protection) != new_prot) {
|
|
|
|
vm_map_unlock(map);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_PROTECTION_FAILURE);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
current = current->next;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Go back and fix up protections. [Note that clipping is not
|
|
|
|
* necessary the second time.]
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
current = entry;
|
|
|
|
while ((current != &map->header) && (current->start < end)) {
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_prot_t old_prot;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
vm_map_clip_end(map, current, end);
|
|
|
|
|
|
|
|
old_prot = current->protection;
|
|
|
|
if (set_max)
|
|
|
|
current->protection =
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
(current->max_protection = new_prot) &
|
|
|
|
old_prot;
|
1994-05-24 10:09:53 +00:00
|
|
|
else
|
|
|
|
current->protection = new_prot;
|
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Update physical map if necessary. Worry about copy-on-write
|
|
|
|
* here -- CHECK THIS XXX
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if (current->protection != old_prot) {
|
2002-05-12 05:22:56 +00:00
|
|
|
mtx_lock(&Giant);
|
1997-01-16 04:16:22 +00:00
|
|
|
#define MASK(entry) (((entry)->eflags & MAP_ENTRY_COW) ? ~VM_PROT_WRITE : \
|
1994-05-24 10:09:53 +00:00
|
|
|
VM_PROT_ALL)
|
1999-02-07 21:48:23 +00:00
|
|
|
pmap_protect(map->pmap, current->start,
|
|
|
|
current->end,
|
1999-06-12 23:10:38 +00:00
|
|
|
current->protection & MASK(current));
|
1994-05-24 10:09:53 +00:00
|
|
|
#undef MASK
|
2002-05-12 05:22:56 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1997-04-06 03:04:31 +00:00
|
|
|
vm_map_simplify_entry(map, current);
|
1994-05-24 10:09:53 +00:00
|
|
|
current = current->next;
|
|
|
|
}
|
|
|
|
vm_map_unlock(map);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
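
/*
 * Illustrative sketch (not part of the original source): a hypothetical
 * caller that makes an already-mapped range read-only.  Passing FALSE for
 * set_max changes only the current protection; passing TRUE would instead
 * set the maximum protection and mask the current protection against the
 * old value, as implemented above.
 */
#if 0
static int
example_make_readonly(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	/* Affect only the current protection, not max_protection. */
	return (vm_map_protect(map, start, end, VM_PROT_READ, FALSE));
}
#endif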

/*
 *	vm_map_madvise:
 *
 *	This routine traverses a process's map handling the madvise
 *	system call.  Advisories are classified as either those affecting
 *	the vm_map_entry structure, or those affecting the underlying
 *	objects.
 */
int
vm_map_madvise(
	vm_map_t map,
	vm_offset_t start,
	vm_offset_t end,
	int behav)
{
	vm_map_entry_t current, entry;
	int modify_map = 0;

	/*
	 * Some madvise calls directly modify the vm_map_entry, in which case
	 * we need to use an exclusive lock on the map and we need to perform
	 * various clipping operations.  Otherwise we only need a read-lock
	 * on the map.
	 */
	switch(behav) {
	case MADV_NORMAL:
	case MADV_SEQUENTIAL:
	case MADV_RANDOM:
	case MADV_NOSYNC:
	case MADV_AUTOSYNC:
	case MADV_NOCORE:
	case MADV_CORE:
		modify_map = 1;
		vm_map_lock(map);
		break;
	case MADV_WILLNEED:
	case MADV_DONTNEED:
	case MADV_FREE:
		vm_map_lock_read(map);
		break;
	default:
		return (KERN_INVALID_ARGUMENT);
	}

	/*
	 * Locate starting entry and clip if necessary.
	 */
	VM_MAP_RANGE_CHECK(map, start, end);

	if (vm_map_lookup_entry(map, start, &entry)) {
		if (modify_map)
			vm_map_clip_start(map, entry, start);
	} else {
		entry = entry->next;
	}

	if (modify_map) {
		/*
		 * madvise behaviors that are implemented in the vm_map_entry.
		 *
		 * We clip the vm_map_entry so that behavioral changes are
		 * limited to the specified address range.
		 */
		for (current = entry;
		     (current != &map->header) && (current->start < end);
		     current = current->next
		) {
			if (current->eflags & MAP_ENTRY_IS_SUB_MAP)
				continue;

			vm_map_clip_end(map, current, end);

			switch (behav) {
			case MADV_NORMAL:
				vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_NORMAL);
				break;
			case MADV_SEQUENTIAL:
				vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_SEQUENTIAL);
				break;
			case MADV_RANDOM:
				vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_RANDOM);
				break;
			case MADV_NOSYNC:
				current->eflags |= MAP_ENTRY_NOSYNC;
				break;
			case MADV_AUTOSYNC:
				current->eflags &= ~MAP_ENTRY_NOSYNC;
				break;
			case MADV_NOCORE:
				current->eflags |= MAP_ENTRY_NOCOREDUMP;
				break;
			case MADV_CORE:
				current->eflags &= ~MAP_ENTRY_NOCOREDUMP;
				break;
			default:
				break;
			}
			vm_map_simplify_entry(map, current);
		}
		vm_map_unlock(map);
	} else {
		vm_pindex_t pindex;
		int count;

		/*
		 * madvise behaviors that are implemented in the underlying
		 * vm_object.
		 *
		 * Since we don't clip the vm_map_entry, we have to clip
		 * the vm_object pindex and count.
		 */
		for (current = entry;
		     (current != &map->header) && (current->start < end);
		     current = current->next
		) {
			vm_offset_t useStart;

			if (current->eflags & MAP_ENTRY_IS_SUB_MAP)
				continue;

			pindex = OFF_TO_IDX(current->offset);
			count = atop(current->end - current->start);
			useStart = current->start;

			if (current->start < start) {
				pindex += atop(start - current->start);
				count -= atop(start - current->start);
				useStart = start;
			}
			if (current->end > end)
				count -= atop(current->end - end);

			if (count <= 0)
				continue;

			vm_object_madvise(current->object.vm_object,
			    pindex, count, behav);
			if (behav == MADV_WILLNEED) {
				mtx_lock(&Giant);
				pmap_object_init_pt(
				    map->pmap,
				    useStart,
				    current->object.vm_object,
				    pindex,
				    (count << PAGE_SHIFT),
				    MAP_PREFAULT_MADVISE
				);
				mtx_unlock(&Giant);
			}
		}
		vm_map_unlock_read(map);
	}
	return (0);
}
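
/*
 * Illustrative sketch (not part of the original source): a hypothetical
 * caller applying both kinds of advice handled above.  MADV_SEQUENTIAL is
 * recorded in the vm_map_entry (exclusive map lock path), while
 * MADV_WILLNEED is passed through to the underlying vm_object and may
 * prefault pages via pmap_object_init_pt (read-lock path).
 */
#if 0
static void
example_advise_range(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	(void) vm_map_madvise(map, start, end, MADV_SEQUENTIAL);
	(void) vm_map_madvise(map, start, end, MADV_WILLNEED);
}
#endif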

/*
 *	vm_map_inherit:
 *
 *	Sets the inheritance of the specified address
 *	range in the target map.  Inheritance
 *	affects how the map will be shared with
 *	child maps at the time of vm_map_fork.
 */
int
vm_map_inherit(vm_map_t map, vm_offset_t start, vm_offset_t end,
	       vm_inherit_t new_inheritance)
{
	vm_map_entry_t entry;
	vm_map_entry_t temp_entry;

	switch (new_inheritance) {
	case VM_INHERIT_NONE:
	case VM_INHERIT_COPY:
	case VM_INHERIT_SHARE:
		break;
	default:
		return (KERN_INVALID_ARGUMENT);
	}
	vm_map_lock(map);
	VM_MAP_RANGE_CHECK(map, start, end);
	if (vm_map_lookup_entry(map, start, &temp_entry)) {
		entry = temp_entry;
		vm_map_clip_start(map, entry, start);
	} else
		entry = temp_entry->next;
	while ((entry != &map->header) && (entry->start < end)) {
		vm_map_clip_end(map, entry, end);
		entry->inheritance = new_inheritance;
		vm_map_simplify_entry(map, entry);
		entry = entry->next;
	}
	vm_map_unlock(map);
	return (KERN_SUCCESS);
}
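
/*
 * Illustrative sketch (not part of the original source): a hypothetical
 * caller marking a range so that a child created by vm_map_fork shares
 * the mapping instead of receiving a copy.  Any value other than
 * VM_INHERIT_NONE, VM_INHERIT_COPY or VM_INHERIT_SHARE is rejected with
 * KERN_INVALID_ARGUMENT, as the switch above shows.
 */
#if 0
static int
example_share_with_child(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	return (vm_map_inherit(map, start, end, VM_INHERIT_SHARE));
}
#endif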

/*
 * Implement the semantics of mlock
 */
int
vm_map_user_pageable(
	vm_map_t map,
	vm_offset_t start,
	vm_offset_t end,
	boolean_t new_pageable)
{
	vm_map_entry_t entry;
	vm_map_entry_t start_entry;
	vm_offset_t estart;
	vm_offset_t eend;
	int rv;

	vm_map_lock(map);
	VM_MAP_RANGE_CHECK(map, start, end);

	if (vm_map_lookup_entry(map, start, &start_entry) == FALSE) {
		vm_map_unlock(map);
		return (KERN_INVALID_ADDRESS);
	}

	if (new_pageable) {

		entry = start_entry;
		vm_map_clip_start(map, entry, start);

		/*
		 * Now decrement the wiring count for each region. If a region
		 * becomes completely unwired, unwire its physical pages and
		 * mappings.
		 */
		while ((entry != &map->header) && (entry->start < end)) {
			if (entry->eflags & MAP_ENTRY_USER_WIRED) {
				vm_map_clip_end(map, entry, end);
				entry->eflags &= ~MAP_ENTRY_USER_WIRED;
				entry->wired_count--;
				if (entry->wired_count == 0)
					vm_fault_unwire(map, entry->start, entry->end);
			}
			vm_map_simplify_entry(map, entry);
			entry = entry->next;
		}
	} else {

		entry = start_entry;

		while ((entry != &map->header) && (entry->start < end)) {

			if (entry->eflags & MAP_ENTRY_USER_WIRED) {
				entry = entry->next;
				continue;
			}

			if (entry->wired_count != 0) {
				entry->wired_count++;
				entry->eflags |= MAP_ENTRY_USER_WIRED;
				entry = entry->next;
				continue;
			}

			/* Here on entry being newly wired */

			if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
				int copyflag = entry->eflags & MAP_ENTRY_NEEDS_COPY;
				if (copyflag && ((entry->protection & VM_PROT_WRITE) != 0)) {

					vm_object_shadow(&entry->object.vm_object,
					    &entry->offset,
					    atop(entry->end - entry->start));
					entry->eflags &= ~MAP_ENTRY_NEEDS_COPY;

				} else if (entry->object.vm_object == NULL &&
					   !map->system_map) {

					entry->object.vm_object =
					    vm_object_allocate(OBJT_DEFAULT,
						atop(entry->end - entry->start));
					entry->offset = (vm_offset_t) 0;

				}
			}

			vm_map_clip_start(map, entry, start);
			vm_map_clip_end(map, entry, end);

			entry->wired_count++;
			entry->eflags |= MAP_ENTRY_USER_WIRED;
			estart = entry->start;
			eend = entry->end;

			/* First we need to allow map modifications */
			vm_map_set_recursive(map);
			vm_map_lock_downgrade(map);
			map->timestamp++;

			rv = vm_fault_user_wire(map, entry->start, entry->end);
			if (rv) {

				entry->wired_count--;
				entry->eflags &= ~MAP_ENTRY_USER_WIRED;

				vm_map_clear_recursive(map);
				vm_map_unlock(map);

				/*
				 * At this point, the map is unlocked, and
				 * entry might no longer be valid.  Use copy
				 * of entry start value obtained while entry
				 * was valid.
				 */
				(void) vm_map_user_pageable(map, start, estart,
				    TRUE);
				return rv;
			}

			vm_map_clear_recursive(map);
			if (vm_map_lock_upgrade(map)) {
				vm_map_lock(map);
				if (vm_map_lookup_entry(map, estart, &entry)
				    == FALSE) {
					vm_map_unlock(map);
					/*
					 * vm_fault_user_wire succeeded, thus
					 * the area between start and eend
					 * is wired and has to be unwired
					 * here as part of the cleanup.
					 */
					(void) vm_map_user_pageable(map,
					    start,
					    eend,
					    TRUE);
					return (KERN_INVALID_ADDRESS);
				}
			}
			vm_map_simplify_entry(map, entry);
		}
	}
	map->timestamp++;
	vm_map_unlock(map);
	return KERN_SUCCESS;
}
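
/*
 * Illustrative sketch (not part of the original source): hypothetical
 * mlock/munlock-style wrappers.  new_pageable == FALSE wires the range
 * (faulting pages in via vm_fault_user_wire), TRUE unwires it again.
 */
#if 0
static int
example_mlock_range(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	return (vm_map_user_pageable(map, start, end, FALSE));
}

static int
example_munlock_range(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	return (vm_map_user_pageable(map, start, end, TRUE));
}
#endif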
/*
|
|
|
|
* vm_map_pageable:
|
|
|
|
*
|
|
|
|
* Sets the pageability of the specified address
|
|
|
|
* range in the target map. Regions specified
|
|
|
|
* as not pageable require locked-down physical
|
|
|
|
* memory and physical page maps.
|
|
|
|
*
|
|
|
|
* The map must not be locked, but a reference
|
|
|
|
* must remain to the map throughout the call.
|
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_pageable(
|
|
|
|
vm_map_t map,
|
|
|
|
vm_offset_t start,
|
|
|
|
vm_offset_t end,
|
|
|
|
boolean_t new_pageable)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t start_entry;
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_offset_t failed = 0;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
int rv;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_lock(map);
|
|
|
|
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Only one pageability change may take place at one time, since
|
|
|
|
* vm_fault assumes it will be called only once for each
|
|
|
|
* wiring/unwiring. Therefore, we have to make sure we're actually
|
|
|
|
* changing the pageability for the entire region. We do so before
|
|
|
|
* making any changes.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if (vm_map_lookup_entry(map, start, &start_entry) == FALSE) {
|
|
|
|
vm_map_unlock(map);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ADDRESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
entry = start_entry;
|
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Actions are rather different for wiring and unwiring, so we have
|
|
|
|
* two separate cases.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
if (new_pageable) {
|
|
|
|
vm_map_clip_start(map, entry, start);
|
|
|
|
|
|
|
|
/*
|
         * Unwiring. First ensure that the range to be unwired is
         * really wired down and that there are no holes.
         */
        while ((entry != &map->header) && (entry->start < end)) {
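            /*
             * A wired_count of zero means part of the range was never
             * wired, and a gap before the next entry means the range
             * has a hole; either way the request is invalid.
             */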
            if (entry->wired_count == 0 ||
                (entry->end < end &&
                (entry->next == &map->header ||
                entry->next->start > entry->end))) {
                vm_map_unlock(map);
                return (KERN_INVALID_ARGUMENT);
            }
            entry = entry->next;
        }

        /*
         * Now decrement the wiring count for each region. If a region
         * becomes completely unwired, unwire its physical pages and
         * mappings.
         */
        entry = start_entry;
        while ((entry != &map->header) && (entry->start < end)) {
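            /*
             * Clip the entry at the end of the range so that only the
             * requested region has its wiring count adjusted.
             */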
            vm_map_clip_end(map, entry, end);
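            /*
             * Drop one wiring reference. When the count reaches zero,
             * vm_fault_unwire() releases the wiring on the pages backing
             * the entry, and vm_map_simplify_entry() re-coalesces the
             * entry with neighbors whose attributes match again.
             */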
            entry->wired_count--;
            if (entry->wired_count == 0)
                vm_fault_unwire(map, entry->start, entry->end);
            vm_map_simplify_entry(map, entry);
            entry = entry->next;
        }
    } else {
        /*
         * Wiring. We must do this in two passes:
         *
         * 1. Holding the write lock, we create any shadow or zero-fill
         * objects that need to be created. Then we clip each map
         * entry to the region to be wired and increment its wiring
         * count. We create objects before clipping the map entries
         * to avoid object proliferation.
         *
         * 2. We downgrade to a read lock, and call vm_fault_wire to
         * fault in the pages for any newly wired area (wired_count is
         * 1).
         *
         * Downgrading to a read lock for vm_fault_wire avoids a possible
         * deadlock with another process that may have faulted on one
         * of the pages to be wired (it would mark the page busy,
         * blocking us, then in turn block on the map lock that we
         * hold). Because of problems in the recursive lock package,
         * we cannot upgrade to a write lock in vm_map_lookup. Thus,
         * any actions that require the write lock must be done
         * beforehand. Because we keep the read lock on the map, the
         * copy-on-write status of the entries we modify here cannot
         * change.
         */

        /*
         * Pass 1.
         */
        while ((entry != &map->header) && (entry->start < end)) {
            if (entry->wired_count == 0) {

                /*
                 * Perform actions of vm_map_lookup that need
                 * the write lock on the map: create a shadow
                 * object for a copy-on-write region, or an
                 * object for a zero-fill region.
                 *
                 * We don't have to do this for entries that
                 * point to sub maps, because we won't
                 * hold the lock on the sub map.
                 */
                if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
                    int copyflag = entry->eflags & MAP_ENTRY_NEEDS_COPY;
                    if (copyflag &&
                        ((entry->protection & VM_PROT_WRITE) != 0)) {
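                        /*
                         * Interpose a shadow object in front of the current
                         * backing object so the wiring faults get private,
                         * writable copies; atop() converts the entry's byte
                         * length into a page count.
                         */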
                        vm_object_shadow(&entry->object.vm_object,
                            &entry->offset,
                            atop(entry->end - entry->start));
                        entry->eflags &= ~MAP_ENTRY_NEEDS_COPY;
                    } else if (entry->object.vm_object == NULL &&
                        !map->system_map) {
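                        /*
                         * No backing object yet: give the entry an anonymous
                         * OBJT_DEFAULT (zero-fill) object so the pages faulted
                         * in by the wiring have somewhere to live.
                         */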
                        entry->object.vm_object =
                            vm_object_allocate(OBJT_DEFAULT,
                            atop(entry->end - entry->start));
                        entry->offset = (vm_offset_t) 0;
                    }
                }
            }
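            /*
             * Clip the entry to the wired range and take a wiring
             * reference; pass 2 will call vm_fault_wire() for entries
             * whose count has just gone from 0 to 1.
             */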
            vm_map_clip_start(map, entry, start);
            vm_map_clip_end(map, entry, end);
            entry->wired_count++;

            /*
|
|
|
* Check for holes
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1995-01-09 16:06:02 +00:00
|
|
|
if (entry->end < end &&
|
|
|
|
(entry->next == &map->header ||
|
|
|
|
entry->next->start > entry->end)) {
|
|
|
|
/*
|
|
|
|
* Found one. Object creation actions do not
|
|
|
|
* need to be undone, but the wired counts
|
|
|
|
* need to be restored.
|
|
|
|
*/
|
|
|
|
while (entry != &map->header && entry->end > start) {
|
|
|
|
entry->wired_count--;
|
|
|
|
entry = entry->prev;
|
|
|
|
}
|
|
|
|
vm_map_unlock(map);
|
|
|
|
return (KERN_INVALID_ARGUMENT);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1995-01-09 16:06:02 +00:00
|
|
|
entry = entry->next;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Pass 2.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* HACK HACK HACK HACK
|
1995-05-30 08:16:23 +00:00
|
|
|
*
|
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!
Much needed overhaul of the VM system. Included in this first round of
changes:
1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
haspage, and sync operations are supported. The haspage interface now
provides information about clusterability. All pager routines now take
struct vm_object's instead of "pagers".
2) Improved data structures. In the previous paradigm, there is constant
confusion caused by pagers being both a data structure ("allocate a
pager") and a collection of routines. The idea of a pager structure has
essentially been eliminated. Objects now have types, and this type is
used to index the appropriate pager. In most cases, items in the pager
structure were duplicated in the object data structure and thus were
unnecessary. In the few cases that remained, an un_pager structure union
was created in the object to contain these items. (A rough sketch of this
layout follows this log message.)
3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
be removed. For instance, vm_object_enter(), vm_object_lookup(),
vm_object_remove(), and the associated object hash list were some of the
things that were removed.
4) simple_lock's removed. Discussion with several people reveals that the
SMP locking primitives used in the VM system aren't likely the mechanism
that we'll be adopting. Even if it were, the locking that was in the code
was very inadequate and would have to be mostly re-done anyway. The
locking in a uni-processor kernel was a no-op but went a long way toward
making the code difficult to read and debug.
5) Places that attempted to kludge-up the fact that we don't have kernel
thread support have been fixed to reflect the reality that we are really
dealing with processes, not threads. The VM system didn't have complete
thread support, so the comments and mis-named routines were just wrong.
We now use tsleep and wakeup directly in the lock routines, for instance.
6) Where appropriate, the pagers have been improved, especially in the
pager_alloc routines. Most of the pager_allocs have been rewritten and
are now faster and easier to maintain.
7) The pagedaemon pageout clustering algorithm has been rewritten and
now tries harder to output an even number of pages before and after
the requested page. This is sort of the reverse of the ideal pagein
algorithm and should provide better overall performance.
8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
have been removed. Some other unnecessary casts have also been removed.
9) Some almost useless debugging code removed.
10) Terminology of shadow objects vs. backing objects straightened out.
The fact that the vm_object data structure essentially had this
backwards really confused things. The use of "shadow" and "backing
object" throughout the code is now internally consistent and correct
in the Mach terminology.
11) Several minor bug fixes, including one in the vm daemon that caused
0 RSS objects to not get purged as intended.
12) A "default pager" has now been created which cleans up the transition
of objects to the "swap" type. The previous checks throughout the code
for swp->pg_data != NULL were really ugly. This change also provides
the rudiments for future backing of "anonymous" memory by something
other than the swap pager (via the vnode pager, for example), and it
allows the decision about which of these pagers to use to be made
dynamically (although it will need some additional decision code to do
this, of course).
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
object" code has been removed. MAP_COPY was undocumented and non-
standard. It was furthermore broken in several ways which caused its
behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
continue to work correctly, but via the slightly different semantics
of MAP_PRIVATE.
14) (dyson) Sharing maps have been removed. Their marginal usefulness in a
threads design can be worked around in other ways. Both #12 and #13
were done to simplify the code and improve readability and maintain-
ability. (As were most all of these changes)
TODO:
1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
this will reduce the vnode pager to a mere fraction of its current size.
2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
information provided by the new haspage pager interface. This will
substantially reduce the overhead by eliminating a large number of
VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
improved to provide both a "behind" and "ahead" indication of
contiguousness.
3) Implement the extended features of pager_haspage in swap_pager_haspage().
It currently just says 0 pages ahead/behind.
4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
via a much more general mechanism that could also be used for disk
striping of regular filesystems.
5) Do something to improve the architecture of vm_object_collapse(). The
fact that it makes calls into the swap pager and knows too much about
how the swap pager operates really bothers me. It also doesn't allow
for collapsing of non-swap pager objects ("unnamed" objects backed by
other pagers).
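The object-type/pager reorganization described in items 1 and 2 above can be
pictured with a small sketch. This is an illustrative assumption about the
layout, not a quote of the actual declarations of that era; the structure,
field, and table names below are hypothetical.
/*
 * Sketch only: each vm_object carries a type, and the type indexes a table
 * of pager operation vectors.  Pager-private state that used to live in a
 * separate "pager" structure moves into a small union inside the object.
 * All identifiers here are illustrative assumptions.
 */
struct pagerops {
	void        (*pgo_init)(void);
	vm_object_t (*pgo_alloc)(void *handle, vm_size_t size,
			vm_prot_t prot, vm_ooffset_t offset);
	void        (*pgo_dealloc)(vm_object_t object);
	int         (*pgo_getpages)(vm_object_t object, vm_page_t *ma,
			int count, int reqpage);
	int         (*pgo_putpages)(vm_object_t object, vm_page_t *ma,
			int count, boolean_t sync, int *rtvals);
	boolean_t   (*pgo_haspage)(vm_object_t object, vm_pindex_t pindex,
			int *before, int *after);	/* clusterability hints */
	void        (*pgo_sync)(void);
};

static struct pagerops *pagertab[] = {
	&defaultpagerops,	/* OBJT_DEFAULT */
	&swappagerops,		/* OBJT_SWAP    */
	&vnodepagerops,		/* OBJT_VNODE   */
	&devicepagerops,	/* OBJT_DEVICE  */
};

/* The object's type selects its pager operations. */
#define	OBJ_PAGER_OPS(object)	(pagertab[(object)->type])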
1995-07-13 08:48:48 +00:00
|
|
|
* If we are wiring in the kernel map or a submap of it,
|
|
|
|
* unlock the map to avoid deadlocks. We trust that the
|
|
|
|
* kernel is well-behaved, and therefore will not do
|
|
|
|
* anything destructive to this region of the map while
|
|
|
|
* we have it unlocked. We cannot trust user processes
|
|
|
|
* to do the same.
|
1995-05-30 08:16:23 +00:00
|
|
|
*
|
1994-05-24 10:09:53 +00:00
|
|
|
* HACK HACK HACK HACK
|
|
|
|
*/
|
|
|
|
if (vm_map_pmap(map) == kernel_pmap) {
|
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_unlock(map); /* trust me ... */
|
|
|
|
} else {
|
1997-08-18 02:06:35 +00:00
|
|
|
vm_map_lock_downgrade(map);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
rv = 0;
|
|
|
|
entry = start_entry;
|
|
|
|
while (entry != &map->header && entry->start < end) {
|
1995-01-09 16:06:02 +00:00
|
|
|
/*
|
|
|
|
* If vm_fault_wire fails for any page we need to undo
|
|
|
|
* what has been done. We decrement the wiring count
|
|
|
|
* for those pages which have not yet been wired (now)
|
|
|
|
* and unwire those that have (later).
|
1995-05-30 08:16:23 +00:00
|
|
|
*
|
1995-01-09 16:06:02 +00:00
|
|
|
* XXX this violates the locking protocol on the map,
|
|
|
|
* needs to be fixed.
|
|
|
|
*/
|
|
|
|
if (rv)
|
|
|
|
entry->wired_count--;
|
|
|
|
else if (entry->wired_count == 1) {
|
|
|
|
rv = vm_fault_wire(map, entry->start, entry->end);
|
|
|
|
if (rv) {
|
|
|
|
failed = entry->start;
|
|
|
|
entry->wired_count--;
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1995-01-09 16:06:02 +00:00
|
|
|
entry = entry->next;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
2002-03-18 15:08:09 +00:00
|
|
|
if (vm_map_pmap(map) == kernel_pmap) {
|
|
|
|
vm_map_lock(map);
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
if (rv) {
|
2002-03-18 15:08:09 +00:00
|
|
|
vm_map_unlock(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
(void) vm_map_pageable(map, start, failed, TRUE);
|
|
|
|
return (rv);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2001-10-14 20:47:08 +00:00
|
|
|
/*
|
|
|
|
* An exclusive lock on the map is needed in order to call
|
|
|
|
* vm_map_simplify_entry(). If the current lock on the map
|
|
|
|
* is only a shared lock, an upgrade is needed.
|
|
|
|
*/
|
|
|
|
if (vm_map_pmap(map) != kernel_pmap &&
|
|
|
|
vm_map_lock_upgrade(map)) {
|
|
|
|
vm_map_lock(map);
|
|
|
|
if (vm_map_lookup_entry(map, start, &start_entry) ==
|
|
|
|
FALSE) {
|
|
|
|
vm_map_unlock(map);
|
|
|
|
return KERN_SUCCESS;
|
|
|
|
}
|
|
|
|
}
|
1996-12-28 23:07:49 +00:00
|
|
|
vm_map_simplify_entry(map, start_entry);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
vm_map_unlock(map);
|
|
|
|
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
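As a caller's-eye note on the wiring routine above (a sketch under
assumptions, not code from this file): the failure path re-invokes
vm_map_pageable() with TRUE to undo a partial wiring, which implies that
FALSE in the final argument wires a range and TRUE makes it pageable again.
/*
 * Hypothetical caller sketch, inferred from the recovery path above;
 * the sense of the final ("new_pageable") argument is an assumption.
 */
	if (vm_map_pageable(map, trunc_page(addr), round_page(addr + len),
	    FALSE) != KERN_SUCCESS)		/* FALSE: wire the pages down */
		return (ENOMEM);
	/* ... work on the wired range ... */
	(void) vm_map_pageable(map, trunc_page(addr), round_page(addr + len),
	    TRUE);				/* TRUE: make it pageable again */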
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_clean
|
|
|
|
*
|
|
|
|
* Push any dirty cached pages in the address range to their pager.
|
|
|
|
* If syncio is TRUE, dirty pages are written synchronously.
|
|
|
|
* If invalidate is TRUE, any cached pages are freed as well.
|
|
|
|
*
|
|
|
|
* Returns an error if any part of the specified range is not mapped.
|
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_clean(
|
|
|
|
vm_map_t map,
|
|
|
|
vm_offset_t start,
|
|
|
|
vm_offset_t end,
|
|
|
|
boolean_t syncio,
|
|
|
|
boolean_t invalidate)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t current;
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_entry_t entry;
|
|
|
|
vm_size_t size;
|
|
|
|
vm_object_t object;
|
1995-12-11 04:58:34 +00:00
|
|
|
vm_ooffset_t offset;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_lock_read(map);
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
if (!vm_map_lookup_entry(map, start, &entry)) {
|
|
|
|
vm_map_unlock_read(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ADDRESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
/*
|
|
|
|
* Make a first pass to check for holes.
|
|
|
|
*/
|
|
|
|
for (current = entry; current->start < end; current = current->next) {
|
1997-01-16 04:16:22 +00:00
|
|
|
if (current->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_unlock_read(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ARGUMENT);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
if (end > current->end &&
|
|
|
|
(current->next == &map->header ||
|
1995-01-09 16:06:02 +00:00
|
|
|
current->end != current->next->start)) {
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_unlock_read(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ADDRESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
1998-05-21 07:47:58 +00:00
|
|
|
if (invalidate)
|
|
|
|
pmap_remove(vm_map_pmap(map), start, end);
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Make a second pass, cleaning/uncaching pages from the indicated
|
|
|
|
* objects as we go.
|
|
|
|
*/
|
|
|
|
for (current = entry; current->start < end; current = current->next) {
|
|
|
|
offset = current->offset + (start - current->start);
|
|
|
|
size = (end <= current->end ? end : current->end) - start;
|
1999-02-07 21:48:23 +00:00
|
|
|
if (current->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_t smap;
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_entry_t tentry;
|
|
|
|
vm_size_t tsize;
|
|
|
|
|
1999-02-07 21:48:23 +00:00
|
|
|
smap = current->object.sub_map;
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_lock_read(smap);
|
|
|
|
(void) vm_map_lookup_entry(smap, offset, &tentry);
|
|
|
|
tsize = tentry->end - offset;
|
|
|
|
if (tsize < size)
|
|
|
|
size = tsize;
|
|
|
|
object = tentry->object.vm_object;
|
|
|
|
offset = tentry->offset + (offset - tentry->start);
|
|
|
|
vm_map_unlock_read(smap);
|
|
|
|
} else {
|
|
|
|
object = current->object.vm_object;
|
|
|
|
}
|
1996-03-04 02:04:24 +00:00
|
|
|
/*
|
|
|
|
* Note that there is absolutely no sense in writing out
|
|
|
|
* anonymous objects, so we track down the vnode object
|
|
|
|
* to write out.
|
|
|
|
* We invalidate (remove) all pages from the address space
|
|
|
|
* anyway, for semantic correctness.
|
2002-03-07 03:54:56 +00:00
|
|
|
*
|
|
|
|
* note: certain anonymous maps, such as MAP_NOSYNC maps,
|
|
|
|
* may start out with a NULL object.
|
1996-03-04 02:04:24 +00:00
|
|
|
*/
|
2002-03-07 03:54:56 +00:00
|
|
|
while (object && object->backing_object) {
|
1996-03-04 02:04:24 +00:00
|
|
|
object = object->backing_object;
|
|
|
|
offset += object->backing_object_offset;
|
2002-03-10 21:52:48 +00:00
|
|
|
if (object->size < OFF_TO_IDX(offset + size))
|
1996-03-04 02:04:24 +00:00
|
|
|
size = IDX_TO_OFF(object->size) - offset;
|
|
|
|
}
|
2000-01-21 20:17:01 +00:00
|
|
|
if (object && (object->type == OBJT_VNODE) &&
|
|
|
|
(current->protection & VM_PROT_WRITE)) {
|
1995-02-14 04:00:17 +00:00
|
|
|
/*
|
2000-01-21 20:17:01 +00:00
|
|
|
* Flush pages if writing is allowed, invalidate them
|
|
|
|
* if invalidation requested. Pages undergoing I/O
|
|
|
|
* will be ignored by vm_object_page_remove().
|
1995-03-22 12:24:11 +00:00
|
|
|
*
|
2000-01-21 20:17:01 +00:00
|
|
|
* We cannot lock the vnode and then wait for paging
|
|
|
|
* to complete without deadlocking against vm_fault.
|
|
|
|
* Instead we simply call vm_object_page_remove() and
|
|
|
|
* allow it to block internally on a page-by-page
|
|
|
|
* basis when it encounters pages undergoing async
|
|
|
|
* I/O.
|
1995-02-14 04:00:17 +00:00
|
|
|
*/
|
2000-01-21 20:17:01 +00:00
|
|
|
int flags;
|
|
|
|
|
|
|
|
vm_object_reference(object);
|
2001-09-12 08:38:13 +00:00
|
|
|
vn_lock(object->handle, LK_EXCLUSIVE | LK_RETRY, curthread);
|
2000-01-21 20:17:01 +00:00
|
|
|
flags = (syncio || invalidate) ? OBJPC_SYNC : 0;
|
|
|
|
flags |= invalidate ? OBJPC_INVAL : 0;
|
|
|
|
vm_object_page_clean(object,
|
|
|
|
OFF_TO_IDX(offset),
|
|
|
|
OFF_TO_IDX(offset + size + PAGE_MASK),
|
|
|
|
flags);
|
|
|
|
if (invalidate) {
|
|
|
|
/*vm_object_pip_wait(object, "objmcl");*/
|
|
|
|
vm_object_page_remove(object,
|
|
|
|
OFF_TO_IDX(offset),
|
|
|
|
OFF_TO_IDX(offset + size + PAGE_MASK),
|
|
|
|
FALSE);
|
1996-02-11 22:03:49 +00:00
|
|
|
}
|
2001-09-12 08:38:13 +00:00
|
|
|
VOP_UNLOCK(object->handle, 0, curthread);
|
2000-01-21 20:17:01 +00:00
|
|
|
vm_object_deallocate(object);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
start += size;
|
|
|
|
}
|
|
|
|
|
|
|
|
vm_map_unlock_read(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
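A brief caller's-eye sketch of the interface documented in the header
comment above may help; the msync(2)-style flag mapping is an assumption for
illustration, not a quote of the syscall code.
/*
 * Hypothetical caller sketch: an msync(2)-like request maps onto
 * vm_map_clean().  syncio selects synchronous writes, and invalidate
 * additionally frees the cached pages, per the header comment above.
 */
	rv = vm_map_clean(map, trunc_page(addr), round_page(addr + len),
	    (flags & MS_SYNC) != 0,		/* syncio */
	    (flags & MS_INVALIDATE) != 0);	/* invalidate */
	if (rv != KERN_SUCCESS)
		return (EINVAL);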
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_entry_unwire: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Make the region specified by this entry pageable.
|
|
|
|
*
|
|
|
|
* The map in question should be locked.
|
|
|
|
* [This is the reason for this routine's existence.]
|
|
|
|
*/
|
1996-12-07 07:44:05 +00:00
|
|
|
static void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
vm_fault_unwire(map, entry->start, entry->end);
|
|
|
|
entry->wired_count = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_entry_delete: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Deallocate the given entry from the target map.
|
1995-01-09 16:06:02 +00:00
|
|
|
*/
|
1996-12-07 07:44:05 +00:00
|
|
|
static void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_entry_delete(vm_map_t map, vm_map_entry_t entry)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
vm_map_entry_unlink(map, entry);
|
|
|
|
map->size -= entry->end - entry->start;
|
|
|
|
|
1999-02-07 21:48:23 +00:00
|
|
|
if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
|
1995-01-09 16:06:02 +00:00
|
|
|
vm_object_deallocate(entry->object.vm_object);
|
1996-06-16 20:37:31 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
vm_map_entry_dispose(map, entry);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vm_map_delete: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Deallocates the given address range from the target
|
|
|
|
* map.
|
|
|
|
*/
|
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_delete(vm_map_t map, vm_offset_t start, vm_offset_t end)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-05-04 03:01:44 +00:00
|
|
|
vm_object_t object;
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry;
|
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t first_entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Find the start of the region, and clip it
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1999-04-04 07:11:02 +00:00
|
|
|
if (!vm_map_lookup_entry(map, start, &first_entry))
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = first_entry->next;
|
1999-04-04 07:11:02 +00:00
|
|
|
else {
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = first_entry;
|
|
|
|
vm_map_clip_start(map, entry, start);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Save the free space hint
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1996-05-18 03:38:05 +00:00
|
|
|
if (entry == &map->header) {
|
|
|
|
map->first_free = &map->header;
|
1998-04-28 05:54:47 +00:00
|
|
|
} else if (map->first_free->start >= start) {
|
1994-05-24 10:09:53 +00:00
|
|
|
map->first_free = entry->prev;
|
1998-04-28 05:54:47 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Step through all entries in this region
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
while ((entry != &map->header) && (entry->start < end)) {
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t next;
|
1996-05-18 03:38:05 +00:00
|
|
|
vm_offset_t s, e;
|
1998-05-04 03:01:44 +00:00
|
|
|
vm_pindex_t offidxstart, offidxend, count;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
vm_map_clip_end(map, entry, end);
|
|
|
|
|
|
|
|
s = entry->start;
|
|
|
|
e = entry->end;
|
1998-04-29 04:28:22 +00:00
|
|
|
next = entry->next;
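		/*
		 * Note: the successor pointer is captured before the body
		 * of the loop runs, because vm_map_entry_delete() at the
		 * bottom of the loop frees this entry.
		 */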
		offidxstart = OFF_TO_IDX(entry->offset);
		count = OFF_TO_IDX(e - s);
		object = entry->object.vm_object;

		/*
		 * Unwire before removing addresses from the pmap; otherwise,
		 * unwiring will put the entries back in the pmap.
		 */
		if (entry->wired_count != 0) {
			vm_map_entry_unwire(map, entry);
		}

		offidxend = offidxstart + count;

		if ((object == kernel_object) || (object == kmem_object)) {
			vm_object_page_remove(object, offidxstart, offidxend, FALSE);
		} else {
			mtx_lock(&Giant);
			pmap_remove(map->pmap, s, e);
			if (object != NULL &&
			    object->ref_count != 1 &&
			    (object->flags & (OBJ_NOSPLIT|OBJ_ONEMAPPING)) == OBJ_ONEMAPPING &&
			    (object->type == OBJT_DEFAULT || object->type == OBJT_SWAP)) {
				vm_object_collapse(object);
				vm_object_page_remove(object, offidxstart, offidxend, FALSE);
				if (object->type == OBJT_SWAP) {
					swap_pager_freespace(object, offidxstart, count);
				}
				if (offidxend >= object->size &&
				    offidxstart < object->size) {
					object->size = offidxstart;
				}
			}
			mtx_unlock(&Giant);
		}

		/*
		 * Delete the entry (which may delete the object) only after
		 * removing all pmap entries pointing to its pages.
		 * (Otherwise, its page frames may be reallocated, and any
		 * modify bits will be set in the wrong object!)
		 */
		vm_map_entry_delete(map, entry);
		entry = next;
	}
	return (KERN_SUCCESS);
}

/*
 * vm_map_remove:
 *
 *	Remove the given address range from the target map.
 *	This is the exported form of vm_map_delete.
 */
int
vm_map_remove(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	int result, s = 0;

	if (map == kmem_map)
		s = splvm();
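	/*
	 * Note: kmem_map may be manipulated at interrupt time (kernel
	 * memory allocation from interrupt handlers), so splvm() above
	 * keeps such interrupts out while the map lock is held.
	 */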

	vm_map_lock(map);
	VM_MAP_RANGE_CHECK(map, start, end);
	result = vm_map_delete(map, start, end);
	vm_map_unlock(map);

	if (map == kmem_map)
		splx(s);

	return (result);
}

/*
 * vm_map_check_protection:
 *
 *	Assert that the target map allows the specified
 *	privilege on the entire address region given.
 *	The entire region must be allocated.
 */
boolean_t
vm_map_check_protection(vm_map_t map, vm_offset_t start, vm_offset_t end,
			vm_prot_t protection)
{
	vm_map_entry_t entry;
	vm_map_entry_t tmp_entry;

	vm_map_lock_read(map);
	if (!vm_map_lookup_entry(map, start, &tmp_entry)) {
		vm_map_unlock_read(map);
		return (FALSE);
	}
	entry = tmp_entry;
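	/*
	 * Walk the entries covering [start, end); fail if a hole is
	 * found or if any entry lacks the requested protection.
	 */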

	while (start < end) {
		if (entry == &map->header) {
			vm_map_unlock_read(map);
			return (FALSE);
		}
		/*
		 * No holes allowed!
		 */
		if (start < entry->start) {
			vm_map_unlock_read(map);
			return (FALSE);
		}
		/*
		 * Check protection associated with entry.
		 */
		if ((entry->protection & protection) != protection) {
			vm_map_unlock_read(map);
			return (FALSE);
		}
		/* go to next entry */
		start = entry->end;
		entry = entry->next;
	}
	vm_map_unlock_read(map);
	return (TRUE);
}

/*
 * Split the pages in a map entry into a new object.  This affords
 * easier removal of unused pages, and keeps object inheritance from
 * being a negative impact on memory usage.
 */
static void
vm_map_split(vm_map_entry_t entry)
{
	vm_page_t m;
	vm_object_t orig_object, new_object, source;
	vm_offset_t s, e;
	vm_pindex_t offidxstart, offidxend, idx;
	vm_size_t size;
	vm_ooffset_t offset;

	GIANT_REQUIRED;

	orig_object = entry->object.vm_object;
	if (orig_object->type != OBJT_DEFAULT && orig_object->type != OBJT_SWAP)
		return;
	if (orig_object->ref_count <= 1)
		return;

	offset = entry->offset;
	s = entry->start;
	e = entry->end;

	offidxstart = OFF_TO_IDX(offset);
	offidxend = offidxstart + OFF_TO_IDX(e - s);
	size = offidxend - offidxstart;

	new_object = vm_pager_allocate(orig_object->type,
	    NULL, IDX_TO_OFF(size), VM_PROT_ALL, 0LL);
	if (new_object == NULL)
		return;

	source = orig_object->backing_object;
	if (source != NULL) {
		vm_object_reference(source);	/* Referenced by new_object */
		TAILQ_INSERT_TAIL(&source->shadow_head,
		    new_object, shadow_list);
		vm_object_clear_flag(source, OBJ_ONEMAPPING);
		new_object->backing_object_offset =
		    orig_object->backing_object_offset + IDX_TO_OFF(offidxstart);
		new_object->backing_object = source;
		source->shadow_count++;
		source->generation++;
	}
	for (idx = 0; idx < size; idx++) {
	retry:
		m = vm_page_lookup(orig_object, offidxstart + idx);
		if (m == NULL)
			continue;

		/*
		 * We must wait for pending I/O to complete before we can
		 * rename the page.
		 *
		 * We do not have to VM_PROT_NONE the page as mappings should
		 * not be changed by this operation.
		 */
		if (vm_page_sleep_busy(m, TRUE, "spltwt"))
			goto retry;

		vm_page_busy(m);
		vm_page_rename(m, new_object, idx);
		/* page automatically made dirty by rename and cache handled */
		vm_page_busy(m);
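		/*
		 * The page is left busy here; it is woken up in the second
		 * loop below, once any swap pager metadata has been copied
		 * to the new object.
		 */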
	}

	if (orig_object->type == OBJT_SWAP) {
		vm_object_pip_add(orig_object, 1);
		/*
		 * copy orig_object pages into new_object
		 * and destroy unneeded pages in
		 * shadow object.
		 */
		swap_pager_copy(orig_object, new_object, offidxstart, 0);
		vm_object_pip_wakeup(orig_object);
	}

	for (idx = 0; idx < size; idx++) {
		m = vm_page_lookup(new_object, idx);
		if (m != NULL)
			vm_page_wakeup(m);
	}

	entry->object.vm_object = new_object;
	entry->offset = 0LL;
	vm_object_deallocate(orig_object);
}

/*
 * vm_map_copy_entry:
 *
 *	Copies the contents of the source entry to the destination
 *	entry.  The entries *must* be aligned properly.
 */
static void
vm_map_copy_entry(
	vm_map_t src_map,
	vm_map_t dst_map,
	vm_map_entry_t src_entry,
	vm_map_entry_t dst_entry)
{
	vm_object_t src_object;

	if ((dst_entry->eflags|src_entry->eflags) & MAP_ENTRY_IS_SUB_MAP)
		return;

	if (src_entry->wired_count == 0) {

		/*
		 * If the source entry is marked needs_copy, it is already
		 * write-protected.
		 */
		if ((src_entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0) {
			pmap_protect(src_map->pmap,
			    src_entry->start,
			    src_entry->end,
			    src_entry->protection & ~VM_PROT_WRITE);
		}

		/*
		 * Make a copy of the object.
		 */
		if ((src_object = src_entry->object.vm_object) != NULL) {

			if ((src_object->handle == NULL) &&
			    (src_object->type == OBJT_DEFAULT ||
			     src_object->type == OBJT_SWAP)) {
				vm_object_collapse(src_object);
				if ((src_object->flags & (OBJ_NOSPLIT|OBJ_ONEMAPPING)) == OBJ_ONEMAPPING) {
					vm_map_split(src_entry);
					src_object = src_entry->object.vm_object;
				}
			}

			vm_object_reference(src_object);
			vm_object_clear_flag(src_object, OBJ_ONEMAPPING);
			dst_entry->object.vm_object = src_object;
			src_entry->eflags |= (MAP_ENTRY_COW|MAP_ENTRY_NEEDS_COPY);
			dst_entry->eflags |= (MAP_ENTRY_COW|MAP_ENTRY_NEEDS_COPY);
			dst_entry->offset = src_entry->offset;
		} else {
			dst_entry->object.vm_object = NULL;
			dst_entry->offset = 0;
		}

		pmap_copy(dst_map->pmap, src_map->pmap, dst_entry->start,
		    dst_entry->end - dst_entry->start, src_entry->start);
	} else {
		/*
		 * Of course, wired down pages can't be set copy-on-write.
		 * Cause wired pages to be copied into the new map by
		 * simulating faults (the new pages are pageable)
		 */
		vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry);
	}
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* vmspace_fork:
|
|
|
|
* Create a new process vmspace structure and vm_map
|
|
|
|
* based on those of an existing process. The new map
|
|
|
|
* is based on the old map, according to the inheritance
|
|
|
|
* values on the regions in that map.
|
|
|
|
*
|
|
|
|
* The source map must not be locked.
|
|
|
|
*/
|
|
|
|
struct vmspace *
|
2001-07-04 20:15:18 +00:00
|
|
|
vmspace_fork(struct vmspace *vm1)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
struct vmspace *vm2;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
    vm_map_t old_map = &vm1->vm_map;
    vm_map_t new_map;
    vm_map_entry_t old_entry;
    vm_map_entry_t new_entry;
    vm_object_t object;

    GIANT_REQUIRED;

    vm_map_lock(old_map);
    old_map->infork = 1;
VM level code cleanups.
1) Start using TSM.
   Struct procs continue to point to upages structure, after being freed.
   Struct vmspace continues to point to pte object and kva space for kstack.
   u_map is now superfluous.
2) vm_map's don't need to be reference counted.  They always exist either
   in the kernel or in a vmspace.  The vmspaces are managed by reference
   counts.
3) Remove the "wired" vm_map nonsense.
4) No need to keep a cache of kernel stack kva's.
5) Get rid of strange looking ++var, and change to var++.
6) Change more data structures to use our "zone" allocator.  Added
   struct proc, struct vmspace and struct vnode.  This saves a significant
   amount of kva space and physical memory.  Additionally, this enables
   TSM for the zone managed memory.
7) Keep ioopt disabled for now.
8) Remove the now bogus "single use" map concept.
9) Use generation counts or id's for data structures residing in TSM, where
   it allows us to avoid unneeded restart overhead during traversals, where
   blocking might occur.
10) Account better for memory deficits, so the pageout daemon will be able
    to make enough memory available (experimental.)
11) Fix some vnode locking problems. (From Tor, I think.)
12) Add a check in ufs_lookup, to avoid lots of unneeded calls to bcmp.
    (experimental.)
13) Significantly shrink, clean up, and make slightly faster the vm_fault.c
    code.  Use generation counts, get rid of unneeded collapse operations,
    and clean up the cluster code.
14) Make vm_zone more suitable for TSM.
This commit is partially a result of discussions and contributions from
other people, including DG, Tor Egge, PHK, and probably others that I
have forgotten to attribute (so let me know, if I forgot.)
This is not the infamous, final cleanup of the vnode stuff, but a necessary
step.  Vnode mgmt should be correct, but things might still change, and
there is still some missing stuff (like ioopt, and physical backing of
non-merged cache files, debugging of layering concepts.)
    vm2 = vmspace_alloc(old_map->min_offset, old_map->max_offset);
    bcopy(&vm1->vm_startcopy, &vm2->vm_startcopy,
        (caddr_t) &vm1->vm_endcopy - (caddr_t) &vm1->vm_startcopy);
    new_map = &vm2->vm_map; /* XXX */
    new_map->timestamp = 1;

    old_entry = old_map->header.next;

    while (old_entry != &old_map->header) {
        if (old_entry->eflags & MAP_ENTRY_IS_SUB_MAP)
            panic("vm_map_fork: encountered a submap");

        switch (old_entry->inheritance) {
        case VM_INHERIT_NONE:
            break;

        case VM_INHERIT_SHARE:
            /*
             * Clone the entry, creating the shared object if necessary.
             */
            object = old_entry->object.vm_object;
            if (object == NULL) {
                object = vm_object_allocate(OBJT_DEFAULT,
                    atop(old_entry->end - old_entry->start));
                old_entry->object.vm_object = object;
                old_entry->offset = (vm_offset_t) 0;
            }

            /*
             * Add the reference before calling vm_object_shadow
             * to insure that a shadow object is created.
             */
            vm_object_reference(object);
            if (old_entry->eflags & MAP_ENTRY_NEEDS_COPY) {
                vm_object_shadow(&old_entry->object.vm_object,
                    &old_entry->offset,
                    atop(old_entry->end - old_entry->start));
                old_entry->eflags &= ~MAP_ENTRY_NEEDS_COPY;
                /* Transfer the second reference too. */
                vm_object_reference(
                    old_entry->object.vm_object);
                vm_object_deallocate(object);
                object = old_entry->object.vm_object;
            }
            vm_object_clear_flag(object, OBJ_ONEMAPPING);

            /*
             * Clone the entry, referencing the shared object.
             */
            new_entry = vm_map_entry_create(new_map);
            *new_entry = *old_entry;
            new_entry->eflags &= ~MAP_ENTRY_USER_WIRED;
            new_entry->wired_count = 0;

            /*
             * Insert the entry into the new map -- we know we're
             * inserting at the end of the new map.
             */
            vm_map_entry_link(new_map, new_map->header.prev,
                new_entry);

            /*
             * Update the physical map
             */
            pmap_copy(new_map->pmap, old_map->pmap,
                new_entry->start,
                (old_entry->end - old_entry->start),
                old_entry->start);
            break;

        case VM_INHERIT_COPY:
            /*
             * Clone the entry and link into the map.
             */
            new_entry = vm_map_entry_create(new_map);
            *new_entry = *old_entry;
            new_entry->eflags &= ~MAP_ENTRY_USER_WIRED;
            new_entry->wired_count = 0;
            new_entry->object.vm_object = NULL;
            vm_map_entry_link(new_map, new_map->header.prev,
                new_entry);
            vm_map_copy_entry(old_map, new_map, old_entry,
                new_entry);
            break;
        }
        old_entry = old_entry->next;
    }

    new_map->size = old_map->size;
    old_map->infork = 0;
    vm_map_unlock(old_map);
    return (vm2);
}
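For orientation, the VM_INHERIT_* values that vmspace_fork() switches on above are the per-entry inheritance values userland controls with minherit(2); fork(2) reaches this routine when it builds the child's address space. A minimal userland sketch follows (illustrative only, not part of vm_map.c; it assumes a standard FreeBSD userland, and names such as p and len are just example names):

/* Illustrative userland sketch -- not part of vm_map.c. */
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    size_t len = getpagesize();
    char *p;

    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_SHARED, -1, 0);
    if (p == MAP_FAILED)
        return (1);

    /*
     * INHERIT_SHARE corresponds to the VM_INHERIT_SHARE case above:
     * the child references the same object, so its writes are visible
     * to the parent.  INHERIT_COPY would take the VM_INHERIT_COPY path
     * (copy-on-write), and INHERIT_NONE would leave the range unmapped
     * in the child.
     */
    minherit(p, len, INHERIT_SHARE);

    p[0] = 'A';
    if (fork() == 0) {
        p[0] = 'B';             /* seen by the parent under INHERIT_SHARE */
        _exit(0);
    }
    wait(NULL);
    return (p[0] == 'B' ? 0 : 1);
}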
int
vm_map_stack (vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize,
              vm_prot_t prot, vm_prot_t max, int cow)
{
    vm_map_entry_t prev_entry;
    vm_map_entry_t new_stack_entry;
    vm_size_t init_ssize;
    int rv;

    GIANT_REQUIRED;

    if (VM_MIN_ADDRESS > 0 && addrbos < VM_MIN_ADDRESS)
        return (KERN_NO_SPACE);

    if (max_ssize < sgrowsiz)
        init_ssize = max_ssize;
    else
        init_ssize = sgrowsiz;

    vm_map_lock(map);

    /* If addr is already mapped, no go */
    if (vm_map_lookup_entry(map, addrbos, &prev_entry)) {
        vm_map_unlock(map);
        return (KERN_NO_SPACE);
    }

    /* If we can't accomodate max_ssize in the current mapping,
     * no go.  However, we need to be aware that subsequent user
     * mappings might map into the space we have reserved for
     * stack, and currently this space is not protected.
     *
     * Hopefully we will at least detect this condition
     * when we try to grow the stack.
     */
    if ((prev_entry->next != &map->header) &&
        (prev_entry->next->start < addrbos + max_ssize)) {
        vm_map_unlock(map);
        return (KERN_NO_SPACE);
    }

    /* We initially map a stack of only init_ssize.  We will
     * grow as needed later.  Since this is to be a grow
     * down stack, we map at the top of the range.
     *
     * Note: we would normally expect prot and max to be
     * VM_PROT_ALL, and cow to be 0.  Possibly we should
     * eliminate these as input parameters, and just
     * pass these values here in the insert call.
     */
    rv = vm_map_insert(map, NULL, 0, addrbos + max_ssize - init_ssize,
                       addrbos + max_ssize, prot, max, cow);

    /* Now set the avail_ssize amount */
    if (rv == KERN_SUCCESS){
        if (prev_entry != &map->header)
            vm_map_clip_end(map, prev_entry, addrbos + max_ssize - init_ssize);
        new_stack_entry = prev_entry->next;
        if (new_stack_entry->end != addrbos + max_ssize ||
            new_stack_entry->start != addrbos + max_ssize - init_ssize)
            panic ("Bad entry start/end for new stack entry");
        else
            new_stack_entry->avail_ssize = max_ssize - init_ssize;
    }

    vm_map_unlock(map);
    return (rv);
}
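To make the sizing arithmetic above concrete, here is a worked example with hypothetical numbers (it assumes 4 KB pages and a 128 KB sgrowsiz, the historical default; the actual tunable value is not taken from this file):

/*
 * Hypothetical call: vm_map_stack(map, addrbos = 0x10000000,
 * max_ssize = 1 MB, VM_PROT_ALL, VM_PROT_ALL, 0)
 *
 *   init_ssize = min(max_ssize, sgrowsiz)              = 128 KB
 *   initial entry inserted at the top of the range:
 *       [addrbos + max_ssize - init_ssize, addrbos + max_ssize)
 *     = [0x100E0000, 0x10100000)
 *   new_stack_entry->avail_ssize = max_ssize - init_ssize = 896 KB
 *
 * The 896 KB below 0x100E0000 stays unmapped until vm_map_growstack()
 * extends the stack downward on demand.
 */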

/* Attempts to grow a vm stack entry.  Returns KERN_SUCCESS if the
 * desired address is already mapped, or if we successfully grow
 * the stack.  Also returns KERN_SUCCESS if addr is outside the
 * stack range (this is strange, but preserves compatibility with
 * the grow function in vm_machdep.c).
 */
int
vm_map_growstack (struct proc *p, vm_offset_t addr)
{
    vm_map_entry_t prev_entry;
    vm_map_entry_t stack_entry;
    vm_map_entry_t new_stack_entry;
    struct vmspace *vm = p->p_vmspace;
    vm_map_t map = &vm->vm_map;
    vm_offset_t end;
    int grow_amount;
    int rv;
    int is_procstack;

    GIANT_REQUIRED;

Retry:
    vm_map_lock_read(map);

    /* If addr is already in the entry range, no need to grow.*/
    if (vm_map_lookup_entry(map, addr, &prev_entry)) {
        vm_map_unlock_read(map);
        return (KERN_SUCCESS);
    }

    if ((stack_entry = prev_entry->next) == &map->header) {
        vm_map_unlock_read(map);
        return (KERN_SUCCESS);
    }
    if (prev_entry == &map->header)
        end = stack_entry->start - stack_entry->avail_ssize;
    else
        end = prev_entry->end;

    /* This next test mimics the old grow function in vm_machdep.c.
     * It really doesn't quite make sense, but we do it anyway
     * for compatibility.
     *
     * If not growable stack, return success.  This signals the
     * caller to proceed as he would normally with normal vm.
     */
    if (stack_entry->avail_ssize < 1 ||
        addr >= stack_entry->start ||
        addr < stack_entry->start - stack_entry->avail_ssize) {
        vm_map_unlock_read(map);
        return (KERN_SUCCESS);
    }

    /* Find the minimum grow amount */
    grow_amount = roundup (stack_entry->start - addr, PAGE_SIZE);
    if (grow_amount > stack_entry->avail_ssize) {
        vm_map_unlock_read(map);
        return (KERN_NO_SPACE);
    }

    /* If there is no longer enough space between the entries
     * nogo, and adjust the available space.  Note: this
     * should only happen if the user has mapped into the
     * stack area after the stack was created, and is
     * probably an error.
     *
     * This also effectively destroys any guard page the user
     * might have intended by limiting the stack size.
     */
    if (grow_amount > stack_entry->start - end) {
        if (vm_map_lock_upgrade(map))
            goto Retry;

        stack_entry->avail_ssize = stack_entry->start - end;

        vm_map_unlock(map);
        return (KERN_NO_SPACE);
    }

    is_procstack = addr >= (vm_offset_t)vm->vm_maxsaddr;

    /* If this is the main process stack, see if we're over the
     * stack limit.
     */
    if (is_procstack && (ctob(vm->vm_ssize) + grow_amount >
        p->p_rlimit[RLIMIT_STACK].rlim_cur)) {
        vm_map_unlock_read(map);
        return (KERN_NO_SPACE);
    }

    /* Round up the grow amount modulo SGROWSIZ */
    grow_amount = roundup (grow_amount, sgrowsiz);
    if (grow_amount > stack_entry->avail_ssize) {
        grow_amount = stack_entry->avail_ssize;
    }
    if (is_procstack && (ctob(vm->vm_ssize) + grow_amount >
        p->p_rlimit[RLIMIT_STACK].rlim_cur)) {
        grow_amount = p->p_rlimit[RLIMIT_STACK].rlim_cur -
                      ctob(vm->vm_ssize);
    }

    if (vm_map_lock_upgrade(map))
        goto Retry;

    /* Get the preliminary new entry start value */
    addr = stack_entry->start - grow_amount;

    /* If this puts us into the previous entry, cut back our growth
     * to the available space.  Also, see the note above.
     */
    if (addr < end) {
        stack_entry->avail_ssize = stack_entry->start - end;
        addr = end;
    }

    rv = vm_map_insert(map, NULL, 0, addr, stack_entry->start,
                       VM_PROT_ALL,
                       VM_PROT_ALL,
                       0);

    /* Adjust the available stack space by the amount we grew. */
    if (rv == KERN_SUCCESS) {
        if (prev_entry != &map->header)
            vm_map_clip_end(map, prev_entry, addr);
        new_stack_entry = prev_entry->next;
        if (new_stack_entry->end != stack_entry->start ||
            new_stack_entry->start != addr)
            panic ("Bad stack grow start/end in new stack entry");
        else {
            new_stack_entry->avail_ssize = stack_entry->avail_ssize -
                                           (new_stack_entry->end -
                                            new_stack_entry->start);
            if (is_procstack)
                vm->vm_ssize += btoc(new_stack_entry->end -
                                     new_stack_entry->start);
        }
    }

    vm_map_unlock(map);
    return (rv);
}
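Continuing the hypothetical numbers from the vm_map_stack() example above, here is how the clamping in vm_map_growstack() plays out for one fault (again assuming 4 KB pages and a 128 KB sgrowsiz; the addresses are illustrative):

/*
 * Hypothetical fault: stack_entry->start = 0x100E0000,
 * stack_entry->avail_ssize = 896 KB, faulting addr = 0x100DF800
 * (2 KB below the entry), no neighbouring entry in the way:
 *
 *   minimum grow: roundup(0x100E0000 - 0x100DF800, PAGE_SIZE)
 *               = roundup(0x800, 0x1000)                =   4 KB
 *   rounded:      roundup(4 KB, sgrowsiz)               = 128 KB
 *   clamped:      to avail_ssize (896 KB) and to whatever headroom
 *                 RLIMIT_STACK still allows, if that is smaller
 *
 *   new entry: [0x100E0000 - 128 KB, 0x100E0000)
 *            = [0x100C0000, 0x100E0000), its avail_ssize becomes
 *   896 KB - 128 KB = 768 KB, and vm_ssize is credited with the
 *   128 KB just added.
 */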

/*
 * Unshare the specified VM space for exec.  If other processes are
 * mapped to it, then create a new one.  The new vmspace is null.
 */
void
vmspace_exec(struct proc *p)
{
    struct vmspace *oldvmspace = p->p_vmspace;
    struct vmspace *newvmspace;
    vm_map_t map = &p->p_vmspace->vm_map;

    GIANT_REQUIRED;
    newvmspace = vmspace_alloc(map->min_offset, map->max_offset);
    bcopy(&oldvmspace->vm_startcopy, &newvmspace->vm_startcopy,
        (caddr_t) (newvmspace + 1) - (caddr_t) &newvmspace->vm_startcopy);
    /*
     * This code is written like this for prototype purposes.  The
     * goal is to avoid running down the vmspace here, but let the
     * other process's that are still using the vmspace to finally
     * run it down.  Even though there is little or no chance of blocking
     * here, it is a good idea to keep this form for future mods.
     */
    p->p_vmspace = newvmspace;
    pmap_pinit2(vmspace_pmap(newvmspace));
    vmspace_free(oldvmspace);
    if (p == curthread->td_proc)        /* XXXKSE ? */
        pmap_activate(curthread);
}
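To make the "let the other processes run it down" remark in the comment above concrete, here is a hypothetical reference-count trace for an exec by one of three processes sharing a vmspace (the counts are illustrative):

/*
 * Before:  oldvmspace->vm_refcnt == 3   (three sharers, e.g. via rfork)
 *
 *   newvmspace = vmspace_alloc(...)     -> newvmspace->vm_refcnt == 1
 *   p->p_vmspace = newvmspace           -> p now runs on the new vmspace
 *   vmspace_free(oldvmspace)            -> oldvmspace->vm_refcnt 3 -> 2
 *
 * The old vmspace is actually torn down only when the last remaining
 * sharer releases it, which is exactly the deferred teardown the
 * comment describes.
 */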

/*
 * Unshare the specified VM space for forcing COW.  This
 * is called by rfork, for the (RFMEM|RFPROC) == 0 case.
 */
void
vmspace_unshare(struct proc *p)
{
    struct vmspace *oldvmspace = p->p_vmspace;
    struct vmspace *newvmspace;

    GIANT_REQUIRED;
    if (oldvmspace->vm_refcnt == 1)
        return;
    newvmspace = vmspace_fork(oldvmspace);
    p->p_vmspace = newvmspace;
    pmap_pinit2(vmspace_pmap(newvmspace));
    vmspace_free(oldvmspace);
    if (p == curthread->td_proc)        /* XXXKSE ? */
        pmap_activate(curthread);
}

/*
 * vm_map_lookup:
 *
 * Finds the VM object, offset, and
 * protection for a given virtual address in the
 * specified map, assuming a page fault of the
 * type specified.
 *
 * Leaves the map in question locked for read; return
 * values are guaranteed until a vm_map_lookup_done
 * call is performed.  Note that the map argument
 * is in/out; the returned map must be used in
 * the call to vm_map_lookup_done.
 *
 * A handle (out_entry) is returned for use in
 * vm_map_lookup_done, to make that fast.
 *
 * If a lookup is requested with "write protection"
 * specified, the map may be changed to perform virtual
 * copying operations, although the data referenced will
 * remain the same.
 */
int
vm_map_lookup(vm_map_t *var_map,         /* IN/OUT */
              vm_offset_t vaddr,
              vm_prot_t fault_typea,
              vm_map_entry_t *out_entry, /* OUT */
              vm_object_t *object,       /* OUT */
              vm_pindex_t *pindex,       /* OUT */
              vm_prot_t *out_prot,       /* OUT */
              boolean_t *wired)          /* OUT */
{
    vm_map_entry_t entry;
    vm_map_t map = *var_map;
    vm_prot_t prot;
    vm_prot_t fault_type = fault_typea;

RetryLookup:;
    /*
     * Lookup the faulting address.
     */

    vm_map_lock_read(map);
#define RETURN(why) \
        { \
        vm_map_unlock_read(map); \
        return (why); \
        }

    /*
     * If the map has an interesting hint, try it before calling full
     * blown lookup routine.
     */
    entry = map->root;
    *out_entry = entry;
    if (entry == NULL ||
        (vaddr < entry->start) || (vaddr >= entry->end)) {
        /*
         * Entry was either not a valid hint, or the vaddr was not
         * contained in the entry, so do a full lookup.
         */
        if (!vm_map_lookup_entry(map, vaddr, out_entry))
            RETURN(KERN_INVALID_ADDRESS);

        entry = *out_entry;
    }

    /*
     * Handle submaps.
     */
    if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) {
        vm_map_t old_map = map;

        *var_map = map = entry->object.sub_map;
        vm_map_unlock_read(old_map);
        goto RetryLookup;
    }

    /*
|
|
|
* Check whether this task is allowed to have this page.
|
1997-04-06 02:29:45 +00:00
|
|
|
* Note the special case for MAP_ENTRY_COW
|
|
|
|
* pages with an override. This is to implement a forced
|
|
|
|
* COW for debuggers.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1998-01-21 12:18:00 +00:00
|
|
|
if (fault_type & VM_PROT_OVERRIDE_WRITE)
|
|
|
|
prot = entry->max_protection;
|
|
|
|
else
|
|
|
|
prot = entry->protection;
|
1998-01-17 09:17:02 +00:00
|
|
|
fault_type &= (VM_PROT_READ|VM_PROT_WRITE|VM_PROT_EXECUTE);
|
|
|
|
if ((fault_type & prot) != fault_type) {
|
|
|
|
RETURN(KERN_PROTECTION_FAILURE);
|
|
|
|
}
|
1999-11-23 06:51:28 +00:00
|
|
|
if ((entry->eflags & MAP_ENTRY_USER_WIRED) &&
|
|
|
|
(entry->eflags & MAP_ENTRY_COW) &&
|
|
|
|
(fault_type & VM_PROT_WRITE) &&
|
|
|
|
(fault_typea & VM_PROT_OVERRIDE_WRITE) == 0) {
|
|
|
|
RETURN(KERN_PROTECTION_FAILURE);
|
1997-04-06 02:29:45 +00:00
|
|
|
}
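
    /*
     * Editorial sketch (not part of the original source): a debugger-style
     * caller that needs to write an otherwise read-only mapping can request
     * the forced-COW path by including VM_PROT_OVERRIDE_WRITE in the
     * fault_typea argument, roughly:
     *
     *     rv = vm_map_lookup(&map, addr,
     *         VM_PROT_WRITE | VM_PROT_OVERRIDE_WRITE,
     *         &entry, &object, &pindex, &prot, &wired);
     *
     * With the override set, prot is taken from entry->max_protection above,
     * so the write can proceed even though entry->protection lacks
     * VM_PROT_WRITE; the MAP_ENTRY_USER_WIRED check just above still rejects
     * plain (non-override) writes to wired copy-on-write entries. The names
     * rv, addr, object and pindex here are illustrative only.
     */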

    /*
     * If this page is not pageable, we have to get it for all possible
     * accesses.
     */
    *wired = (entry->wired_count != 0);
    if (*wired)
        prot = fault_type = entry->protection;

    /*
     * If the entry was copy-on-write, we either ...
     */
    if (entry->eflags & MAP_ENTRY_NEEDS_COPY) {
        /*
         * If we want to write the page, we may as well handle that
         * now since we've got the map locked.
         *
         * If we don't need to write the page, we just demote the
         * permissions allowed.
         */
        if (fault_type & VM_PROT_WRITE) {
            /*
             * Make a new object, and place it in the object
             * chain. Note that no new references have appeared
             * -- one just moved from the map to the new
             * object.
             */
            if (vm_map_lock_upgrade(map))
                goto RetryLookup;

            vm_object_shadow(
                &entry->object.vm_object,
                &entry->offset,
                atop(entry->end - entry->start));
            entry->eflags &= ~MAP_ENTRY_NEEDS_COPY;

            vm_map_lock_downgrade(map);
        } else {
            /*
             * We're attempting to read a copy-on-write page --
             * don't allow writes.
             */
            prot &= ~VM_PROT_WRITE;
        }
    }
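
    /*
     * Editorial note (not from the original source): vm_object_shadow()
     * above interposes a new anonymous object in front of the existing
     * backing object; later write faults copy pages into that shadow
     * object, which is what gives this map entry private copy-on-write
     * semantics once MAP_ENTRY_NEEDS_COPY has been cleared.
     */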

    /*
     * Create an object if necessary.
     */
    if (entry->object.vm_object == NULL &&
        !map->system_map) {
        if (vm_map_lock_upgrade(map))
            goto RetryLookup;
        entry->object.vm_object = vm_object_allocate(OBJT_DEFAULT,
            atop(entry->end - entry->start));
        entry->offset = 0;
        vm_map_lock_downgrade(map);
    }
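
    /*
     * Editorial note (not from the original source): the OBJT_DEFAULT
     * object allocated above represents anonymous memory with no pages
     * behind it yet; if its pages ever need to be paged out, the default
     * pager arranges for them to become swap-backed.
     */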

    /*
     * Return the object/offset from this entry. If the entry was
     * copy-on-write or empty, it has been fixed up.
     */
    *pindex = OFF_TO_IDX((vaddr - entry->start) + entry->offset);
    *object = entry->object.vm_object;

    /*
     * Return whether this is the only map sharing this data.
     */
    *out_prot = prot;
    return (KERN_SUCCESS);

#undef RETURN
}

/*
 * vm_map_lookup_done:
 *
 * Releases locks acquired by a vm_map_lookup
 * (according to the handle returned by that lookup).
 */
void
vm_map_lookup_done(vm_map_t map, vm_map_entry_t entry)
{
    /*
     * Unlock the main-level map
     */
    vm_map_unlock_read(map);
}
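
/*
 * Editorial sketch (not part of the original source): a typical caller pairs
 * vm_map_lookup() with vm_map_lookup_done(), along the lines of the fault
 * handler's usage:
 *
 *     result = vm_map_lookup(&map, vaddr, fault_type,
 *         &entry, &first_object, &first_pindex, &prot, &wired);
 *     if (result != KERN_SUCCESS)
 *         return (result);
 *     ...fault the page in against (first_object, first_pindex)...
 *     vm_map_lookup_done(map, entry);
 *
 * The local names (result, vaddr, fault_type) are illustrative only.
 */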

#ifdef ENABLE_VFS_IOOPT
/*
 * Experimental support for zero-copy I/O
 *
 * Implement uiomove with VM operations. This, together with collateral
 * changes, supports every combination of source object modification and
 * COW type operations.
 */
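/*
 * Editorial note (not from the original source): ENABLE_VFS_IOOPT is a
 * compile-time kernel option, so the zero-copy path below is only built
 * when the kernel configuration enables it, e.g. with a line such as
 * "options ENABLE_VFS_IOOPT".
 */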
int
vm_uiomove(
    vm_map_t mapa,
    vm_object_t srcobject,
    off_t cp,
    int cnta,
    vm_offset_t uaddra,
    int *npages)
{
    vm_map_t map;
    vm_object_t first_object, oldobject, object;
    vm_map_entry_t entry;
    vm_prot_t prot;
    boolean_t wired;
    int tcnt, rv;
    vm_offset_t uaddr, start, end, tend;
    vm_pindex_t first_pindex, osize, oindex;
    off_t ooffset;
    int cnt;

    GIANT_REQUIRED;
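
    /*
     * Editorial note (not from the original source): GIANT_REQUIRED asserts
     * that the caller holds the Giant kernel lock; this routine still relies
     * on Giant rather than finer-grained VM locking.
     */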

    if (npages)
        *npages = 0;

    cnt = cnta;
    uaddr = uaddra;
|
|
|
|
|
1997-12-19 09:03:37 +00:00
|
|
|
while (cnt > 0) {
|
|
|
|
map = mapa;
|
|
|
|
|
|
|
|
if ((vm_map_lookup(&map, uaddr,
|
1998-01-22 17:30:44 +00:00
|
|
|
VM_PROT_READ, &entry, &first_object,
|
|
|
|
&first_pindex, &prot, &wired)) != KERN_SUCCESS) {
|
1997-12-19 09:03:37 +00:00
|
|
|
return EFAULT;
|
|
|
|
}
|
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
vm_map_clip_start(map, entry, uaddr);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
|
|
|
tcnt = cnt;
|
1998-01-22 17:30:44 +00:00
|
|
|
tend = uaddr + tcnt;
|
|
|
|
if (tend > entry->end) {
|
|
|
|
tcnt = entry->end - uaddr;
|
|
|
|
tend = entry->end;
|
|
|
|
}
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
vm_map_clip_end(map, entry, tend);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
start = entry->start;
|
|
|
|
end = entry->end;
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1997-12-19 15:31:13 +00:00
|
|
|
osize = atop(tcnt);
|
Make our v_usecount vnode reference count work identically to the
original BSD code. The association between the vnode and the vm_object
no longer includes reference counts. The major difference is that
vm_object's are no longer freed gratuitously from the vnode, and so
once an object is created for the vnode, it will last as long as the
vnode does.
When a vnode object reference count is incremented, then the underlying
vnode reference count is incremented also. The two "objects" are now
more intimately related, and so the interactions are now much less
complex.
Vnodes are now normally placed onto the free queue with an object still
attached. The rundown of the object happens at vnode rundown time, and
happens with exactly the same filesystem semantics as the original VFS
code. There is absolutely no need for vnode_pager_uncache and other
travesties like that anymore.
A side-effect of these changes is that SMP locking should be much simpler,
the I/O copyin/copyout optimizations work, NFS should be more ponderable,
and further work on layered filesystems should be less frustrating, because
of the totally coherent management of the vnode objects and vnodes.
Please be careful with your system while running this code, but I would
greatly appreciate feedback as soon as reasonably possible.
1998-01-06 05:26:17 +00:00
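A simplified sketch of the reference-count coupling described above; the types
and function names here are invented for illustration and are not the
committed interfaces. The point is only that taking a reference on a
vnode-backed object also takes a reference on the vnode, so the object can
persist exactly as long as the vnode does.

struct demo_vnode {
	int	v_usecount;			/* vnode reference count */
};

struct demo_object {
	int			 ref_count;	/* object reference count */
	struct demo_vnode	*backing_vp;	/* NULL if not vnode-backed */
};

static void
demo_object_reference(struct demo_object *obj)
{
	obj->ref_count++;
	if (obj->backing_vp != NULL)
		obj->backing_vp->v_usecount++;	/* keep the vnode alive too */
}

static void
demo_object_deallocate(struct demo_object *obj)
{
	if (obj->backing_vp != NULL)
		obj->backing_vp->v_usecount--;	/* drop the paired reference */
	obj->ref_count--;
	/*
	 * The object itself is not torn down here; under the scheme above
	 * it is run down together with the vnode at vnode rundown time.
	 */
}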
|
|
|
|
1998-01-12 01:46:33 +00:00
|
|
|
oindex = OFF_TO_IDX(cp);
|
1998-01-06 05:26:17 +00:00
|
|
|
if (npages) {
|
1998-01-12 01:46:33 +00:00
|
|
|
vm_pindex_t idx;
|
1998-01-06 05:26:17 +00:00
|
|
|
for (idx = 0; idx < osize; idx++) {
|
|
|
|
vm_page_t m;
|
1998-01-12 01:46:33 +00:00
|
|
|
if ((m = vm_page_lookup(srcobject, oindex + idx)) == NULL) {
|
1998-01-22 17:30:44 +00:00
|
|
|
vm_map_lookup_done(map, entry);
|
1998-01-06 05:26:17 +00:00
|
|
|
return 0;
|
|
|
|
}
|
1999-01-21 08:29:12 +00:00
|
|
|
/*
|
|
|
|
* disallow busy or invalid pages, but allow
|
|
|
|
* m->busy pages if they are entirely valid.
|
|
|
|
*/
|
1998-01-12 01:46:33 +00:00
|
|
|
if ((m->flags & PG_BUSY) ||
|
1998-01-06 05:26:17 +00:00
|
|
|
((m->valid & VM_PAGE_BITS_ALL) != VM_PAGE_BITS_ALL)) {
|
1998-01-22 17:30:44 +00:00
|
|
|
vm_map_lookup_done(map, entry);
|
1998-01-06 05:26:17 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
1997-12-19 09:03:37 +00:00
|
|
|
/*
|
|
|
|
* If we are changing an existing map entry, just redirect
|
|
|
|
* the object, and change mappings.
|
|
|
|
*/
|
1998-01-22 17:30:44 +00:00
|
|
|
if ((first_object->type == OBJT_VNODE) &&
|
|
|
|
((oldobject = entry->object.vm_object) == first_object)) {
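/*
 * The entry already maps the vnode's own object directly.  No data is
 * copied: when the entry does not already point at srcobject at offset cp,
 * the old translations are torn down, copy-on-write is forced on the source
 * pages, and the entry is simply redirected to srcobject.
 */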
|
|
|
|
|
|
|
|
if ((entry->offset != cp) || (oldobject != srcobject)) {
|
|
|
|
/*
|
|
|
|
* Remove old window into the file
|
|
|
|
*/
|
|
|
|
pmap_remove (map->pmap, uaddr, tend);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Force copy on write for mmaped regions
|
|
|
|
*/
|
|
|
|
vm_object_pmap_copy_1 (srcobject, oindex, oindex + osize);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Point the object appropriately
|
|
|
|
*/
|
|
|
|
if (oldobject != srcobject) {
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Set the object optimization hint flag
|
|
|
|
*/
|
1998-08-24 08:39:39 +00:00
|
|
|
vm_object_set_flag(srcobject, OBJ_OPT);
|
1998-01-22 17:30:44 +00:00
|
|
|
vm_object_reference(srcobject);
|
|
|
|
entry->object.vm_object = srcobject;
|
|
|
|
|
|
|
|
if (oldobject) {
|
|
|
|
vm_object_deallocate(oldobject);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
entry->offset = cp;
|
|
|
|
map->timestamp++;
|
|
|
|
} else {
|
|
|
|
pmap_remove (map->pmap, uaddr, tend);
|
|
|
|
}
|
|
|
|
|
|
|
|
} else if ((first_object->ref_count == 1) &&
|
1998-01-17 09:17:02 +00:00
|
|
|
(first_object->size == osize) &&
|
|
|
|
((first_object->type == OBJT_DEFAULT) ||
|
|
|
|
(first_object->type == OBJT_SWAP)) ) {
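/*
 * The destination is an anonymous (default or swap) object with a single
 * reference that covers exactly this window.  Rather than copying data,
 * repoint it so that it shadows srcobject at offset cp: discard its old
 * pages and swap space, force copy-on-write on the source pages, and move
 * it onto srcobject's shadow list.
 */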
|
1998-01-12 01:46:33 +00:00
|
|
|
|
1998-01-17 09:17:02 +00:00
|
|
|
oldobject = first_object->backing_object;
|
1998-01-12 01:46:33 +00:00
|
|
|
|
1998-01-17 09:17:02 +00:00
|
|
|
if ((first_object->backing_object_offset != cp) ||
|
|
|
|
(oldobject != srcobject)) {
|
1998-01-12 01:46:33 +00:00
|
|
|
/*
|
|
|
|
* Remove old window into the file
|
|
|
|
*/
|
1998-01-22 17:30:44 +00:00
|
|
|
pmap_remove (map->pmap, uaddr, tend);
|
1998-01-12 01:46:33 +00:00
|
|
|
|
|
|
|
/*
|
1998-01-17 09:17:02 +00:00
|
|
|
* Remove unneeded old pages
|
|
|
|
*/
|
1999-05-16 05:07:34 +00:00
|
|
|
vm_object_page_remove(first_object, 0, 0, 0);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-12 01:46:33 +00:00
|
|
|
/*
|
1998-01-17 09:17:02 +00:00
|
|
|
* Invalidate swap space
|
|
|
|
*/
|
|
|
|
if (first_object->type == OBJT_SWAP) {
|
|
|
|
swap_pager_freespace(first_object,
|
1999-01-21 08:29:12 +00:00
|
|
|
0,
|
1998-01-17 09:17:02 +00:00
|
|
|
first_object->size);
|
1998-01-12 01:46:33 +00:00
|
|
|
}
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-12 01:46:33 +00:00
|
|
|
/*
|
2002-03-10 21:52:48 +00:00
|
|
|
* Force copy on write for mmaped regions
|
|
|
|
*/
|
1998-01-17 09:17:02 +00:00
|
|
|
vm_object_pmap_copy_1 (srcobject, oindex, oindex + osize);
|
1998-01-12 01:46:33 +00:00
|
|
|
|
|
|
|
/*
|
2002-03-10 21:52:48 +00:00
|
|
|
* Point the object appropriately
|
|
|
|
*/
|
1998-01-12 01:46:33 +00:00
|
|
|
if (oldobject != srcobject) {
|
2002-03-10 21:52:48 +00:00
|
|
|
/*
|
|
|
|
* Set the object optimization hint flag
|
|
|
|
*/
|
1998-08-24 08:39:39 +00:00
|
|
|
vm_object_set_flag(srcobject, OBJ_OPT);
|
1998-01-12 01:46:33 +00:00
|
|
|
vm_object_reference(srcobject);
|
|
|
|
|
|
|
|
if (oldobject) {
|
|
|
|
TAILQ_REMOVE(&oldobject->shadow_head,
|
|
|
|
first_object, shadow_list);
|
|
|
|
oldobject->shadow_count--;
|
1999-09-21 05:00:48 +00:00
|
|
|
/* XXX bump generation? */
|
1998-01-12 01:46:33 +00:00
|
|
|
vm_object_deallocate(oldobject);
|
|
|
|
}
|
|
|
|
|
|
|
|
TAILQ_INSERT_TAIL(&srcobject->shadow_head,
|
|
|
|
first_object, shadow_list);
|
|
|
|
srcobject->shadow_count++;
|
1999-09-21 05:00:48 +00:00
|
|
|
/* XXX bump generation? */
|
1998-01-12 01:46:33 +00:00
|
|
|
|
|
|
|
first_object->backing_object = srcobject;
|
|
|
|
}
|
|
|
|
first_object->backing_object_offset = cp;
|
1998-01-22 17:30:44 +00:00
|
|
|
map->timestamp++;
|
1998-01-12 01:46:33 +00:00
|
|
|
} else {
|
1998-01-22 17:30:44 +00:00
|
|
|
pmap_remove (map->pmap, uaddr, tend);
|
1998-01-12 01:46:33 +00:00
|
|
|
}
|
1997-12-19 09:03:37 +00:00
|
|
|
/*
|
|
|
|
* Otherwise, we have to do a logical mmap.
|
|
|
|
*/
|
|
|
|
} else {
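/*
 * Neither fast path applies: delete the existing map entry and insert a
 * new copy-on-write mapping of srcobject over [start, tend).
 */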
|
|
|
|
|
1998-08-24 08:39:39 +00:00
|
|
|
vm_object_set_flag(srcobject, OBJ_OPT);
|
1998-01-12 01:46:33 +00:00
|
|
|
vm_object_reference(srcobject);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
pmap_remove (map->pmap, uaddr, tend);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-17 09:17:02 +00:00
|
|
|
vm_object_pmap_copy_1 (srcobject, oindex, oindex + osize);
|
2002-03-18 15:08:09 +00:00
|
|
|
vm_map_lock_upgrade(map);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
if (entry == &map->header) {
|
1997-12-19 09:03:37 +00:00
|
|
|
map->first_free = &map->header;
|
|
|
|
} else if (map->first_free->start >= start) {
|
1998-01-22 17:30:44 +00:00
|
|
|
map->first_free = entry->prev;
|
1997-12-19 09:03:37 +00:00
|
|
|
}
|
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
vm_map_entry_delete(map, entry);
|
|
|
|
|
|
|
|
object = srcobject;
|
|
|
|
ooffset = cp;
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-22 17:30:44 +00:00
|
|
|
rv = vm_map_insert(map, object, ooffset, start, tend,
|
1999-05-14 23:09:34 +00:00
|
|
|
VM_PROT_ALL, VM_PROT_ALL, MAP_COPY_ON_WRITE);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
|
|
|
if (rv != KERN_SUCCESS)
|
|
|
|
panic("vm_uiomove: could not insert new entry: %d", rv);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Map the window directly, if it is already in memory
|
|
|
|
*/
|
1998-01-22 17:30:44 +00:00
|
|
|
pmap_object_init_pt(map->pmap, uaddr,
|
|
|
|
srcobject, oindex, tcnt, 0);
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1998-01-17 09:17:02 +00:00
|
|
|
map->timestamp++;
|
1997-12-19 09:03:37 +00:00
|
|
|
vm_map_unlock(map);
|
|
|
|
|
|
|
|
cnt -= tcnt;
|
1998-01-22 17:30:44 +00:00
|
|
|
uaddr += tcnt;
|
1997-12-19 09:03:37 +00:00
|
|
|
cp += tcnt;
|
1998-01-06 05:26:17 +00:00
|
|
|
if (npages)
|
|
|
|
*npages += osize;
|
1997-12-19 09:03:37 +00:00
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
2002-05-05 22:42:40 +00:00
|
|
|
#endif
|
1997-12-19 09:03:37 +00:00
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
#include "opt_ddb.h"
|
1995-04-16 12:56:22 +00:00
|
|
|
#ifdef DDB
|
1996-09-14 11:54:59 +00:00
|
|
|
#include <sys/kernel.h>
|
|
|
|
|
|
|
|
#include <ddb/ddb.h>
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_map_print: [ debug ]
|
|
|
|
*/
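/*
 * Usage sketch, from ddb (the addresses below are made up for
 * illustration):
 *
 *	db> show map 0xc1234560
 *	Task map 0xc1234560: pmap=0xc1234700, nentries=12, version=38
 *	 map entry 0xc2345670: start=0x8048000, end=0x8050000
 *	 ...
 *
 * DB_SHOW_COMMAND(map, ...) registers this as "show map"; the address given
 * on the command line selects the vm_map to print, and supplying an explicit
 * address also sets have_addr, which the code below uses as its "full" flag.
 */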
|
1996-09-14 11:54:59 +00:00
|
|
|
DB_SHOW_COMMAND(map, vm_map_print)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-01-06 05:26:17 +00:00
|
|
|
static int nlines;
|
1996-09-14 11:54:59 +00:00
|
|
|
/* XXX convert args. */
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_t map = (vm_map_t)addr;
|
1996-09-14 11:54:59 +00:00
|
|
|
boolean_t full = have_addr;
|
|
|
|
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1999-03-02 05:43:18 +00:00
|
|
|
db_iprintf("Task map %p: pmap=%p, nentries=%d, version=%u\n",
|
|
|
|
(void *)map,
|
1998-07-14 12:14:58 +00:00
|
|
|
(void *)map->pmap, map->nentries, map->timestamp);
|
1998-01-06 05:26:17 +00:00
|
|
|
nlines++;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
if (!full && db_indent)
|
1994-05-24 10:09:53 +00:00
|
|
|
return;
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
for (entry = map->header.next; entry != &map->header;
|
1995-01-09 16:06:02 +00:00
|
|
|
entry = entry->next) {
|
1998-07-11 11:30:46 +00:00
|
|
|
db_iprintf("map entry %p: start=%p, end=%p\n",
|
|
|
|
(void *)entry, (void *)entry->start, (void *)entry->end);
|
1998-01-06 05:26:17 +00:00
|
|
|
nlines++;
|
1999-03-02 05:43:18 +00:00
|
|
|
{
|
1995-01-09 16:06:02 +00:00
|
|
|
static char *inheritance_name[4] =
|
|
|
|
{"share", "copy", "none", "donate_copy"};
|
|
|
|
|
1998-01-06 05:26:17 +00:00
|
|
|
db_iprintf(" prot=%x/%x/%s",
|
1995-01-09 16:06:02 +00:00
|
|
|
entry->protection,
|
|
|
|
entry->max_protection,
|
1999-01-28 00:57:57 +00:00
|
|
|
inheritance_name[(int)(unsigned char)entry->inheritance]);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (entry->wired_count != 0)
|
1998-01-06 05:26:17 +00:00
|
|
|
db_printf(", wired");
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1999-02-07 21:48:23 +00:00
|
|
|
if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
1998-07-14 12:14:58 +00:00
|
|
|
/* XXX no %qd in kernel. Truncate entry->offset. */
|
|
|
|
db_printf(", share=%p, offset=0x%lx\n",
|
1999-02-07 21:48:23 +00:00
|
|
|
(void *)entry->object.sub_map,
|
1998-07-14 12:14:58 +00:00
|
|
|
(long)entry->offset);
|
1998-01-06 05:26:17 +00:00
|
|
|
nlines++;
|
1994-05-24 10:09:53 +00:00
|
|
|
if ((entry->prev == &map->header) ||
|
1999-02-07 21:48:23 +00:00
|
|
|
(entry->prev->object.sub_map !=
|
|
|
|
entry->object.sub_map)) {
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
1998-07-14 12:14:58 +00:00
|
|
|
vm_map_print((db_expr_t)(intptr_t)
|
1999-02-07 21:48:23 +00:00
|
|
|
entry->object.sub_map,
|
1995-08-26 23:18:38 +00:00
|
|
|
full, 0, (char *)0);
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent -= 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1995-01-09 16:06:02 +00:00
|
|
|
} else {
|
1998-07-14 12:14:58 +00:00
|
|
|
/* XXX no %qd in kernel. Truncate entry->offset. */
|
|
|
|
db_printf(", object=%p, offset=0x%lx",
|
|
|
|
(void *)entry->object.vm_object,
|
|
|
|
(long)entry->offset);
|
1997-01-16 04:16:22 +00:00
|
|
|
if (entry->eflags & MAP_ENTRY_COW)
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf(", copy (%s)",
|
1997-01-16 04:16:22 +00:00
|
|
|
(entry->eflags & MAP_ENTRY_NEEDS_COPY) ? "needed" : "done");
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf("\n");
|
1998-01-06 05:26:17 +00:00
|
|
|
nlines++;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
if ((entry->prev == &map->header) ||
|
|
|
|
(entry->prev->object.vm_object !=
|
1995-01-09 16:06:02 +00:00
|
|
|
entry->object.vm_object)) {
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
1998-07-14 12:14:58 +00:00
|
|
|
vm_object_print((db_expr_t)(intptr_t)
|
|
|
|
entry->object.vm_object,
|
1995-08-26 23:18:38 +00:00
|
|
|
full, 0, (char *)0);
|
1998-01-06 05:26:17 +00:00
|
|
|
nlines += 4;
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent -= 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent -= 2;
|
1998-01-06 05:26:17 +00:00
|
|
|
if (db_indent == 0)
|
|
|
|
nlines = 0;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1998-01-06 05:26:17 +00:00
|
|
|
|
|
|
|
|
|
|
|
DB_SHOW_COMMAND(procvm, procvm)
|
|
|
|
{
|
|
|
|
struct proc *p;
|
|
|
|
|
|
|
|
if (have_addr) {
|
|
|
|
p = (struct proc *) addr;
|
|
|
|
} else {
|
|
|
|
p = curproc;
|
|
|
|
}
|
|
|
|
|
1998-07-11 07:46:16 +00:00
|
|
|
db_printf("p = %p, vmspace = %p, map = %p, pmap = %p\n",
|
|
|
|
(void *)p, (void *)p->p_vmspace, (void *)&p->p_vmspace->vm_map,
|
1999-02-19 14:25:37 +00:00
|
|
|
(void *)vmspace_pmap(p->p_vmspace));
|
1998-01-06 05:26:17 +00:00
|
|
|
|
1998-07-14 12:14:58 +00:00
|
|
|
vm_map_print((db_expr_t)(intptr_t)&p->p_vmspace->vm_map, 1, 0, NULL);
|
1998-01-06 05:26:17 +00:00
|
|
|
}
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
#endif /* DDB */
|
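For context, DB_SHOW_COMMAND(procvm, procvm) registers the routine above as a kernel debugger command. The C comment below sketches what an illustrative session might look like; the "show" prefix is the usual DDB convention, all addresses and counts are hypothetical, and the line layout simply follows the db_printf/db_iprintf format strings in the code above.
/*
 * Illustrative DDB usage (hypothetical values):
 *
 *   db> show procvm 0xc65f2a80
 *   p = 0xc65f2a80, vmspace = 0xc65d3000, map = 0xc65d3000, pmap = 0xc65d3144
 *   Task map 0xc65d3000: pmap=0xc65d3144, nentries=24, version=5
 *     map entry 0xc0a1b2c0: start=0x8048000, end=0x8052000
 *      prot=5/7/copy, object=0xc0b1c2d0, offset=0x0, copy (done)
 *
 * With no address argument, the command falls back to curproc, as the
 * have_addr check in the function shows.
 */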