/*-
 * Copyright (c) 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * The Mach Operating System project at Carnegie-Mellon University.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	from: @(#)vm_map.c	8.3 (Berkeley) 1/12/94
 *
 *
 * Copyright (c) 1987, 1990 Carnegie-Mellon University.
 * All rights reserved.
 *
 * Authors: Avadis Tevanian, Jr., Michael Wayne Young
 *
 * Permission to use, copy, modify and distribute this software and
 * its documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
 * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie the
 * rights to redistribute these changes.
 */

/*
 *	Virtual memory mapping module.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/vmmeter.h>
#include <sys/mman.h>
#include <sys/vnode.h>
#include <sys/racct.h>
#include <sys/resourcevar.h>
#include <sys/rwlock.h>
#include <sys/file.h>
#include <sys/sysctl.h>
#include <sys/sysent.h>
#include <sys/shm.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_page.h>
#include <vm/vm_object.h>
#include <vm/vm_pager.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/vnode_pager.h>
#include <vm/swap_pager.h>
#include <vm/uma.h>

/*
 *	Virtual memory maps provide for the mapping, protection,
 *	and sharing of virtual memory objects.  In addition,
 *	this module provides for an efficient virtual copy of
 *	memory from one map to another.
 *
 *	Synchronization is required prior to most operations.
 *
 *	Maps consist of an ordered doubly-linked list of simple
 *	entries; a self-adjusting binary search tree of these
 *	entries is used to speed up lookups.
 *
 *	Since portions of maps are specified by start/end addresses,
 *	which may not align with existing map entries, all
 *	routines merely "clip" entries to these start/end values.
 *	[That is, an entry is split into two, bordering at a
 *	start or end value.]  Note that these clippings may not
 *	always be necessary (as the two resulting entries are then
 *	not changed); however, the clipping is done for convenience.
 *
 *	As mentioned above, virtual copy operations are performed
 *	by copying VM object references from one map to
 *	another, and then marking both regions as copy-on-write.
 */

static struct mtx map_sleep_mtx;
static uma_zone_t mapentzone;
static uma_zone_t kmapentzone;
static uma_zone_t mapzone;
static uma_zone_t vmspace_zone;
static int vmspace_zinit(void *mem, int size, int flags);
static int vm_map_zinit(void *mem, int size, int flags);
static void _vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min,
    vm_offset_t max);
static void vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t system_map);
static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry);
static void vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry);
static void vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot,
    vm_object_t object, vm_pindex_t pindex, vm_size_t size, int flags);
#ifdef INVARIANTS
static void vm_map_zdtor(void *mem, int size, void *arg);
static void vmspace_zdtor(void *mem, int size, void *arg);
#endif
static int vm_map_stack_locked(vm_map_t map, vm_offset_t addrbos,
    vm_size_t max_ssize, vm_size_t growsize, vm_prot_t prot, vm_prot_t max,
    int cow);
static void vm_map_wire_entry_failure(vm_map_t map, vm_map_entry_t entry,
    vm_offset_t failed_addr);

#define	ENTRY_CHARGED(e) ((e)->cred != NULL || \
	((e)->object.vm_object != NULL && (e)->object.vm_object->cred != NULL && \
	 !((e)->eflags & MAP_ENTRY_NEEDS_COPY)))

/*
 * PROC_VMSPACE_{UN,}LOCK() can be a noop as long as vmspaces are type
 * stable.
 */
#define	PROC_VMSPACE_LOCK(p) do { } while (0)
#define	PROC_VMSPACE_UNLOCK(p) do { } while (0)

/*
 *	VM_MAP_RANGE_CHECK:	[ internal use only ]
 *
 *	Asserts that the starting and ending region
 *	addresses fall within the valid range of the map.
 */
#define	VM_MAP_RANGE_CHECK(map, start, end)		\
		{					\
		if (start < vm_map_min(map))		\
			start = vm_map_min(map);	\
		if (end > vm_map_max(map))		\
			end = vm_map_max(map);		\
		if (start > end)			\
			start = end;			\
		}
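
/*
 * Example usage (a minimal sketch; "start" and "end" are hypothetical
 * locals, and vm_map_lock()/vm_map_unlock() are the usual wrapper macros):
 * a routine that takes a user-supplied range clamps it with
 * VM_MAP_RANGE_CHECK() while holding the map lock before clipping entries.
 *
 *	vm_map_lock(map);
 *	VM_MAP_RANGE_CHECK(map, start, end);
 *	// start and end now lie within [vm_map_min(map), vm_map_max(map)]
 *	// and start <= end, so clipping and traversal can proceed safely.
 *	vm_map_unlock(map);
 */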

/*
 *	vm_map_startup:
 *
 *	Initialize the vm_map module.  Must be called before
 *	any other vm_map routines.
 *
 *	Map and entry structures are allocated from the general
 *	purpose memory pool with some exceptions:
 *
 *	- The kernel map and kmem submap are allocated statically.
 *	- Kernel map entries are allocated out of a static pool.
 *
 *	These restrictions are necessary since malloc() uses the
 *	maps and requires map entries.
 */
void
vm_map_startup(void)
{
	mtx_init(&map_sleep_mtx, "vm map sleep mutex", NULL, MTX_DEF);
	mapzone = uma_zcreate("MAP", sizeof(struct vm_map), NULL,
#ifdef INVARIANTS
	    vm_map_zdtor,
#else
	    NULL,
#endif
	    vm_map_zinit, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);
	uma_prealloc(mapzone, MAX_KMAP);
	kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
	    UMA_ZONE_MTXCLASS | UMA_ZONE_VM);
	mapentzone = uma_zcreate("MAP ENTRY", sizeof(struct vm_map_entry),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
	vmspace_zone = uma_zcreate("VMSPACE", sizeof(struct vmspace), NULL,
#ifdef INVARIANTS
	    vmspace_zdtor,
#else
	    NULL,
#endif
	    vmspace_zinit, NULL, UMA_ALIGN_PTR, UMA_ZONE_NOFREE);
}

static int
vmspace_zinit(void *mem, int size, int flags)
{
	struct vmspace *vm;

	vm = (struct vmspace *)mem;

	vm->vm_map.pmap = NULL;
	(void)vm_map_zinit(&vm->vm_map, sizeof(vm->vm_map), flags);
	PMAP_LOCK_INIT(vmspace_pmap(vm));
	return (0);
}

static int
vm_map_zinit(void *mem, int size, int flags)
{
	vm_map_t map;

	map = (vm_map_t)mem;
	memset(map, 0, sizeof(*map));
	mtx_init(&map->system_mtx, "vm map (system)", NULL, MTX_DEF | MTX_DUPOK);
	sx_init(&map->lock, "vm map (user)");
	return (0);
}

#ifdef INVARIANTS
static void
vmspace_zdtor(void *mem, int size, void *arg)
{
	struct vmspace *vm;

	vm = (struct vmspace *)mem;

	vm_map_zdtor(&vm->vm_map, sizeof(vm->vm_map), arg);
}
static void
vm_map_zdtor(void *mem, int size, void *arg)
{
	vm_map_t map;

	map = (vm_map_t)mem;
	KASSERT(map->nentries == 0,
	    ("map %p nentries == %d on free.",
	    map, map->nentries));
	KASSERT(map->size == 0,
	    ("map %p size == %lu on free.",
	    map, (unsigned long)map->size));
}
#endif	/* INVARIANTS */

/*
 * Allocate a vmspace structure, including a vm_map and pmap,
 * and initialize those structures.  The refcnt is set to 1.
 *
 * If 'pinit' is NULL then the embedded pmap is initialized via pmap_pinit().
 */
struct vmspace *
vmspace_alloc(vm_offset_t min, vm_offset_t max, pmap_pinit_t pinit)
{
	struct vmspace *vm;

	vm = uma_zalloc(vmspace_zone, M_WAITOK);

	KASSERT(vm->vm_map.pmap == NULL, ("vm_map.pmap must be NULL"));

	if (pinit == NULL)
		pinit = &pmap_pinit;

	if (!pinit(vmspace_pmap(vm))) {
		uma_zfree(vmspace_zone, vm);
		return (NULL);
	}
	CTR1(KTR_VM, "vmspace_alloc: %p", vm);
	_vm_map_init(&vm->vm_map, vmspace_pmap(vm), min, max);
	vm->vm_refcnt = 1;
	vm->vm_shm = NULL;
	vm->vm_swrss = 0;
	vm->vm_tsize = 0;
	vm->vm_dsize = 0;
	vm->vm_ssize = 0;
	vm->vm_taddr = 0;
	vm->vm_daddr = 0;
	vm->vm_maxsaddr = 0;
	return (vm);
}

#ifdef RACCT
static void
vmspace_container_reset(struct proc *p)
{

	PROC_LOCK(p);
	racct_set(p, RACCT_DATA, 0);
	racct_set(p, RACCT_STACK, 0);
	racct_set(p, RACCT_RSS, 0);
	racct_set(p, RACCT_MEMLOCK, 0);
	racct_set(p, RACCT_VMEM, 0);
	PROC_UNLOCK(p);
}
#endif

static inline void
vmspace_dofree(struct vmspace *vm)
{

	CTR1(KTR_VM, "vmspace_free: %p", vm);

	/*
	 * Make sure any SysV shm is freed, it might not have been in
	 * exit1().
	 */
	shmexit(vm);

	/*
	 * Lock the map, to wait out all other references to it.
	 * Delete all of the mappings and pages they hold, then call
	 * the pmap module to reclaim anything left.
	 */
	(void)vm_map_remove(&vm->vm_map, vm->vm_map.min_offset,
	    vm->vm_map.max_offset);

	pmap_release(vmspace_pmap(vm));
	vm->vm_map.pmap = NULL;
	uma_zfree(vmspace_zone, vm);
}

void
vmspace_free(struct vmspace *vm)
{

	WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
	    "vmspace_free() called with non-sleepable lock held");

	if (vm->vm_refcnt == 0)
		panic("vmspace_free: attempt to free already freed vmspace");

	if (atomic_fetchadd_int(&vm->vm_refcnt, -1) == 1)
		vmspace_dofree(vm);
}

void
vmspace_exitfree(struct proc *p)
{
	struct vmspace *vm;

	PROC_VMSPACE_LOCK(p);
	vm = p->p_vmspace;
	p->p_vmspace = NULL;
	PROC_VMSPACE_UNLOCK(p);
	KASSERT(vm == &vmspace0, ("vmspace_exitfree: wrong vmspace"));
	vmspace_free(vm);
}

void
vmspace_exit(struct thread *td)
{
	int refcnt;
	struct vmspace *vm;
	struct proc *p;

	/*
	 * Release user portion of address space.
	 * This releases references to vnodes,
	 * which could cause I/O if the file has been unlinked.
	 * Need to do this early enough that we can still sleep.
	 *
	 * The last exiting process to reach this point releases as
	 * much of the environment as it can.  vmspace_dofree() is the
	 * slower fallback in case another process had a temporary
	 * reference to the vmspace.
	 */

	p = td->td_proc;
	vm = p->p_vmspace;
	atomic_add_int(&vmspace0.vm_refcnt, 1);
	do {
		refcnt = vm->vm_refcnt;
		if (refcnt > 1 && p->p_vmspace != &vmspace0) {
			/* Switch now since other proc might free vmspace */
			PROC_VMSPACE_LOCK(p);
			p->p_vmspace = &vmspace0;
			PROC_VMSPACE_UNLOCK(p);
			pmap_activate(td);
		}
	} while (!atomic_cmpset_int(&vm->vm_refcnt, refcnt, refcnt - 1));
	if (refcnt == 1) {
		if (p->p_vmspace != vm) {
			/* vmspace not yet freed, switch back */
			PROC_VMSPACE_LOCK(p);
			p->p_vmspace = vm;
			PROC_VMSPACE_UNLOCK(p);
			pmap_activate(td);
		}
		pmap_remove_pages(vmspace_pmap(vm));
		/* Switch now since this proc will free vmspace */
		PROC_VMSPACE_LOCK(p);
		p->p_vmspace = &vmspace0;
		PROC_VMSPACE_UNLOCK(p);
		pmap_activate(td);
		vmspace_dofree(vm);
	}
#ifdef RACCT
	if (racct_enable)
		vmspace_container_reset(p);
#endif
}

/* Acquire reference to vmspace owned by another process. */

struct vmspace *
vmspace_acquire_ref(struct proc *p)
{
	struct vmspace *vm;
	int refcnt;

	PROC_VMSPACE_LOCK(p);
	vm = p->p_vmspace;
	if (vm == NULL) {
		PROC_VMSPACE_UNLOCK(p);
		return (NULL);
	}
	do {
		refcnt = vm->vm_refcnt;
		if (refcnt <= 0) {	/* Avoid 0->1 transition */
			PROC_VMSPACE_UNLOCK(p);
			return (NULL);
		}
	} while (!atomic_cmpset_int(&vm->vm_refcnt, refcnt, refcnt + 1));
	if (vm != p->p_vmspace) {
		PROC_VMSPACE_UNLOCK(p);
		vmspace_free(vm);
		return (NULL);
	}
	PROC_VMSPACE_UNLOCK(p);
	return (vm);
}
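
/*
 * Example usage (a minimal sketch; "p" and the work done on the vmspace are
 * hypothetical): a caller that wants to inspect another process's address
 * space pairs vmspace_acquire_ref() with vmspace_free() so the vmspace
 * cannot be torn down underneath it.
 *
 *	struct vmspace *vm;
 *
 *	vm = vmspace_acquire_ref(p);
 *	if (vm == NULL)
 *		return (ESRCH);
 *	// ... examine vm->vm_map under the appropriate map lock ...
 *	vmspace_free(vm);
 */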

void
_vm_map_lock(vm_map_t map, const char *file, int line)
{

	if (map->system_map)
		mtx_lock_flags_(&map->system_mtx, 0, file, line);
	else
		sx_xlock_(&map->lock, file, line);
	map->timestamp++;
}

static void
vm_map_process_deferred(void)
{
	struct thread *td;
	vm_map_entry_t entry, next;
	vm_object_t object;

	td = curthread;
	entry = td->td_map_def_user;
	td->td_map_def_user = NULL;
	while (entry != NULL) {
		next = entry->next;
		if ((entry->eflags & MAP_ENTRY_VN_WRITECNT) != 0) {
			/*
			 * Decrement the object's writemappings and
			 * possibly the vnode's v_writecount.
			 */
			KASSERT((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0,
			    ("Submap with writecount"));
			object = entry->object.vm_object;
			KASSERT(object != NULL, ("No object for writecount"));
			vnode_pager_release_writecount(object, entry->start,
			    entry->end);
		}
		vm_map_entry_deallocate(entry, FALSE);
		entry = next;
	}
}

void
_vm_map_unlock(vm_map_t map, const char *file, int line)
{

	if (map->system_map)
		mtx_unlock_flags_(&map->system_mtx, 0, file, line);
	else {
		sx_xunlock_(&map->lock, file, line);
		vm_map_process_deferred();
	}
}

void
_vm_map_lock_read(vm_map_t map, const char *file, int line)
{

	if (map->system_map)
		mtx_lock_flags_(&map->system_mtx, 0, file, line);
	else
		sx_slock_(&map->lock, file, line);
}

void
_vm_map_unlock_read(vm_map_t map, const char *file, int line)
{

	if (map->system_map)
		mtx_unlock_flags_(&map->system_mtx, 0, file, line);
	else {
		sx_sunlock_(&map->lock, file, line);
		vm_map_process_deferred();
	}
}
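
/*
 * Usage note (a sketch; the wrapper names are the ones conventionally
 * provided by vm_map.h): callers do not invoke these underscore-prefixed
 * functions directly.  They use the vm_map_lock()/vm_map_unlock() family of
 * macros, which pass LOCK_FILE and LOCK_LINE so lock diagnostics can report
 * the caller's location, e.g.:
 *
 *	vm_map_lock_read(map);
 *	if (vm_map_lookup_entry(map, addr, &entry)) {
 *		// examine *entry while the read lock is held
 *	}
 *	vm_map_unlock_read(map);
 */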

int
_vm_map_trylock(vm_map_t map, const char *file, int line)
{
	int error;

	error = map->system_map ?
	    !mtx_trylock_flags_(&map->system_mtx, 0, file, line) :
	    !sx_try_xlock_(&map->lock, file, line);
	if (error == 0)
		map->timestamp++;
	return (error == 0);
}

int
_vm_map_trylock_read(vm_map_t map, const char *file, int line)
{
	int error;

	error = map->system_map ?
	    !mtx_trylock_flags_(&map->system_mtx, 0, file, line) :
	    !sx_try_slock_(&map->lock, file, line);
	return (error == 0);
}

/*
 *	_vm_map_lock_upgrade:	[ internal use only ]
 *
 *	Tries to upgrade a read (shared) lock on the specified map to a write
 *	(exclusive) lock.  Returns the value "0" if the upgrade succeeds and a
 *	non-zero value if the upgrade fails.  If the upgrade fails, the map is
 *	returned without a read or write lock held.
 *
 *	Requires that the map be read locked.
 */
int
_vm_map_lock_upgrade(vm_map_t map, const char *file, int line)
{
	unsigned int last_timestamp;

	if (map->system_map) {
		mtx_assert_(&map->system_mtx, MA_OWNED, file, line);
	} else {
		if (!sx_try_upgrade_(&map->lock, file, line)) {
			last_timestamp = map->timestamp;
			sx_sunlock_(&map->lock, file, line);
			vm_map_process_deferred();
			/*
			 * If the map's timestamp does not change while the
			 * map is unlocked, then the upgrade succeeds.
			 */
			sx_xlock_(&map->lock, file, line);
			if (last_timestamp != map->timestamp) {
				sx_xunlock_(&map->lock, file, line);
				return (1);
			}
		}
	}
	map->timestamp++;
	return (0);
}

void
_vm_map_lock_downgrade(vm_map_t map, const char *file, int line)
{

	if (map->system_map) {
		mtx_assert_(&map->system_mtx, MA_OWNED, file, line);
	} else
		sx_downgrade_(&map->lock, file, line);
}

/*
 *	vm_map_locked:
 *
 *	Returns a non-zero value if the caller holds a write (exclusive) lock
 *	on the specified map and the value "0" otherwise.
 */
int
vm_map_locked(vm_map_t map)
{

	if (map->system_map)
		return (mtx_owned(&map->system_mtx));
	else
		return (sx_xlocked(&map->lock));
}

#ifdef INVARIANTS
static void
_vm_map_assert_locked(vm_map_t map, const char *file, int line)
{

	if (map->system_map)
		mtx_assert_(&map->system_mtx, MA_OWNED, file, line);
	else
		sx_assert_(&map->lock, SA_XLOCKED, file, line);
}

#define	VM_MAP_ASSERT_LOCKED(map) \
    _vm_map_assert_locked(map, LOCK_FILE, LOCK_LINE)
#else
#define	VM_MAP_ASSERT_LOCKED(map)
#endif

/*
 *	_vm_map_unlock_and_wait:
 *
 *	Atomically releases the lock on the specified map and puts the calling
 *	thread to sleep.  The calling thread will remain asleep until either
 *	vm_map_wakeup() is performed on the map or the specified timeout is
 *	exceeded.
 *
 *	WARNING!  This function does not perform deferred deallocations of
 *	objects and map entries.  Therefore, the calling thread is expected to
 *	reacquire the map lock after reawakening and later perform an ordinary
 *	unlock operation, such as vm_map_unlock(), before completing its
 *	operation on the map.
 */
int
_vm_map_unlock_and_wait(vm_map_t map, int timo, const char *file, int line)
{

	mtx_lock(&map_sleep_mtx);
	if (map->system_map)
		mtx_unlock_flags_(&map->system_mtx, 0, file, line);
	else
		sx_xunlock_(&map->lock, file, line);
	return (msleep(&map->root, &map_sleep_mtx, PDROP | PVM, "vmmaps",
	    timo));
}
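
/*
 * Example of the caller pattern described in the WARNING above (a minimal
 * sketch, assuming the usual vm_map.h wrapper macros; the condition being
 * waited for is hypothetical):
 *
 *	vm_map_lock(map);
 *	while (condition_not_yet_true(map)) {
 *		// Sleep until another thread calls vm_map_wakeup(map).
 *		(void)vm_map_unlock_and_wait(map, 0);
 *		vm_map_lock(map);
 *	}
 *	// ... modify the map ...
 *	vm_map_unlock(map);	// also runs the deferred deallocations
 */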

/*
 *	vm_map_wakeup:
 *
 *	Awaken any threads that have slept on the map using
 *	vm_map_unlock_and_wait().
 */
void
vm_map_wakeup(vm_map_t map)
{

	/*
	 * Acquire and release map_sleep_mtx to prevent a wakeup()
	 * from being performed (and lost) between the map unlock
	 * and the msleep() in _vm_map_unlock_and_wait().
	 */
	mtx_lock(&map_sleep_mtx);
	mtx_unlock(&map_sleep_mtx);
	wakeup(&map->root);
}

void
vm_map_busy(vm_map_t map)
{

	VM_MAP_ASSERT_LOCKED(map);
	map->busy++;
}

void
vm_map_unbusy(vm_map_t map)
{

	VM_MAP_ASSERT_LOCKED(map);
	KASSERT(map->busy, ("vm_map_unbusy: not busy"));
	if (--map->busy == 0 && (map->flags & MAP_BUSY_WAKEUP)) {
		vm_map_modflags(map, 0, MAP_BUSY_WAKEUP);
		wakeup(&map->busy);
	}
}

void
vm_map_wait_busy(vm_map_t map)
{

	VM_MAP_ASSERT_LOCKED(map);
	while (map->busy) {
		vm_map_modflags(map, MAP_BUSY_WAKEUP, 0);
		if (map->system_map)
			msleep(&map->busy, &map->system_mtx, 0, "mbusy", 0);
		else
			sx_sleep(&map->busy, &map->lock, 0, "mbusy", 0);
	}
	map->timestamp++;
}

long
vmspace_resident_count(struct vmspace *vmspace)
{
	return pmap_resident_count(vmspace_pmap(vmspace));
}

/*
 *	vm_map_create:
 *
 *	Creates and returns a new empty VM map with
 *	the given physical map structure, and having
 *	the given lower and upper address bounds.
 */
vm_map_t
vm_map_create(pmap_t pmap, vm_offset_t min, vm_offset_t max)
{
	vm_map_t result;

	result = uma_zalloc(mapzone, M_WAITOK);
	CTR1(KTR_VM, "vm_map_create: %p", result);
	_vm_map_init(result, pmap, min, max);
	return (result);
}

/*
 * Initialize an existing vm_map structure
 * such as that in the vmspace structure.
 */
static void
_vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max)
{

	map->header.next = map->header.prev = &map->header;
	map->needs_wakeup = FALSE;
	map->system_map = 0;
	map->pmap = pmap;
	map->min_offset = min;
	map->max_offset = max;
	map->flags = 0;
	map->root = NULL;
	map->timestamp = 0;
	map->busy = 0;
}

void
vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max)
{

	_vm_map_init(map, pmap, min, max);
	mtx_init(&map->system_mtx, "system map", NULL, MTX_DEF | MTX_DUPOK);
	sx_init(&map->lock, "user map");
}

/*
 *	vm_map_entry_dispose:	[ internal use only ]
 *
 *	Inverse of vm_map_entry_create.
 */
static void
vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry)
{
	uma_zfree(map->system_map ? kmapentzone : mapentzone, entry);
}

/*
 *	vm_map_entry_create:	[ internal use only ]
 *
 *	Allocates a VM map entry for insertion.
 *	No entry fields are filled in.
 */
static vm_map_entry_t
vm_map_entry_create(vm_map_t map)
{
	vm_map_entry_t new_entry;

	if (map->system_map)
		new_entry = uma_zalloc(kmapentzone, M_NOWAIT);
	else
		new_entry = uma_zalloc(mapentzone, M_WAITOK);
	if (new_entry == NULL)
		panic("vm_map_entry_create: kernel resources exhausted");
	return (new_entry);
}

/*
 *	vm_map_entry_set_behavior:
 *
 *	Set the expected access behavior, either normal, random, or
 *	sequential.
 */
static inline void
vm_map_entry_set_behavior(vm_map_entry_t entry, u_char behavior)
{
	entry->eflags = (entry->eflags & ~MAP_ENTRY_BEHAV_MASK) |
	    (behavior & MAP_ENTRY_BEHAV_MASK);
}

/*
 *	vm_map_entry_set_max_free:
 *
 *	Set the max_free field in a vm_map_entry.
 */
static inline void
vm_map_entry_set_max_free(vm_map_entry_t entry)
{

	entry->max_free = entry->adj_free;
	if (entry->left != NULL && entry->left->max_free > entry->max_free)
		entry->max_free = entry->left->max_free;
	if (entry->right != NULL && entry->right->max_free > entry->max_free)
		entry->max_free = entry->right->max_free;
}
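
/*
 * Worked example (a sketch with made-up numbers): adj_free is the gap
 * between an entry and its successor in the address-ordered list, and
 * max_free is the largest adj_free anywhere in the entry's subtree.  If an
 * entry's own gap is 4 pages, its left subtree's max_free is 16 pages, and
 * its right subtree's max_free is 8 pages, then after
 * vm_map_entry_set_max_free() the entry's max_free is 16 pages.  Free-space
 * searches can therefore skip any subtree whose max_free is too small.
 */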

/*
 *	vm_map_entry_splay:
 *
 *	The Sleator and Tarjan top-down splay algorithm with the
 *	following variation.  Max_free must be computed bottom-up, so
 *	on the downward pass, maintain the left and right spines in
 *	reverse order.  Then, make a second pass up each side to fix
 *	the pointers and compute max_free.  The time bound is O(log n)
 *	amortized.
 *
 *	The new root is the vm_map_entry containing "addr", or else an
 *	adjacent entry (lower or higher) if addr is not in the tree.
 *
 *	The map must be locked, and leaves it so.
 *
 *	Returns: the new root.
 */
static vm_map_entry_t
vm_map_entry_splay(vm_offset_t addr, vm_map_entry_t root)
{
	vm_map_entry_t llist, rlist;
	vm_map_entry_t ltree, rtree;
	vm_map_entry_t y;

	/* Special case of empty tree. */
	if (root == NULL)
		return (root);

	/*
	 * Pass One: Splay down the tree until we find addr or a NULL
	 * pointer where addr would go.  llist and rlist are the two
	 * sides in reverse order (bottom-up), with llist linked by
	 * the right pointer and rlist linked by the left pointer in
	 * the vm_map_entry.  Wait until Pass Two to set max_free on
	 * the two spines.
	 */
	llist = NULL;
	rlist = NULL;
	for (;;) {
		/* root is never NULL in here. */
		if (addr < root->start) {
			y = root->left;
			if (y == NULL)
				break;
			if (addr < y->start && y->left != NULL) {
				/* Rotate right and put y on rlist. */
				root->left = y->right;
				y->right = root;
				vm_map_entry_set_max_free(root);
				root = y->left;
				y->left = rlist;
				rlist = y;
			} else {
				/* Put root on rlist. */
				root->left = rlist;
				rlist = root;
				root = y;
			}
		} else if (addr >= root->end) {
			y = root->right;
			if (y == NULL)
				break;
			if (addr >= y->end && y->right != NULL) {
				/* Rotate left and put y on llist. */
				root->right = y->left;
				y->left = root;
				vm_map_entry_set_max_free(root);
				root = y->right;
				y->right = llist;
				llist = y;
			} else {
				/* Put root on llist. */
				root->right = llist;
				llist = root;
				root = y;
			}
		} else
			break;
	}

	/*
	 * Pass Two: Walk back up the two spines, flip the pointers
	 * and set max_free.  The subtrees of the root go at the
	 * bottom of llist and rlist.
	 */
	ltree = root->left;
	while (llist != NULL) {
		y = llist->right;
		llist->right = ltree;
		vm_map_entry_set_max_free(llist);
		ltree = llist;
		llist = y;
	}
	rtree = root->right;
	while (rlist != NULL) {
		y = rlist->left;
		rlist->left = rtree;
		vm_map_entry_set_max_free(rlist);
		rtree = rlist;
		rlist = y;
	}

	/*
	 * Final assembly: add ltree and rtree as subtrees of root.
	 */
	root->left = ltree;
	root->right = rtree;
	vm_map_entry_set_max_free(root);

	return (root);
}

/*
 *	vm_map_entry_{un,}link:
 *
 *	Insert/remove entries from maps.
 */
static void
vm_map_entry_link(vm_map_t map,
		  vm_map_entry_t after_where,
		  vm_map_entry_t entry)
{

	CTR4(KTR_VM,
	    "vm_map_entry_link: map %p, nentries %d, entry %p, after %p", map,
	    map->nentries, entry, after_where);
	VM_MAP_ASSERT_LOCKED(map);
	KASSERT(after_where == &map->header ||
	    after_where->end <= entry->start,
	    ("vm_map_entry_link: prev end %jx new start %jx overlap",
	    (uintmax_t)after_where->end, (uintmax_t)entry->start));
	KASSERT(after_where->next == &map->header ||
	    entry->end <= after_where->next->start,
	    ("vm_map_entry_link: new end %jx next start %jx overlap",
	    (uintmax_t)entry->end, (uintmax_t)after_where->next->start));

	map->nentries++;
	entry->prev = after_where;
	entry->next = after_where->next;
	entry->next->prev = entry;
	after_where->next = entry;

	if (after_where != &map->header) {
		if (after_where != map->root)
			vm_map_entry_splay(after_where->start, map->root);
		entry->right = after_where->right;
		entry->left = after_where;
		after_where->right = NULL;
		after_where->adj_free = entry->start - after_where->end;
		vm_map_entry_set_max_free(after_where);
	} else {
		entry->right = map->root;
		entry->left = NULL;
	}
	entry->adj_free = (entry->next == &map->header ? map->max_offset :
	    entry->next->start) - entry->end;
	vm_map_entry_set_max_free(entry);
	map->root = entry;
}

static void
vm_map_entry_unlink(vm_map_t map,
		    vm_map_entry_t entry)
{
	vm_map_entry_t next, prev, root;

	VM_MAP_ASSERT_LOCKED(map);
	if (entry != map->root)
		vm_map_entry_splay(entry->start, map->root);
	if (entry->left == NULL)
		root = entry->right;
	else {
		root = vm_map_entry_splay(entry->start, entry->left);
		root->right = entry->right;
		root->adj_free = (entry->next == &map->header ? map->max_offset :
		    entry->next->start) - root->end;
		vm_map_entry_set_max_free(root);
	}
	map->root = root;

	prev = entry->prev;
	next = entry->next;
	next->prev = prev;
	prev->next = next;
	map->nentries--;
	CTR3(KTR_VM, "vm_map_entry_unlink: map %p, nentries %d, entry %p", map,
	    map->nentries, entry);
}

/*
 *	vm_map_entry_resize_free:
 *
 *	Recompute the amount of free space following a vm_map_entry
 *	and propagate that value up the tree.  Call this function after
 *	resizing a map entry in-place, that is, without a call to
 *	vm_map_entry_link() or _unlink().
 *
 *	The map must be locked, and leaves it so.
 */
static void
vm_map_entry_resize_free(vm_map_t map, vm_map_entry_t entry)
{

	/*
	 * Using splay trees without parent pointers, propagating
	 * max_free up the tree is done by moving the entry to the
	 * root and making the change there.
	 */
	if (entry != map->root)
		map->root = vm_map_entry_splay(entry->start, map->root);

	entry->adj_free = (entry->next == &map->header ? map->max_offset :
	    entry->next->start) - entry->end;
	vm_map_entry_set_max_free(entry);
}
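
/*
 * Example (illustrative, editor's note): vm_map_insert() below grows an
 * existing entry in place and then repairs the free-space bookkeeping:
 *
 *	prev_entry->end = end;
 *	vm_map_entry_resize_free(map, prev_entry);
 *
 * The call splays prev_entry to the root and recomputes adj_free and
 * max_free there.
 */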

/*
 *	vm_map_lookup_entry:	[ internal use only ]
 *
 *	Finds the map entry containing (or
 *	immediately preceding) the specified address
 *	in the given map; the entry is returned
 *	in the "entry" parameter.  The boolean
 *	result indicates whether the address is
 *	actually contained in the map.
 */
boolean_t
vm_map_lookup_entry(
	vm_map_t map,
	vm_offset_t address,
	vm_map_entry_t *entry)	/* OUT */
{
	vm_map_entry_t cur;
	boolean_t locked;

	/*
	 * If the map is empty, then the map entry immediately preceding
	 * "address" is the map's header.
	 */
	cur = map->root;
	if (cur == NULL)
		*entry = &map->header;
	else if (address >= cur->start && cur->end > address) {
		*entry = cur;
		return (TRUE);
	} else if ((locked = vm_map_locked(map)) ||
	    sx_try_upgrade(&map->lock)) {
		/*
		 * Splay requires a write lock on the map.  However, it only
		 * restructures the binary search tree; it does not otherwise
		 * change the map.  Thus, the map's timestamp need not change
		 * on a temporary upgrade.
		 */
		map->root = cur = vm_map_entry_splay(address, cur);
		if (!locked)
			sx_downgrade(&map->lock);

		/*
		 * If "address" is contained within a map entry, the new root
		 * is that map entry.  Otherwise, the new root is a map entry
		 * immediately before or after "address".
		 */
		if (address >= cur->start) {
			*entry = cur;
			if (cur->end > address)
				return (TRUE);
		} else
			*entry = cur->prev;
	} else
		/*
		 * Since the map is only locked for read access, perform a
		 * standard binary search tree lookup for "address".
		 */
		for (;;) {
			if (address < cur->start) {
				if (cur->left == NULL) {
					*entry = cur->prev;
					break;
				}
				cur = cur->left;
			} else if (cur->end > address) {
				*entry = cur;
				return (TRUE);
			} else {
				if (cur->right == NULL) {
					*entry = cur;
					break;
				}
				cur = cur->right;
			}
		}
	return (FALSE);
}
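
/*
 * Example (illustrative sketch, assuming the vm_map_lock_read() and
 * vm_map_unlock_read() macros from vm_map.h): a typical read-side
 * caller checks the boolean result while holding the map lock:
 *
 *	vm_map_lock_read(map);
 *	if (vm_map_lookup_entry(map, addr, &entry)) {
 *		... addr lies within [entry->start, entry->end) ...
 *	} else {
 *		... entry precedes addr; it may be &map->header ...
 *	}
 *	vm_map_unlock_read(map);
 */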

/*
 *	vm_map_insert:
 *
 *	Inserts the given whole VM object into the target
 *	map at the specified address range.  The object's
 *	size should match that of the address range.
 *
 *	Requires that the map be locked, and leaves it so.
 *
 *	If object is non-NULL, ref count must be bumped by caller
 *	prior to making call to account for the new entry.
 */
int
vm_map_insert(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
    vm_offset_t start, vm_offset_t end, vm_prot_t prot, vm_prot_t max, int cow)
{
	vm_map_entry_t new_entry, prev_entry, temp_entry;
	vm_eflags_t protoeflags;
	struct ucred *cred;
	vm_inherit_t inheritance;

	VM_MAP_ASSERT_LOCKED(map);
	KASSERT((object != kmem_object && object != kernel_object) ||
	    (cow & MAP_COPY_ON_WRITE) == 0,
	    ("vm_map_insert: kmem or kernel object and COW"));
	KASSERT(object == NULL || (cow & MAP_NOFAULT) == 0,
	    ("vm_map_insert: paradoxical MAP_NOFAULT request"));

	/*
	 * Check that the start and end points are not bogus.
	 */
	if ((start < map->min_offset) || (end > map->max_offset) ||
	    (start >= end))
		return (KERN_INVALID_ADDRESS);

	/*
	 * Find the entry prior to the proposed starting address; if it's part
	 * of an existing entry, this range is bogus.
	 */
	if (vm_map_lookup_entry(map, start, &temp_entry))
		return (KERN_NO_SPACE);

	prev_entry = temp_entry;

	/*
	 * Assert that the next entry doesn't overlap the end point.
	 */
	if ((prev_entry->next != &map->header) &&
	    (prev_entry->next->start < end))
		return (KERN_NO_SPACE);

	protoeflags = 0;
	if (cow & MAP_COPY_ON_WRITE)
		protoeflags |= MAP_ENTRY_COW | MAP_ENTRY_NEEDS_COPY;
	if (cow & MAP_NOFAULT)
		protoeflags |= MAP_ENTRY_NOFAULT;
	if (cow & MAP_DISABLE_SYNCER)
		protoeflags |= MAP_ENTRY_NOSYNC;
	if (cow & MAP_DISABLE_COREDUMP)
		protoeflags |= MAP_ENTRY_NOCOREDUMP;
	if (cow & MAP_STACK_GROWS_DOWN)
		protoeflags |= MAP_ENTRY_GROWS_DOWN;
	if (cow & MAP_STACK_GROWS_UP)
		protoeflags |= MAP_ENTRY_GROWS_UP;
	if (cow & MAP_VN_WRITECOUNT)
		protoeflags |= MAP_ENTRY_VN_WRITECNT;
	if (cow & MAP_INHERIT_SHARE)
		inheritance = VM_INHERIT_SHARE;
	else
		inheritance = VM_INHERIT_DEFAULT;

	cred = NULL;
	if (cow & (MAP_ACC_NO_CHARGE | MAP_NOFAULT))
		goto charged;
	if ((cow & MAP_ACC_CHARGED) || ((prot & VM_PROT_WRITE) &&
	    ((protoeflags & MAP_ENTRY_NEEDS_COPY) || object == NULL))) {
		if (!(cow & MAP_ACC_CHARGED) && !swap_reserve(end - start))
			return (KERN_RESOURCE_SHORTAGE);
		KASSERT(object == NULL || (protoeflags & MAP_ENTRY_NEEDS_COPY) ||
		    object->cred == NULL,
		    ("OVERCOMMIT: vm_map_insert o %p", object));
		cred = curthread->td_ucred;
	}

charged:
	/* Expand the kernel pmap, if necessary. */
	if (map == kernel_map && end > kernel_vm_end)
		pmap_growkernel(end);
	if (object != NULL) {
		/*
		 * OBJ_ONEMAPPING must be cleared unless this mapping
		 * is trivially proven to be the only mapping for any
		 * of the object's pages.  (Object granularity
		 * reference counting is insufficient to recognize
		 * aliases with precision.)
		 */
		VM_OBJECT_WLOCK(object);
		if (object->ref_count > 1 || object->shadow_count != 0)
			vm_object_clear_flag(object, OBJ_ONEMAPPING);
		VM_OBJECT_WUNLOCK(object);
	}
	else if ((prev_entry != &map->header) &&
	    (prev_entry->eflags == protoeflags) &&
	    (cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0 &&
	    (prev_entry->end == start) &&
	    (prev_entry->wired_count == 0) &&
	    (prev_entry->cred == cred ||
	    (prev_entry->object.vm_object != NULL &&
	    (prev_entry->object.vm_object->cred == cred))) &&
	    vm_object_coalesce(prev_entry->object.vm_object,
	    prev_entry->offset,
	    (vm_size_t)(prev_entry->end - prev_entry->start),
	    (vm_size_t)(end - prev_entry->end), cred != NULL &&
	    (protoeflags & MAP_ENTRY_NEEDS_COPY) == 0)) {
		/*
		 * We were able to extend the object.  Determine if we
		 * can extend the previous map entry to include the
		 * new range as well.
		 */
		if ((prev_entry->inheritance == inheritance) &&
		    (prev_entry->protection == prot) &&
		    (prev_entry->max_protection == max)) {
			map->size += (end - prev_entry->end);
			prev_entry->end = end;
			vm_map_entry_resize_free(map, prev_entry);
			vm_map_simplify_entry(map, prev_entry);
			return (KERN_SUCCESS);
		}

		/*
		 * If we can extend the object but cannot extend the
		 * map entry, we have to create a new map entry.  We
		 * must bump the ref count on the extended object to
		 * account for it.  object may be NULL.
		 */
		object = prev_entry->object.vm_object;
		offset = prev_entry->offset +
		    (prev_entry->end - prev_entry->start);
		vm_object_reference(object);
		if (cred != NULL && object != NULL && object->cred != NULL &&
		    !(prev_entry->eflags & MAP_ENTRY_NEEDS_COPY)) {
			/* Object already accounts for this uid. */
			cred = NULL;
		}
	}
	if (cred != NULL)
		crhold(cred);

	/*
	 * Create a new entry
	 */
	new_entry = vm_map_entry_create(map);
	new_entry->start = start;
	new_entry->end = end;
	new_entry->cred = NULL;

	new_entry->eflags = protoeflags;
	new_entry->object.vm_object = object;
	new_entry->offset = offset;
	new_entry->avail_ssize = 0;

	new_entry->inheritance = inheritance;
	new_entry->protection = prot;
	new_entry->max_protection = max;
	new_entry->wired_count = 0;
	new_entry->wiring_thread = NULL;
	new_entry->read_ahead = VM_FAULT_READ_AHEAD_INIT;
	new_entry->next_read = OFF_TO_IDX(offset);

	KASSERT(cred == NULL || !ENTRY_CHARGED(new_entry),
	    ("OVERCOMMIT: vm_map_insert leaks vm_map %p", new_entry));
	new_entry->cred = cred;

	/*
	 * Insert the new entry into the list
	 */
	vm_map_entry_link(map, prev_entry, new_entry);
	map->size += new_entry->end - new_entry->start;

	/*
	 * Try to coalesce the new entry with both the previous and next
	 * entries in the list.  Previously, we only attempted to coalesce
	 * with the previous entry when object is NULL.  Here, we handle the
	 * other cases, which are less common.
	 */
	vm_map_simplify_entry(map, new_entry);

	if (cow & (MAP_PREFAULT|MAP_PREFAULT_PARTIAL)) {
		vm_map_pmap_enter(map, start, prot,
		    object, OFF_TO_IDX(offset), end - start,
		    cow & MAP_PREFAULT_PARTIAL);
	}

	return (KERN_SUCCESS);
}
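
/*
 * Example (illustrative, editor's note): vm_map_fixed() below shows the
 * expected calling sequence; the map is write-locked across the call
 * and any non-NULL object is referenced by the caller beforehand:
 *
 *	vm_object_reference(object);
 *	vm_map_lock(map);
 *	rv = vm_map_insert(map, object, offset, start, end, prot, max,
 *	    cow);
 *	vm_map_unlock(map);
 */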

/*
 *	vm_map_findspace:
 *
 *	Find the first fit (lowest VM address) for "length" free bytes
 *	beginning at address >= start in the given map.
 *
 *	In a vm_map_entry, "adj_free" is the amount of free space
 *	adjacent (higher address) to this entry, and "max_free" is the
 *	maximum amount of contiguous free space in its subtree.  This
 *	allows finding a free region in one path down the tree, so
 *	O(log n) amortized with splay trees.
 *
 *	The map must be locked, and leaves it so.
 *
 *	Returns: 0 on success, and starting address in *addr,
 *		 1 if insufficient space.
 */
int
vm_map_findspace(vm_map_t map, vm_offset_t start, vm_size_t length,
    vm_offset_t *addr)	/* OUT */
{
	vm_map_entry_t entry;
	vm_offset_t st;

	/*
	 * Request must fit within min/max VM address and must avoid
	 * address wrap.
	 */
	if (start < map->min_offset)
		start = map->min_offset;
	if (start + length > map->max_offset || start + length < start)
		return (1);

	/* Empty tree means wide open address space. */
	if (map->root == NULL) {
		*addr = start;
		return (0);
	}

	/*
	 * After splay, if start comes before root node, then there
	 * must be a gap from start to the root.
	 */
	map->root = vm_map_entry_splay(start, map->root);
	if (start + length <= map->root->start) {
		*addr = start;
		return (0);
	}

	/*
	 * Root is the last node that might begin its gap before
	 * start, and this is the last comparison where address
	 * wrap might be a problem.
	 */
	st = (start > map->root->end) ? start : map->root->end;
	if (length <= map->root->end + map->root->adj_free - st) {
		*addr = st;
		return (0);
	}

	/* With max_free, can immediately tell if no solution. */
	entry = map->root->right;
	if (entry == NULL || length > entry->max_free)
		return (1);

	/*
	 * Search the right subtree in the order: left subtree, root,
	 * right subtree (first fit).  The previous splay implies that
	 * all regions in the right subtree have addresses > start.
	 */
	while (entry != NULL) {
		if (entry->left != NULL && entry->left->max_free >= length)
			entry = entry->left;
		else if (entry->adj_free >= length) {
			*addr = entry->end;
			return (0);
		} else
			entry = entry->right;
	}

	/* Can't get here, so panic if we do. */
	panic("vm_map_findspace: max_free corrupt");
}
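
/*
 * Example (illustrative sketch): the usual pattern, also followed by
 * vm_map_find() below, searches and inserts under a single write lock:
 *
 *	vm_map_lock(map);
 *	if (vm_map_findspace(map, start, length, &addr) == 0)
 *		rv = vm_map_insert(map, NULL, 0, addr, addr + length,
 *		    VM_PROT_ALL, VM_PROT_ALL, 0);
 *	vm_map_unlock(map);
 */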

int
vm_map_fixed(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
    vm_offset_t start, vm_size_t length, vm_prot_t prot,
    vm_prot_t max, int cow)
{
	vm_offset_t end;
	int result;

	end = start + length;
	KASSERT((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0 ||
	    object == NULL,
	    ("vm_map_fixed: non-NULL backing object for stack"));
	vm_map_lock(map);
	VM_MAP_RANGE_CHECK(map, start, end);
	if ((cow & MAP_CHECK_EXCL) == 0)
		vm_map_delete(map, start, end);
	if ((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) != 0) {
		result = vm_map_stack_locked(map, start, length, sgrowsiz,
		    prot, max, cow);
	} else {
		result = vm_map_insert(map, object, offset, start, end,
		    prot, max, cow);
	}
	vm_map_unlock(map);
	return (result);
}

/*
 *	vm_map_find finds an unallocated region in the target address
 *	map with the given length.  The search is defined to be
 *	first-fit from the specified address; the region found is
 *	returned in the same parameter.
 *
 *	If object is non-NULL, ref count must be bumped by caller
 *	prior to making call to account for the new entry.
 */
int
vm_map_find(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
	    vm_offset_t *addr,	/* IN/OUT */
	    vm_size_t length, vm_offset_t max_addr, int find_space,
	    vm_prot_t prot, vm_prot_t max, int cow)
{
	vm_offset_t alignment, initial_addr, start;
	int result;

	KASSERT((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) == 0 ||
	    object == NULL,
	    ("vm_map_find: non-NULL backing object for stack"));
	if (find_space == VMFS_OPTIMAL_SPACE && (object == NULL ||
	    (object->flags & OBJ_COLORED) == 0))
		find_space = VMFS_ANY_SPACE;
	if (find_space >> 8 != 0) {
		KASSERT((find_space & 0xff) == 0, ("bad VMFS flags"));
		alignment = (vm_offset_t)1 << (find_space >> 8);
	} else
		alignment = 0;
	initial_addr = *addr;
again:
	start = initial_addr;
	vm_map_lock(map);
	do {
		if (find_space != VMFS_NO_SPACE) {
			if (vm_map_findspace(map, start, length, addr) ||
			    (max_addr != 0 && *addr + length > max_addr)) {
				vm_map_unlock(map);
				if (find_space == VMFS_OPTIMAL_SPACE) {
					find_space = VMFS_ANY_SPACE;
					goto again;
				}
				return (KERN_NO_SPACE);
			}
			switch (find_space) {
			case VMFS_SUPER_SPACE:
			case VMFS_OPTIMAL_SPACE:
				pmap_align_superpage(object, offset, addr,
				    length);
				break;
			case VMFS_ANY_SPACE:
				break;
			default:
				if ((*addr & (alignment - 1)) != 0) {
					*addr &= ~(alignment - 1);
					*addr += alignment;
				}
				break;
			}

			start = *addr;
		}
		if ((cow & (MAP_STACK_GROWS_DOWN | MAP_STACK_GROWS_UP)) != 0) {
			result = vm_map_stack_locked(map, start, length,
			    sgrowsiz, prot, max, cow);
		} else {
			result = vm_map_insert(map, object, offset, start,
			    start + length, prot, max, cow);
		}
	} while (result == KERN_NO_SPACE && find_space != VMFS_NO_SPACE &&
	    find_space != VMFS_ANY_SPACE);
	vm_map_unlock(map);
	return (result);
}
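
/*
 * Example (illustrative sketch; vm_map_min() is assumed from vm_map.h):
 * a caller reserving address space in the kernel map at no particular
 * location might do
 *
 *	addr = vm_map_min(kernel_map);
 *	rv = vm_map_find(kernel_map, NULL, 0, &addr, size, 0,
 *	    VMFS_ANY_SPACE, VM_PROT_ALL, VM_PROT_ALL, 0);
 *
 * On success the chosen start address is returned in addr; with a
 * non-NULL object the caller would take a reference on it first, as
 * noted above.
 */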

/*
 *	vm_map_simplify_entry:
 *
 *	Simplify the given map entry by merging with either neighbor.  This
 *	routine also has the ability to merge with both neighbors.
 *
 *	The map must be locked.
 *
 *	This routine guarantees that the passed entry remains valid (though
 *	possibly extended).  When merging, this routine may delete one or
 *	both neighbors.
 */
void
vm_map_simplify_entry(vm_map_t map, vm_map_entry_t entry)
{
	vm_map_entry_t next, prev;
	vm_size_t prevsize, esize;

	if ((entry->eflags & (MAP_ENTRY_GROWS_DOWN | MAP_ENTRY_GROWS_UP |
	    MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_IS_SUB_MAP)) != 0)
		return;

	prev = entry->prev;
	if (prev != &map->header) {
		prevsize = prev->end - prev->start;
		if ( (prev->end == entry->start) &&
		     (prev->object.vm_object == entry->object.vm_object) &&
		     (!prev->object.vm_object ||
			(prev->offset + prevsize == entry->offset)) &&
		     (prev->eflags == entry->eflags) &&
		     (prev->protection == entry->protection) &&
		     (prev->max_protection == entry->max_protection) &&
		     (prev->inheritance == entry->inheritance) &&
		     (prev->wired_count == entry->wired_count) &&
		     (prev->cred == entry->cred)) {
			vm_map_entry_unlink(map, prev);
			entry->start = prev->start;
			entry->offset = prev->offset;
			if (entry->prev != &map->header)
				vm_map_entry_resize_free(map, entry->prev);

			/*
			 * If the backing object is a vnode object,
			 * vm_object_deallocate() calls vrele().
			 * However, vrele() does not lock the vnode
			 * because the vnode has additional
			 * references.  Thus, the map lock can be kept
			 * without causing a lock-order reversal with
			 * the vnode lock.
			 *
			 * Since we count the number of virtual page
			 * mappings in object->un_pager.vnp.writemappings,
			 * the writemappings value should not be adjusted
			 * when the entry is disposed of.
			 */
			if (prev->object.vm_object)
				vm_object_deallocate(prev->object.vm_object);
			if (prev->cred != NULL)
				crfree(prev->cred);
			vm_map_entry_dispose(map, prev);
		}
	}

	next = entry->next;
	if (next != &map->header) {
		esize = entry->end - entry->start;
		if ((entry->end == next->start) &&
		    (next->object.vm_object == entry->object.vm_object) &&
		    (!entry->object.vm_object ||
			(entry->offset + esize == next->offset)) &&
		    (next->eflags == entry->eflags) &&
		    (next->protection == entry->protection) &&
		    (next->max_protection == entry->max_protection) &&
		    (next->inheritance == entry->inheritance) &&
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge appropriate uid when allocation if performed by
kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
(next->wired_count == entry->wired_count) &&
|
2010-12-02 17:37:16 +00:00
|
|
|
(next->cred == entry->cred)) {
|
1996-03-13 01:18:14 +00:00
|
|
|
vm_map_entry_unlink(map, next);
|
|
|
|
entry->end = next->end;
|
2004-08-13 08:06:34 +00:00
|
|
|
vm_map_entry_resize_free(map, entry);
|
2009-02-08 20:00:33 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* See comment above.
|
|
|
|
*/
|
1996-05-18 03:38:05 +00:00
|
|
|
if (next->object.vm_object)
|
|
|
|
vm_object_deallocate(next->object.vm_object);
|
2010-12-02 17:37:16 +00:00
|
|
|
if (next->cred != NULL)
|
|
|
|
crfree(next->cred);
|
1996-03-13 01:18:14 +00:00
|
|
|
vm_map_entry_dispose(map, next);
|
2003-11-03 16:14:45 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
}
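
/*
 * Illustrative sketch (editor's addition, not part of the original
 * vm_map.c): two neighboring entries are only coalesced above when they are
 * adjacent both in virtual address space (prev->end == entry->start) and in
 * their backing object (prev->offset + prevsize == entry->offset), with all
 * other attributes equal.  The guarded-out user-space fragment below checks
 * just that adjacency arithmetic on plain integers; the struct and function
 * names are hypothetical.
 */
#if 0
#include <assert.h>
#include <stdint.h>

struct toy_entry {
	uintptr_t start, end;	/* virtual address range */
	uintptr_t offset;	/* offset into the backing object */
};

static int
toy_mergeable(const struct toy_entry *prev, const struct toy_entry *entry)
{
	uintptr_t prevsize = prev->end - prev->start;

	return (prev->end == entry->start &&
	    prev->offset + prevsize == entry->offset);
}

int
main(void)
{
	struct toy_entry a = { 0x1000, 0x3000, 0x0000 };	/* two pages */
	struct toy_entry b = { 0x3000, 0x4000, 0x2000 };	/* next page */
	struct toy_entry c = { 0x3000, 0x4000, 0x5000 };	/* offset gap */

	assert(toy_mergeable(&a, &b));	/* contiguous in VA and object */
	assert(!toy_mergeable(&a, &c));	/* VA-adjacent but object hole */
	return (0);
}
#endif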

/*
 * vm_map_clip_start:	[ internal use only ]
 *
 *	Asserts that the given entry begins at or after
 *	the specified address; if necessary,
 *	it splits the entry into two.
 */
#define vm_map_clip_start(map, entry, startaddr) \
{ \
	if (startaddr > entry->start) \
		_vm_map_clip_start(map, entry, startaddr); \
}

/*
 *	This routine is called only when it is known that
 *	the entry must be split.
 */
static void
_vm_map_clip_start(vm_map_t map, vm_map_entry_t entry, vm_offset_t start)
{
	vm_map_entry_t new_entry;

	VM_MAP_ASSERT_LOCKED(map);

	/*
	 * Split off the front portion -- note that we must insert the new
	 * entry BEFORE this one, so that this entry has the specified
	 * starting address.
	 */
	vm_map_simplify_entry(map, entry);

	/*
	 * If there is no object backing this entry, we might as well create
	 * one now.  If we defer it, an object can get created after the map
	 * is clipped, and individual objects will be created for the split-up
	 * map.  This is a bit of a hack, but is also about the best place to
	 * put this improvement.
	 */
	if (entry->object.vm_object == NULL && !map->system_map) {
		vm_object_t object;
		object = vm_object_allocate(OBJT_DEFAULT,
		    atop(entry->end - entry->start));
		entry->object.vm_object = object;
		entry->offset = 0;
		if (entry->cred != NULL) {
			object->cred = entry->cred;
			object->charge = entry->end - entry->start;
			entry->cred = NULL;
		}
	} else if (entry->object.vm_object != NULL &&
	    ((entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0) &&
	    entry->cred != NULL) {
		VM_OBJECT_WLOCK(entry->object.vm_object);
		KASSERT(entry->object.vm_object->cred == NULL,
		    ("OVERCOMMIT: vm_entry_clip_start: both cred e %p", entry));
		entry->object.vm_object->cred = entry->cred;
		entry->object.vm_object->charge = entry->end - entry->start;
		VM_OBJECT_WUNLOCK(entry->object.vm_object);
		entry->cred = NULL;
	}

	new_entry = vm_map_entry_create(map);
	*new_entry = *entry;

	new_entry->end = start;
	entry->offset += (start - entry->start);
	entry->start = start;
	if (new_entry->cred != NULL)
		crhold(entry->cred);

	vm_map_entry_link(map, entry->prev, new_entry);

	if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
		vm_object_reference(new_entry->object.vm_object);
		/*
		 * The object->un_pager.vnp.writemappings for the
		 * object of MAP_ENTRY_VN_WRITECNT type entry shall be
		 * kept as is here.  The virtual pages are
		 * re-distributed among the clipped entries, so the sum is
		 * left the same.
		 */
	}
}

/*
 * vm_map_clip_end:	[ internal use only ]
 *
 *	Asserts that the given entry ends at or before
 *	the specified address; if necessary,
 *	it splits the entry into two.
 */
#define vm_map_clip_end(map, entry, endaddr) \
{ \
	if ((endaddr) < (entry->end)) \
		_vm_map_clip_end((map), (entry), (endaddr)); \
}

/*
 *	This routine is called only when it is known that
 *	the entry must be split.
 */
static void
_vm_map_clip_end(vm_map_t map, vm_map_entry_t entry, vm_offset_t end)
{
	vm_map_entry_t new_entry;

	VM_MAP_ASSERT_LOCKED(map);

	/*
	 * If there is no object backing this entry, we might as well create
	 * one now.  If we defer it, an object can get created after the map
	 * is clipped, and individual objects will be created for the split-up
	 * map.  This is a bit of a hack, but is also about the best place to
	 * put this improvement.
	 */
	if (entry->object.vm_object == NULL && !map->system_map) {
		vm_object_t object;
		object = vm_object_allocate(OBJT_DEFAULT,
		    atop(entry->end - entry->start));
		entry->object.vm_object = object;
		entry->offset = 0;
		if (entry->cred != NULL) {
			object->cred = entry->cred;
			object->charge = entry->end - entry->start;
			entry->cred = NULL;
		}
	} else if (entry->object.vm_object != NULL &&
	    ((entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0) &&
	    entry->cred != NULL) {
		VM_OBJECT_WLOCK(entry->object.vm_object);
		KASSERT(entry->object.vm_object->cred == NULL,
		    ("OVERCOMMIT: vm_entry_clip_end: both cred e %p", entry));
		entry->object.vm_object->cred = entry->cred;
		entry->object.vm_object->charge = entry->end - entry->start;
		VM_OBJECT_WUNLOCK(entry->object.vm_object);
		entry->cred = NULL;
	}

	/*
	 * Create a new entry and insert it AFTER the specified entry
	 */
	new_entry = vm_map_entry_create(map);
	*new_entry = *entry;

	new_entry->start = entry->end = end;
	new_entry->offset += (end - entry->start);
	if (new_entry->cred != NULL)
		crhold(entry->cred);

	vm_map_entry_link(map, entry, new_entry);

	if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
		vm_object_reference(new_entry->object.vm_object);
	}
}
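
/*
 * Illustrative sketch (editor's addition, not part of the original
 * vm_map.c): clipping splits one entry into two at a boundary address while
 * keeping the object offsets consistent.  For _vm_map_clip_end() above, the
 * original entry keeps the low half of its range and the new entry covers
 * the high half with its offset advanced by the size of the low half.  The
 * guarded-out user-space fragment below reproduces that bookkeeping on plain
 * integers; the names are hypothetical.
 */
#if 0
#include <assert.h>
#include <stdint.h>

struct toy_entry {
	uintptr_t start, end, offset;
};

static struct toy_entry
toy_clip_end(struct toy_entry *entry, uintptr_t split)
{
	struct toy_entry new_entry = *entry;

	/* Same assignments as _vm_map_clip_end(). */
	new_entry.start = entry->end = split;
	new_entry.offset += (split - entry->start);
	return (new_entry);
}

int
main(void)
{
	struct toy_entry e = { 0x10000, 0x20000, 0x0 };
	struct toy_entry hi = toy_clip_end(&e, 0x14000);

	assert(e.start == 0x10000 && e.end == 0x14000 && e.offset == 0x0);
	assert(hi.start == 0x14000 && hi.end == 0x20000 && hi.offset == 0x4000);
	return (0);
}
#endif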

/*
 * vm_map_submap:	[ kernel use only ]
 *
 *	Mark the given range as handled by a subordinate map.
 *
 *	This range must have been created with vm_map_find,
 *	and no other operations may have been performed on this
 *	range prior to calling vm_map_submap.
 *
 *	Only a limited number of operations can be performed
 *	within this range after calling vm_map_submap:
 *		vm_fault
 *	[Don't try vm_map_copy!]
 *
 *	To remove a submapping, one must first remove the
 *	range from the superior map, and then destroy the
 *	submap (if desired).  [Better yet, don't try it.]
 */
int
vm_map_submap(
	vm_map_t map,
	vm_offset_t start,
	vm_offset_t end,
	vm_map_t submap)
{
	vm_map_entry_t entry;
	int result = KERN_INVALID_ARGUMENT;

	vm_map_lock(map);

	VM_MAP_RANGE_CHECK(map, start, end);

	if (vm_map_lookup_entry(map, start, &entry)) {
		vm_map_clip_start(map, entry, start);
	} else
		entry = entry->next;

	vm_map_clip_end(map, entry, end);

	if ((entry->start == start) && (entry->end == end) &&
	    ((entry->eflags & MAP_ENTRY_COW) == 0) &&
	    (entry->object.vm_object == NULL)) {
		entry->object.sub_map = submap;
		entry->eflags |= MAP_ENTRY_IS_SUB_MAP;
		result = KERN_SUCCESS;
	}
	vm_map_unlock(map);

	return (result);
}
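
/*
 * Illustrative sketch (editor's addition, not part of the original
 * vm_map.c): vm_map_submap() only succeeds when the clipped entry exactly
 * covers [start, end) and carries no backing object or COW state, so a
 * caller is expected to reserve the range first and hand it over untouched.
 * The guarded-out fragment below shows that calling pattern; the helper name
 * is hypothetical, and creation of the parent map, the submap, and the
 * reserved range (e.g. via vm_map_find()) is assumed to have happened
 * elsewhere.
 */
#if 0
static int
toy_install_submap(vm_map_t parent, vm_offset_t start, vm_offset_t end,
    vm_map_t submap)
{
	int rv;

	/*
	 * [start, end) must have been created in "parent" and left alone;
	 * otherwise the exact-match check in vm_map_submap() fails and
	 * KERN_INVALID_ARGUMENT is returned.
	 */
	rv = vm_map_submap(parent, start, end, submap);
	return (rv == KERN_SUCCESS ? 0 : EINVAL);
}
#endif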

/*
 * The maximum number of pages to map if MAP_PREFAULT_PARTIAL is specified
 */
#define	MAX_INIT_PT	96

/*
 *	vm_map_pmap_enter:
 *
 *	Preload the specified map's pmap with mappings to the specified
 *	object's memory-resident pages.  No further physical pages are
 *	allocated, and no further virtual pages are retrieved from secondary
 *	storage.  If the specified flags include MAP_PREFAULT_PARTIAL, then a
 *	limited number of page mappings are created at the low end of the
 *	specified address range.  (For this purpose, a superpage mapping
 *	counts as one page mapping.)  Otherwise, all resident pages within
 *	the specified address range are mapped.  Because these mappings are
 *	being created speculatively, cached pages are not reactivated and
 *	mapped.
 */
static void
vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot,
    vm_object_t object, vm_pindex_t pindex, vm_size_t size, int flags)
{
	vm_offset_t start;
	vm_page_t p, p_start;
	vm_pindex_t mask, psize, threshold, tmpidx;

	if ((prot & (VM_PROT_READ | VM_PROT_EXECUTE)) == 0 || object == NULL)
		return;
	VM_OBJECT_RLOCK(object);
	if (object->type == OBJT_DEVICE || object->type == OBJT_SG) {
		VM_OBJECT_RUNLOCK(object);
		VM_OBJECT_WLOCK(object);
		if (object->type == OBJT_DEVICE || object->type == OBJT_SG) {
			pmap_object_init_pt(map->pmap, addr, object, pindex,
			    size);
			VM_OBJECT_WUNLOCK(object);
			return;
		}
		VM_OBJECT_LOCK_DOWNGRADE(object);
	}

	psize = atop(size);
	if (psize + pindex > object->size) {
		if (object->size < pindex) {
			VM_OBJECT_RUNLOCK(object);
			return;
		}
		psize = object->size - pindex;
	}

	start = 0;
	p_start = NULL;
	threshold = MAX_INIT_PT;

	p = vm_page_find_least(object, pindex);
	/*
	 * Assert: the variable p is either (1) the page with the
	 * least pindex greater than or equal to the parameter pindex
	 * or (2) NULL.
	 */
	for (;
	    p != NULL && (tmpidx = p->pindex - pindex) < psize;
	    p = TAILQ_NEXT(p, listq)) {
		/*
		 * Don't allow a madvise to blow away our really
		 * free pages by allocating pv entries.
		 */
		if (((flags & MAP_PREFAULT_MADVISE) != 0 &&
		    vm_cnt.v_free_count < vm_cnt.v_free_reserved) ||
		    ((flags & MAP_PREFAULT_PARTIAL) != 0 &&
		    tmpidx >= threshold)) {
			psize = tmpidx;
			break;
		}
		if (p->valid == VM_PAGE_BITS_ALL) {
			if (p_start == NULL) {
				start = addr + ptoa(tmpidx);
				p_start = p;
			}
			/* Jump ahead if a superpage mapping is possible. */
			if (p->psind > 0 && ((addr + ptoa(tmpidx)) &
			    (pagesizes[p->psind] - 1)) == 0) {
				mask = atop(pagesizes[p->psind]) - 1;
				if (tmpidx + mask < psize &&
				    vm_page_ps_is_valid(p)) {
					p += mask;
					threshold += mask;
				}
			}
		} else if (p_start != NULL) {
			pmap_enter_object(map->pmap, start, addr +
			    ptoa(tmpidx), p_start, prot);
			p_start = NULL;
		}
	}
	if (p_start != NULL)
		pmap_enter_object(map->pmap, start, addr + ptoa(psize),
		    p_start, prot);
	VM_OBJECT_RUNLOCK(object);
}
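
/*
 * Illustrative sketch (editor's addition, not part of the original
 * vm_map.c): the "jump ahead" logic above only considers a superpage when
 * the candidate mapping address is aligned to the superpage size, and then
 * skips atop(pagesizes[psind]) - 1 base pages at once.  The guarded-out
 * user-space fragment below reproduces that arithmetic for 4 KB base pages
 * and a 2 MB superpage; the constants and names are hypothetical.
 */
#if 0
#include <assert.h>
#include <stdint.h>

#define	TOY_PAGE_SHIFT	12			/* 4 KB base pages */
#define	TOY_SUPERPAGE	(2u * 1024 * 1024)	/* 2 MB superpage */

int
main(void)
{
	uintptr_t mask, addr_aligned = 0x200000, addr_unaligned = 0x201000;

	/* Base pages spanned by one superpage, minus one: the skip distance. */
	mask = (TOY_SUPERPAGE >> TOY_PAGE_SHIFT) - 1;
	assert(mask == 511);

	/* Only superpage-aligned addresses qualify for the jump. */
	assert((addr_aligned & (TOY_SUPERPAGE - 1)) == 0);
	assert((addr_unaligned & (TOY_SUPERPAGE - 1)) != 0);
	return (0);
}
#endif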
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_map_protect:
|
|
|
|
*
|
|
|
|
* Sets the protection of the specified address
|
|
|
|
* region in the target map. If "set_max" is
|
|
|
|
* specified, the maximum protection is to be set;
|
|
|
|
* otherwise, only the current protection is affected.
|
|
|
|
*/
|
|
|
|
int
|
1997-08-25 22:15:31 +00:00
|
|
|
vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end,
|
|
|
|
vm_prot_t new_prot, boolean_t set_max)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2009-10-27 10:15:58 +00:00
|
|
|
vm_map_entry_t current, entry;
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge appropriate uid when allocation if performed by
kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
vm_object_t obj;
|
2010-12-02 17:37:16 +00:00
|
|
|
struct ucred *cred;
|
2009-10-27 10:15:58 +00:00
|
|
|
vm_prot_t old_prot;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2013-11-20 09:03:48 +00:00
|
|
|
if (start == end)
|
|
|
|
return (KERN_SUCCESS);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_lock(map);
|
|
|
|
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
|
|
|
|
if (vm_map_lookup_entry(map, start, &entry)) {
|
|
|
|
vm_map_clip_start(map, entry, start);
|
1996-12-28 23:07:49 +00:00
|
|
|
} else {
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = entry->next;
|
1996-12-28 23:07:49 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Make a first pass to check for protection violations.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
current = entry;
|
|
|
|
while ((current != &map->header) && (current->start < end)) {
|
1997-01-16 04:16:22 +00:00
|
|
|
if (current->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
1995-02-02 09:09:15 +00:00
|
|
|
vm_map_unlock(map);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ARGUMENT);
|
1995-02-02 09:09:15 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
if ((new_prot & current->max_protection) != new_prot) {
|
|
|
|
vm_map_unlock(map);
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_PROTECTION_FAILURE);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
current = current->next;
|
|
|
|
}
|
|
|
|
|
Implement global and per-uid accounting of anonymous memory. Add the
rlimit RLIMIT_SWAP, which limits the amount of swap that may be reserved
for a uid.
The accounting information (charge) is associated with either the map entry
or the vm object backing the entry, assuming the object is the first one
in the shadow chain and the entry does not require COW. The charge is moved
from the entry to the object when the object is allocated, e.g. during mmap
if the object is allocated there, or on the first page fault on the
entry. It moves back to the entry on fork due to the COW setup.
The per-entry granularity of accounting keeps the charging fair for
processes that change uid during their lifetime, and decrements the charge
for the proper uid when a region is unmapped.
The vm_pager_allocate(9) interface is extended with a struct ucred * that
is used to charge the appropriate uid when the allocation is performed by
the kernel, e.g. by md(4).
Several syscalls, fork(2) among them, may now return ENOMEM when the
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
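The effect of this accounting is easiest to see from userland. The sketch below is illustrative only and not part of vm_map.c: it lowers RLIMIT_SWAP with setrlimit(2) and then tries to reserve more anonymous memory than the limit allows. Assuming an unprivileged process and the default swap-reservation behavior, the reservation is expected to fail with ENOMEM; the 1 MB limit and 4 MB mapping size are arbitrary values chosen for the example.

#include <sys/mman.h>
#include <sys/resource.h>
#include <errno.h>
#include <stdio.h>

int
main(void)
{
	/* Arbitrary 1 MB per-uid swap reservation limit (illustrative). */
	struct rlimit rl = { .rlim_cur = 1024 * 1024, .rlim_max = 1024 * 1024 };
	void *p;

	if (setrlimit(RLIMIT_SWAP, &rl) != 0)
		perror("setrlimit(RLIMIT_SWAP)");

	/* Reserving more anonymous memory than the limit should fail. */
	p = mmap(NULL, 4 * 1024 * 1024, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED && errno == ENOMEM)
		printf("per-uid swap reservation denied as expected\n");
	return (0);
}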
2009-06-23 20:45:22 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Do an accounting pass for private read-only mappings that
|
|
|
|
* will now do COW due to the newly allowed write (e.g. debugger sets
|
|
|
|
* breakpoint on text segment)
|
|
|
|
*/
|
|
|
|
for (current = entry; (current != &map->header) &&
|
|
|
|
(current->start < end); current = current->next) {
|
|
|
|
|
|
|
|
vm_map_clip_end(map, current, end);
|
|
|
|
|
|
|
|
if (set_max ||
|
|
|
|
((new_prot & ~(current->protection)) & VM_PROT_WRITE) == 0 ||
|
|
|
|
ENTRY_CHARGED(current)) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2010-12-02 17:37:16 +00:00
|
|
|
cred = curthread->td_ucred;
|
2009-06-23 20:45:22 +00:00
|
|
|
obj = current->object.vm_object;
|
|
|
|
|
|
|
|
if (obj == NULL || (current->eflags & MAP_ENTRY_NEEDS_COPY)) {
|
|
|
|
if (!swap_reserve(current->end - current->start)) {
|
|
|
|
vm_map_unlock(map);
|
|
|
|
return (KERN_RESOURCE_SHORTAGE);
|
|
|
|
}
|
2010-12-02 17:37:16 +00:00
|
|
|
crhold(cred);
|
|
|
|
current->cred = cred;
|
2009-06-23 20:45:22 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2013-02-20 12:03:20 +00:00
|
|
|
VM_OBJECT_WLOCK(obj);
|
2009-06-23 20:45:22 +00:00
|
|
|
if (obj->type != OBJT_DEFAULT && obj->type != OBJT_SWAP) {
|
2013-02-20 12:03:20 +00:00
|
|
|
VM_OBJECT_WUNLOCK(obj);
|
2009-06-23 20:45:22 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Charge for the whole object allocation now, since
|
|
|
|
* we cannot distinguish between non-charged and
|
|
|
|
* charged clipped mapping of the same object later.
|
|
|
|
*/
|
|
|
|
KASSERT(obj->charge == 0,
|
2014-05-10 16:30:48 +00:00
|
|
|
("vm_map_protect: object %p overcharged (entry %p)",
|
|
|
|
obj, current));
|
2009-06-23 20:45:22 +00:00
|
|
|
if (!swap_reserve(ptoa(obj->size))) {
|
2013-02-20 12:03:20 +00:00
|
|
|
VM_OBJECT_WUNLOCK(obj);
|
2009-06-23 20:45:22 +00:00
|
|
|
vm_map_unlock(map);
|
|
|
|
return (KERN_RESOURCE_SHORTAGE);
|
|
|
|
}
|
|
|
|
|
2010-12-02 17:37:16 +00:00
|
|
|
crhold(cred);
|
|
|
|
obj->cred = cred;
|
2009-06-23 20:45:22 +00:00
|
|
|
obj->charge = ptoa(obj->size);
|
2013-02-20 12:03:20 +00:00
|
|
|
VM_OBJECT_WUNLOCK(obj);
|
2009-06-23 20:45:22 +00:00
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Go back and fix up protections. [Note that clipping is not
|
|
|
|
* necessary the second time.]
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
current = entry;
|
|
|
|
while ((current != &map->header) && (current->start < end)) {
|
|
|
|
old_prot = current->protection;
|
2009-10-27 10:15:58 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
if (set_max)
|
|
|
|
current->protection =
|
1995-01-09 16:06:02 +00:00
|
|
|
(current->max_protection = new_prot) &
|
|
|
|
old_prot;
|
1994-05-24 10:09:53 +00:00
|
|
|
else
|
|
|
|
current->protection = new_prot;
|
|
|
|
|
2014-05-11 17:41:29 +00:00
|
|
|
/*
|
|
|
|
* For user wired map entries, the normal lazy evaluation of
|
|
|
|
* write access upgrades through soft page faults is
|
|
|
|
* undesirable. Instead, immediately copy any pages that are
|
|
|
|
* copy-on-write and enable write access in the physical map.
|
|
|
|
*/
|
|
|
|
if ((current->eflags & MAP_ENTRY_USER_WIRED) != 0 &&
|
2009-10-27 10:15:58 +00:00
|
|
|
(current->protection & VM_PROT_WRITE) != 0 &&
|
2014-05-28 00:45:35 +00:00
|
|
|
(old_prot & VM_PROT_WRITE) == 0)
|
2009-10-27 10:15:58 +00:00
|
|
|
vm_fault_copy_entry(map, map, current, current, NULL);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
2009-11-02 17:45:39 +00:00
|
|
|
* When restricting access, update the physical map. Worry
|
|
|
|
* about copy-on-write here.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2009-11-02 17:45:39 +00:00
|
|
|
if ((old_prot & ~current->protection) != 0) {
|
1997-01-16 04:16:22 +00:00
|
|
|
#define MASK(entry) (((entry)->eflags & MAP_ENTRY_COW) ? ~VM_PROT_WRITE : \
|
1994-05-24 10:09:53 +00:00
|
|
|
VM_PROT_ALL)
|
1999-02-07 21:48:23 +00:00
|
|
|
pmap_protect(map->pmap, current->start,
|
|
|
|
current->end,
|
1999-06-12 23:10:38 +00:00
|
|
|
current->protection & MASK(current));
|
1994-05-24 10:09:53 +00:00
|
|
|
#undef MASK
|
|
|
|
}
|
1997-04-06 03:04:31 +00:00
|
|
|
vm_map_simplify_entry(map, current);
|
1994-05-24 10:09:53 +00:00
|
|
|
current = current->next;
|
|
|
|
}
|
|
|
|
vm_map_unlock(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
1996-05-19 07:36:50 +00:00
|
|
|
/*
|
|
|
|
* vm_map_madvise:
|
|
|
|
*
|
2003-11-03 16:14:45 +00:00
|
|
|
* This routine traverses a process's map handling the madvise
|
1999-08-13 17:45:34 +00:00
|
|
|
* system call. Advisories are classified as either those affecting
|
2003-11-03 16:14:45 +00:00
|
|
|
* the vm_map_entry structure, or those affecting the underlying
|
1999-08-13 17:45:34 +00:00
|
|
|
* objects.
|
1996-05-19 07:36:50 +00:00
|
|
|
*/
|
1999-09-21 05:00:48 +00:00
|
|
|
int
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_madvise(
|
|
|
|
vm_map_t map,
|
2003-11-03 16:14:45 +00:00
|
|
|
vm_offset_t start,
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_offset_t end,
|
|
|
|
int behav)
|
1996-05-19 07:36:50 +00:00
|
|
|
{
|
1999-08-13 17:45:34 +00:00
|
|
|
vm_map_entry_t current, entry;
|
1999-09-21 05:00:48 +00:00
|
|
|
int modify_map = 0;
|
1996-05-19 07:36:50 +00:00
|
|
|
|
1999-09-21 05:00:48 +00:00
|
|
|
/*
|
|
|
|
* Some madvise calls directly modify the vm_map_entry, in which case
|
2003-11-03 16:14:45 +00:00
|
|
|
* we need to use an exclusive lock on the map and we need to perform
|
1999-09-21 05:00:48 +00:00
|
|
|
* various clipping operations. Otherwise we only need a read-lock
|
|
|
|
* on the map.
|
|
|
|
*/
|
|
|
|
switch(behav) {
|
|
|
|
case MADV_NORMAL:
|
|
|
|
case MADV_SEQUENTIAL:
|
|
|
|
case MADV_RANDOM:
|
1999-12-12 03:19:33 +00:00
|
|
|
case MADV_NOSYNC:
|
|
|
|
case MADV_AUTOSYNC:
|
2000-02-28 04:10:35 +00:00
|
|
|
case MADV_NOCORE:
|
|
|
|
case MADV_CORE:
|
2013-11-20 09:03:48 +00:00
|
|
|
if (start == end)
|
|
|
|
return (KERN_SUCCESS);
|
1999-09-21 05:00:48 +00:00
|
|
|
modify_map = 1;
|
1999-08-13 17:45:34 +00:00
|
|
|
vm_map_lock(map);
|
1999-09-21 05:00:48 +00:00
|
|
|
break;
|
|
|
|
case MADV_WILLNEED:
|
|
|
|
case MADV_DONTNEED:
|
|
|
|
case MADV_FREE:
|
2013-11-20 09:03:48 +00:00
|
|
|
if (start == end)
|
|
|
|
return (KERN_SUCCESS);
|
1999-08-13 17:45:34 +00:00
|
|
|
vm_map_lock_read(map);
|
1999-09-21 05:00:48 +00:00
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return (KERN_INVALID_ARGUMENT);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Locate starting entry and clip if necessary.
|
|
|
|
*/
|
1996-05-19 07:36:50 +00:00
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
|
|
|
|
if (vm_map_lookup_entry(map, start, &entry)) {
|
1999-08-13 17:45:34 +00:00
|
|
|
if (modify_map)
|
|
|
|
vm_map_clip_start(map, entry, start);
|
1999-09-21 05:00:48 +00:00
|
|
|
} else {
|
1996-05-19 07:36:50 +00:00
|
|
|
entry = entry->next;
|
1999-09-21 05:00:48 +00:00
|
|
|
}
|
1996-05-19 07:36:50 +00:00
|
|
|
|
1999-08-13 17:45:34 +00:00
|
|
|
if (modify_map) {
|
|
|
|
/*
|
|
|
|
* madvise behaviors that are implemented in the vm_map_entry.
|
|
|
|
*
|
|
|
|
* We clip the vm_map_entry so that behavioral changes are
|
|
|
|
* limited to the specified address range.
|
|
|
|
*/
|
|
|
|
for (current = entry;
|
|
|
|
(current != &map->header) && (current->start < end);
|
1999-09-21 05:00:48 +00:00
|
|
|
current = current->next
|
|
|
|
) {
|
1999-08-13 17:45:34 +00:00
|
|
|
if (current->eflags & MAP_ENTRY_IS_SUB_MAP)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
vm_map_clip_end(map, current, end);
|
|
|
|
|
|
|
|
switch (behav) {
|
|
|
|
case MADV_NORMAL:
|
|
|
|
vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_NORMAL);
|
|
|
|
break;
|
|
|
|
case MADV_SEQUENTIAL:
|
|
|
|
vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_SEQUENTIAL);
|
|
|
|
break;
|
|
|
|
case MADV_RANDOM:
|
|
|
|
vm_map_entry_set_behavior(current, MAP_ENTRY_BEHAV_RANDOM);
|
|
|
|
break;
|
1999-12-12 03:19:33 +00:00
|
|
|
case MADV_NOSYNC:
|
|
|
|
current->eflags |= MAP_ENTRY_NOSYNC;
|
|
|
|
break;
|
|
|
|
case MADV_AUTOSYNC:
|
|
|
|
current->eflags &= ~MAP_ENTRY_NOSYNC;
|
|
|
|
break;
|
2000-02-28 04:10:35 +00:00
|
|
|
case MADV_NOCORE:
|
|
|
|
current->eflags |= MAP_ENTRY_NOCOREDUMP;
|
|
|
|
break;
|
|
|
|
case MADV_CORE:
|
|
|
|
current->eflags &= ~MAP_ENTRY_NOCOREDUMP;
|
|
|
|
break;
|
1999-08-13 17:45:34 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
vm_map_simplify_entry(map, current);
|
1996-05-19 07:36:50 +00:00
|
|
|
}
|
1999-08-13 17:45:34 +00:00
|
|
|
vm_map_unlock(map);
|
1999-09-21 05:00:48 +00:00
|
|
|
} else {
|
2012-03-19 18:47:34 +00:00
|
|
|
vm_pindex_t pstart, pend;
|
1999-08-13 17:45:34 +00:00
|
|
|
|
1999-09-21 05:00:48 +00:00
|
|
|
/*
|
|
|
|
* madvise behaviors that are implemented in the underlying
|
|
|
|
* vm_object.
|
|
|
|
*
|
|
|
|
* Since we don't clip the vm_map_entry, we have to clip
|
|
|
|
* the vm_object pindex and count.
|
|
|
|
*/
|
|
|
|
for (current = entry;
|
|
|
|
(current != &map->header) && (current->start < end);
|
|
|
|
current = current->next
|
|
|
|
) {
|
Significantly reduce the cost, i.e., run time, of calls to madvise(...,
MADV_DONTNEED) and madvise(..., MADV_FREE). Specifically, introduce a new
pmap function, pmap_advise(), that operates on a range of virtual addresses
within the specified pmap, allowing for a more efficient implementation of
MADV_DONTNEED and MADV_FREE. Previously, the implementation of
MADV_DONTNEED and MADV_FREE relied on per-page pmap operations, such as
pmap_clear_reference(). Intuitively, the problem with this implementation
is that the pmap-level locks are acquired and released and the page table
traversed repeatedly, once for each resident page in the range
that was specified to madvise(2). A more subtle flaw with the previous
implementation is that pmap_clear_reference() would clear the reference bit
on all mappings to the specified page, not just the mapping in the range
specified to madvise(2).
Since our malloc(3) makes heavy use of madvise(2), this change can have a
measurable impact. For example, the system time for completing a parallel
"buildworld" on a 6-core amd64 machine was reduced by about 1.5% to 2.0%.
Note: This change only contains pmap_advise() implementations for a subset
of our supported architectures. I will commit implementations for the
remaining architectures after further testing. For now, a stub function is
sufficient because of the advisory nature of pmap_advise().
Discussed with: jeff, jhb, kib
Tested by: pho (i386), marcel (ia64)
Sponsored by: EMC / Isilon Storage Division
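As a usage illustration (not part of the kernel source), the sketch below shows the userland madvise(2) calls whose MADV_FREE and MADV_DONTNEED paths reach pmap_advise() through vm_map_madvise(). The 64 MB mapping size is an arbitrary choice for the example.

#include <sys/mman.h>
#include <string.h>

int
main(void)
{
	size_t len = 64 * 1024 * 1024;	/* arbitrary anonymous region */
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (buf == MAP_FAILED)
		return (1);
	memset(buf, 1, len);		/* make the pages resident */

	/*
	 * A single call covers the whole range, so with pmap_advise()
	 * the kernel can walk the page table once rather than once per
	 * resident page.
	 */
	madvise(buf, len, MADV_FREE);
	madvise(buf, len, MADV_DONTNEED);

	munmap(buf, len);
	return (0);
}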
2013-08-29 15:49:05 +00:00
|
|
|
vm_offset_t useEnd, useStart;
|
2000-05-14 18:46:40 +00:00
|
|
|
|
1999-09-21 05:00:48 +00:00
|
|
|
if (current->eflags & MAP_ENTRY_IS_SUB_MAP)
|
|
|
|
continue;
|
1999-08-13 17:45:34 +00:00
|
|
|
|
2012-03-19 18:47:34 +00:00
|
|
|
pstart = OFF_TO_IDX(current->offset);
|
|
|
|
pend = pstart + atop(current->end - current->start);
|
2000-05-14 18:46:40 +00:00
|
|
|
useStart = current->start;
|
2013-08-29 15:49:05 +00:00
|
|
|
useEnd = current->end;
|
1997-01-22 01:34:48 +00:00
|
|
|
|
1999-09-21 05:00:48 +00:00
|
|
|
if (current->start < start) {
|
2012-03-19 18:47:34 +00:00
|
|
|
pstart += atop(start - current->start);
|
2000-05-14 18:46:40 +00:00
|
|
|
useStart = start;
|
1999-09-21 05:00:48 +00:00
|
|
|
}
|
2013-08-29 15:49:05 +00:00
|
|
|
if (current->end > end) {
|
2012-03-19 18:47:34 +00:00
|
|
|
pend -= atop(current->end - end);
|
2013-08-29 15:49:05 +00:00
|
|
|
useEnd = end;
|
|
|
|
}
|
1999-08-13 17:45:34 +00:00
|
|
|
|
2012-03-19 18:47:34 +00:00
|
|
|
if (pstart >= pend)
|
1999-09-21 05:00:48 +00:00
|
|
|
continue;
|
|
|
|
|
2013-08-29 15:49:05 +00:00
|
|
|
/*
|
|
|
|
* Perform the pmap_advise() before clearing
|
|
|
|
* PGA_REFERENCED in vm_page_advise(). Otherwise, a
|
|
|
|
* concurrent pmap operation, such as pmap_remove(),
|
|
|
|
* could clear a reference in the pmap and set
|
|
|
|
* PGA_REFERENCED on the page before the pmap_advise()
|
|
|
|
* had completed. Consequently, the page would appear
|
|
|
|
* referenced based upon an old reference that
|
|
|
|
* occurred before this pmap_advise() ran.
|
|
|
|
*/
|
|
|
|
if (behav == MADV_DONTNEED || behav == MADV_FREE)
|
|
|
|
pmap_advise(map->pmap, useStart, useEnd,
|
|
|
|
behav);
|
|
|
|
|
2012-03-19 18:47:34 +00:00
|
|
|
vm_object_madvise(current->object.vm_object, pstart,
|
|
|
|
pend, behav);
|
2014-09-23 18:54:23 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Pre-populate paging structures in the
|
|
|
|
* WILLNEED case. For wired entries, the
|
|
|
|
* paging structures are already populated.
|
|
|
|
*/
|
|
|
|
if (behav == MADV_WILLNEED &&
|
|
|
|
current->wired_count == 0) {
|
2003-11-03 16:14:45 +00:00
|
|
|
vm_map_pmap_enter(map,
|
2000-05-14 18:46:40 +00:00
|
|
|
useStart,
|
2004-04-24 03:46:44 +00:00
|
|
|
current->protection,
|
1999-09-21 05:00:48 +00:00
|
|
|
current->object.vm_object,
|
2012-03-19 18:47:34 +00:00
|
|
|
pstart,
|
|
|
|
ptoa(pend - pstart),
|
2001-10-31 03:06:33 +00:00
|
|
|
MAP_PREFAULT_MADVISE
|
1999-09-21 05:00:48 +00:00
|
|
|
);
|
1996-05-19 07:36:50 +00:00
|
|
|
}
|
|
|
|
}
|
1999-08-13 17:45:34 +00:00
|
|
|
vm_map_unlock_read(map);
|
1996-05-19 07:36:50 +00:00
|
|
|
}
|
2002-03-10 21:52:48 +00:00
|
|
|
return (0);
|
2003-11-03 16:14:45 +00:00
|
|
|
}
|
1996-05-19 07:36:50 +00:00
|
|
|
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_map_inherit:
|
|
|
|
*
|
|
|
|
* Sets the inheritance of the specified address
|
|
|
|
* range in the target map. Inheritance
|
|
|
|
* affects how the map will be shared with
|
2008-12-31 05:44:05 +00:00
|
|
|
* child maps at the time of vmspace_fork.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
int
|
1997-08-25 22:15:31 +00:00
|
|
|
vm_map_inherit(vm_map_t map, vm_offset_t start, vm_offset_t end,
|
|
|
|
vm_inherit_t new_inheritance)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry;
|
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t temp_entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
switch (new_inheritance) {
|
|
|
|
case VM_INHERIT_NONE:
|
|
|
|
case VM_INHERIT_COPY:
|
|
|
|
case VM_INHERIT_SHARE:
|
|
|
|
break;
|
|
|
|
default:
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_INVALID_ARGUMENT);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2013-11-20 09:03:48 +00:00
|
|
|
if (start == end)
|
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_lock(map);
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
if (vm_map_lookup_entry(map, start, &temp_entry)) {
|
|
|
|
entry = temp_entry;
|
|
|
|
vm_map_clip_start(map, entry, start);
|
1995-01-09 16:06:02 +00:00
|
|
|
} else
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = temp_entry->next;
|
|
|
|
while ((entry != &map->header) && (entry->start < end)) {
|
|
|
|
vm_map_clip_end(map, entry, end);
|
|
|
|
entry->inheritance = new_inheritance;
|
1999-03-15 06:24:52 +00:00
|
|
|
vm_map_simplify_entry(map, entry);
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = entry->next;
|
|
|
|
}
|
|
|
|
vm_map_unlock(map);
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
2002-06-07 18:34:23 +00:00
|
|
|
/*
|
|
|
|
* vm_map_unwire:
|
|
|
|
*
|
2002-06-08 07:32:38 +00:00
|
|
|
* Implements both kernel and user unwiring.
|
2002-06-07 18:34:23 +00:00
|
|
|
*/
|
|
|
|
int
|
|
|
|
vm_map_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end,
|
2003-08-11 07:14:08 +00:00
|
|
|
int flags)
|
2002-06-07 18:34:23 +00:00
|
|
|
{
|
|
|
|
vm_map_entry_t entry, first_entry, tmp_entry;
|
|
|
|
vm_offset_t saved_start;
|
|
|
|
unsigned int last_timestamp;
|
|
|
|
int rv;
|
2003-08-11 07:14:08 +00:00
|
|
|
boolean_t need_wakeup, result, user_unwire;
|
2002-06-07 18:34:23 +00:00
|
|
|
|
2013-11-20 09:03:48 +00:00
|
|
|
if (start == end)
|
|
|
|
return (KERN_SUCCESS);
|
2003-08-11 07:14:08 +00:00
|
|
|
user_unwire = (flags & VM_MAP_WIRE_USER) ? TRUE : FALSE;
|
2002-06-07 18:34:23 +00:00
|
|
|
vm_map_lock(map);
|
|
|
|
VM_MAP_RANGE_CHECK(map, start, end);
|
|
|
|
if (!vm_map_lookup_entry(map, start, &first_entry)) {
|
2003-08-11 07:14:08 +00:00
|
|
|
if (flags & VM_MAP_WIRE_HOLESOK)
|
2003-10-18 18:48:17 +00:00
|
|
|
first_entry = first_entry->next;
|
2003-08-11 07:14:08 +00:00
|
|
|
else {
|
|
|
|
vm_map_unlock(map);
|
|
|
|
return (KERN_INVALID_ADDRESS);
|
|
|
|
}
|
2002-06-07 18:34:23 +00:00
|
|
|
}
|
|
|
|
last_timestamp = map->timestamp;
|
|
|
|
entry = first_entry;
|
|
|
|
while (entry != &map->header && entry->start < end) {
|
|
|
|
if (entry->eflags & MAP_ENTRY_IN_TRANSITION) {
|
|
|
|
/*
|
|
|
|
* We have not yet clipped the entry.
|
|
|
|
*/
|
|
|
|
saved_start = (start >= entry->start) ? start :
|
|
|
|
entry->start;
|
|
|
|
entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP;
|
2007-11-07 21:56:58 +00:00
|
|
|
if (vm_map_unlock_and_wait(map, 0)) {
|
2002-06-07 18:34:23 +00:00
|
|
|
/*
|
|
|
|
* Allow interruption of user unwiring?
|
|
|
|
*/
|
|
|
|
}
|
|
|
|
vm_map_lock(map);
|
|
|
|
if (last_timestamp+1 != map->timestamp) {
|
|
|
|
/*
|
|
|
|
* Look again for the entry because the map was
|
|
|
|
* modified while it was unlocked.
|
|
|
|
* Specifically, the entry may have been
|
|
|
|
* clipped, merged, or deleted.
|
|
|
|
*/
|
|
|
|
if (!vm_map_lookup_entry(map, saved_start,
|
|
|
|
&tmp_entry)) {
|
2003-10-18 18:48:17 +00:00
|
|
|
if (flags & VM_MAP_WIRE_HOLESOK)
|
|
|
|
tmp_entry = tmp_entry->next;
|
|
|
|
else {
|
|
|
|
if (saved_start == start) {
|
|
|
|
/*
|
|
|
|
* first_entry has been deleted.
|
|
|
|
*/
|
|
|
|
vm_map_unlock(map);
|
|
|
|
return (KERN_INVALID_ADDRESS);
|
|
|
|
}
|
|
|
|
end = saved_start;
|
|
|
|
rv = KERN_INVALID_ADDRESS;
|
|
|
|
goto done;
|
2002-06-07 18:34:23 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
if (entry == first_entry)
|
|
|
|
first_entry = tmp_entry;
|
|
|
|
else
|
|
|
|
first_entry = NULL;
|
|
|
|
entry = tmp_entry;
|
|
|
|
}
|
|
|
|
last_timestamp = map->timestamp;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
vm_map_clip_start(map, entry, start);
|
|
|
|
vm_map_clip_end(map, entry, end);
|
|
|
|
/*
|
|
|
|
* Mark the entry in case the map lock is released. (See
|
|
|
|
* above.)
|
|
|
|
*/
|
2013-11-20 08:47:54 +00:00
|
|
|
KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 &&
|
|
|
|
entry->wiring_thread == NULL,
|
|
|
|
("owned map entry %p", entry));
|
2002-06-07 18:34:23 +00:00
|
|
|
entry->eflags |= MAP_ENTRY_IN_TRANSITION;
|
2013-07-11 05:55:08 +00:00
|
|
|
entry->wiring_thread = curthread;
|
2002-06-07 18:34:23 +00:00
|
|
|
/*
|
|
|
|
* Check the map for holes in the specified region.
|
2003-08-11 07:14:08 +00:00
|
|
|
* If VM_MAP_WIRE_HOLESOK was specified, skip this check.
|
2002-06-07 18:34:23 +00:00
|
|
|
*/
|
2003-08-11 07:14:08 +00:00
|
|
|
if (((flags & VM_MAP_WIRE_HOLESOK) == 0) &&
|
|
|
|
(entry->end < end && (entry->next == &map->header ||
|
|
|
|
entry->next->start > entry->end))) {
|
2002-06-07 18:34:23 +00:00
|
|
|
end = entry->end;
|
|
|
|
rv = KERN_INVALID_ADDRESS;
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
/*
|
2004-05-25 05:51:17 +00:00
|
|
|
* If system unwiring, require that the entry is system wired.
|
2002-06-07 18:34:23 +00:00
|
|
|
*/
|
2004-08-10 14:42:48 +00:00
|
|
|
if (!user_unwire &&
|
|
|
|
vm_map_entry_system_wired_count(entry) == 0) {
|
2002-06-07 18:34:23 +00:00
|
|
|
end = entry->end;
|
|
|
|
rv = KERN_INVALID_ARGUMENT;
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
entry = entry->next;
|
|
|
|
}
|
|
|
|
rv = KERN_SUCCESS;
|
|
|
|
done:
|
2002-06-08 07:32:38 +00:00
|
|
|
need_wakeup = FALSE;
|
2002-06-07 18:34:23 +00:00
|
|
|
if (first_entry == NULL) {
|
|
|
|
result = vm_map_lookup_entry(map, start, &first_entry);
|
2003-10-18 18:48:17 +00:00
|
|
|
if (!result && (flags & VM_MAP_WIRE_HOLESOK))
|
|
|
|
first_entry = first_entry->next;
|
|
|
|
else
|
|
|
|
KASSERT(result, ("vm_map_unwire: lookup failed"));
|
2002-06-07 18:34:23 +00:00
|
|
|
}
|
2013-07-11 05:55:08 +00:00
|
|
|
for (entry = first_entry; entry != &map->header && entry->start < end;
|
|
|
|
entry = entry->next) {
|
|
|
|
/*
|
|
|
|
* If VM_MAP_WIRE_HOLESOK was specified, an empty
|
|
|
|
* space in the unwired region could have been mapped
|
|
|
|
* while the map lock was dropped for draining
|
|
|
|
* MAP_ENTRY_IN_TRANSITION. Moreover, another thread
|
|
|
|
* could be simultaneously wiring this new mapping
|
|
|
|
* entry. Detect these cases and skip any entries
|
|
|
|
* marked as in transition by us.
|
|
|
|
*/
|
|
|
|
if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 ||
|
|
|
|
entry->wiring_thread != curthread) {
|
|
|
|
KASSERT((flags & VM_MAP_WIRE_HOLESOK) != 0,
|
|
|
|
("vm_map_unwire: !HOLESOK and new/changed entry"));
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2004-05-25 05:51:17 +00:00
|
|
|
if (rv == KERN_SUCCESS && (!user_unwire ||
|
|
|
|
(entry->eflags & MAP_ENTRY_USER_WIRED))) {
|
2002-06-08 19:00:40 +00:00
|
|
|
if (user_unwire)
|
|
|
|
entry->eflags &= ~MAP_ENTRY_USER_WIRED;
|
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap. If, for example, the
application has performed an mprotect(..., PROT_NONE) on any part of the
wired region, then those pages will no longer be mapped by the pmap.
So, using the pmap to look up the wired pages in order to unwire them
doesn't always work, and when it doesn't work wired pages are leaked.
To avoid the leak, introduce and use a new function vm_object_unwire()
that locates the wired pages by traversing the object and its backing
objects.
At the same time, switch from using pmap_change_wiring() to the recently
introduced function pmap_unwire() for unwiring the region's mappings.
pmap_unwire() is faster, because it operates on a range of virtual addresses
rather than a single virtual page at a time. Moreover, by operating on
a range, it is superpage friendly. It doesn't waste time performing
unnecessary demotions.
Reported by: markj
Reviewed by: kib
Tested by: pho, jmg (arm)
Sponsored by: EMC / Isilon Storage Division
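To make the failure mode described above concrete, here is a small userland sketch (illustrative only, not part of the kernel source) of the sequence in question: wire a region with mlock(2), make part of it PROT_NONE, then unwire it with munlock(2). With the old pmap-based lookup, the PROT_NONE pages would no longer be found and would remain wired. The sizes are arbitrary, and mlock(2) is assumed to be permitted by RLIMIT_MEMLOCK or privilege.

#include <sys/mman.h>
#include <unistd.h>

int
main(void)
{
	size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
	size_t len = 16 * pgsz;		/* arbitrary wired region */
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (buf == MAP_FAILED)
		return (1);
	if (mlock(buf, len) != 0)	/* wire the whole region */
		return (1);

	/* Part of the wired region is no longer mapped by the pmap. */
	mprotect(buf + 4 * pgsz, 4 * pgsz, PROT_NONE);

	munlock(buf, len);		/* unwire: no pages may be leaked */
	munmap(buf, len);
	return (0);
}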
2014-07-26 18:10:18 +00:00
|
|
|
if (entry->wired_count == 1)
|
|
|
|
vm_map_entry_unwire(map, entry);
|
|
|
|
else
|
|
|
|
entry->wired_count--;
|
2002-06-08 19:00:40 +00:00
|
|
|
}
|
2013-07-11 05:55:08 +00:00
|
|
|
KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0,
|
2013-11-20 08:47:54 +00:00
|
|
|
("vm_map_unwire: in-transition flag missing %p", entry));
|
|
|
|
KASSERT(entry->wiring_thread == curthread,
|
|
|
|
("vm_map_unwire: alien wire %p", entry));
|
2002-06-07 18:34:23 +00:00
|
|
|
entry->eflags &= ~MAP_ENTRY_IN_TRANSITION;
|
2013-07-11 05:55:08 +00:00
|
|
|
entry->wiring_thread = NULL;
|
2002-06-07 18:34:23 +00:00
|
|
|
if (entry->eflags & MAP_ENTRY_NEEDS_WAKEUP) {
|
|
|
|
entry->eflags &= ~MAP_ENTRY_NEEDS_WAKEUP;
|
|
|
|
need_wakeup = TRUE;
|
|
|
|
}
|
|
|
|
vm_map_simplify_entry(map, entry);
|
|
|
|
}
|
|
|
|
vm_map_unlock(map);
|
|
|
|
if (need_wakeup)
|
|
|
|
vm_map_wakeup(map);
|
|
|
|
return (rv);
|
|
|
|
}
|
|
|
|
|
2014-08-02 16:10:24 +00:00
|
|
|
/*
|
|
|
|
* vm_map_wire_entry_failure:
|
|
|
|
*
|
|
|
|
* Handle a wiring failure on the given entry.
|
|
|
|
*
|
|
|
|
* The map should be locked.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
vm_map_wire_entry_failure(vm_map_t map, vm_map_entry_t entry,
|
|
|
|
vm_offset_t failed_addr)
|
|
|
|
{
|
|
|
|
|
|
|
|
VM_MAP_ASSERT_LOCKED(map);
|
|
|
|
KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0 &&
|
|
|
|
entry->wired_count == 1,
|
|
|
|
("vm_map_wire_entry_failure: entry %p isn't being wired", entry));
|
|
|
|
KASSERT(failed_addr < entry->end,
|
|
|
|
("vm_map_wire_entry_failure: entry %p was fully wired", entry));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If any pages at the start of this entry were successfully wired,
|
|
|
|
* then unwire them.
|
|
|
|
*/
|
|
|
|
if (failed_addr > entry->start) {
|
|
|
|
pmap_unwire(map->pmap, entry->start, failed_addr);
|
|
|
|
vm_object_unwire(entry->object.vm_object, entry->offset,
|
|
|
|
failed_addr - entry->start, PQ_ACTIVE);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Assign an out-of-range value to represent the failure to wire this
|
|
|
|
* entry.
|
|
|
|
*/
|
|
|
|
entry->wired_count = -1;
|
|
|
|
}
|
|
|
|
|
2002-06-08 07:32:38 +00:00
|
|
|

/*
 *	vm_map_wire:
 *
 *	Implements both kernel and user wiring.
 */
int
vm_map_wire(vm_map_t map, vm_offset_t start, vm_offset_t end,
    int flags)
{
	vm_map_entry_t entry, first_entry, tmp_entry;
	vm_offset_t faddr, saved_end, saved_start;
	unsigned int last_timestamp;
	int rv;
	boolean_t need_wakeup, result, user_wire;
	vm_prot_t prot;

	if (start == end)
		return (KERN_SUCCESS);
	prot = 0;
	if (flags & VM_MAP_WIRE_WRITE)
		prot |= VM_PROT_WRITE;
	user_wire = (flags & VM_MAP_WIRE_USER) ? TRUE : FALSE;
	vm_map_lock(map);
	VM_MAP_RANGE_CHECK(map, start, end);
	if (!vm_map_lookup_entry(map, start, &first_entry)) {
		if (flags & VM_MAP_WIRE_HOLESOK)
			first_entry = first_entry->next;
		else {
			vm_map_unlock(map);
			return (KERN_INVALID_ADDRESS);
		}
	}
	last_timestamp = map->timestamp;
	entry = first_entry;
	while (entry != &map->header && entry->start < end) {
		if (entry->eflags & MAP_ENTRY_IN_TRANSITION) {
			/*
			 * We have not yet clipped the entry.
			 */
			saved_start = (start >= entry->start) ? start :
			    entry->start;
			entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP;
			if (vm_map_unlock_and_wait(map, 0)) {
				/*
				 * Allow interruption of user wiring?
				 */
			}
			vm_map_lock(map);
			if (last_timestamp + 1 != map->timestamp) {
				/*
				 * Look again for the entry because the map was
				 * modified while it was unlocked.
				 * Specifically, the entry may have been
				 * clipped, merged, or deleted.
				 */
				if (!vm_map_lookup_entry(map, saved_start,
				    &tmp_entry)) {
					if (flags & VM_MAP_WIRE_HOLESOK)
						tmp_entry = tmp_entry->next;
					else {
						if (saved_start == start) {
							/*
							 * first_entry has been deleted.
							 */
							vm_map_unlock(map);
							return (KERN_INVALID_ADDRESS);
						}
						end = saved_start;
						rv = KERN_INVALID_ADDRESS;
						goto done;
					}
				}
				if (entry == first_entry)
					first_entry = tmp_entry;
				else
					first_entry = NULL;
				entry = tmp_entry;
			}
			last_timestamp = map->timestamp;
			continue;
		}
		vm_map_clip_start(map, entry, start);
		vm_map_clip_end(map, entry, end);
		/*
		 * Mark the entry in case the map lock is released.  (See
		 * above.)
		 */
		KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 &&
		    entry->wiring_thread == NULL,
		    ("owned map entry %p", entry));
		entry->eflags |= MAP_ENTRY_IN_TRANSITION;
		entry->wiring_thread = curthread;
		if ((entry->protection & (VM_PROT_READ | VM_PROT_EXECUTE)) == 0
		    || (entry->protection & prot) != prot) {
			entry->eflags |= MAP_ENTRY_WIRE_SKIPPED;
			if ((flags & VM_MAP_WIRE_HOLESOK) == 0) {
				end = entry->end;
				rv = KERN_INVALID_ADDRESS;
				goto done;
			}
			goto next_entry;
		}
		if (entry->wired_count == 0) {
			entry->wired_count++;
			saved_start = entry->start;
			saved_end = entry->end;

			/*
			 * Release the map lock, relying on the in-transition
			 * mark.  Mark the map busy for fork.
			 */
			vm_map_busy(map);
			vm_map_unlock(map);

			faddr = saved_start;
			do {
				/*
				 * Simulate a fault to get the page and enter
				 * it into the physical map.
				 */
				if ((rv = vm_fault(map, faddr, VM_PROT_NONE,
				    VM_FAULT_WIRE)) != KERN_SUCCESS)
					break;
			} while ((faddr += PAGE_SIZE) < saved_end);
			vm_map_lock(map);
			vm_map_unbusy(map);
			if (last_timestamp + 1 != map->timestamp) {
				/*
				 * Look again for the entry because the map was
				 * modified while it was unlocked.  The entry
				 * may have been clipped, but NOT merged or
				 * deleted.
				 */
				result = vm_map_lookup_entry(map, saved_start,
				    &tmp_entry);
				KASSERT(result, ("vm_map_wire: lookup failed"));
				if (entry == first_entry)
					first_entry = tmp_entry;
				else
					first_entry = NULL;
				entry = tmp_entry;
				while (entry->end < saved_end) {
					/*
					 * In case of failure, handle entries
					 * that were not fully wired here;
					 * fully wired entries are handled
					 * later.
					 */
					if (rv != KERN_SUCCESS &&
					    faddr < entry->end)
						vm_map_wire_entry_failure(map,
						    entry, faddr);
					entry = entry->next;
				}
			}
			last_timestamp = map->timestamp;
			if (rv != KERN_SUCCESS) {
				vm_map_wire_entry_failure(map, entry, faddr);
				end = entry->end;
				goto done;
			}
		} else if (!user_wire ||
		    (entry->eflags & MAP_ENTRY_USER_WIRED) == 0) {
			entry->wired_count++;
		}
		/*
		 * Check the map for holes in the specified region.
		 * If VM_MAP_WIRE_HOLESOK was specified, skip this check.
		 */
	next_entry:
		if (((flags & VM_MAP_WIRE_HOLESOK) == 0) &&
		    (entry->end < end && (entry->next == &map->header ||
		    entry->next->start > entry->end))) {
			end = entry->end;
			rv = KERN_INVALID_ADDRESS;
			goto done;
		}
		entry = entry->next;
	}
	rv = KERN_SUCCESS;
done:
	need_wakeup = FALSE;
	if (first_entry == NULL) {
		result = vm_map_lookup_entry(map, start, &first_entry);
		if (!result && (flags & VM_MAP_WIRE_HOLESOK))
			first_entry = first_entry->next;
		else
			KASSERT(result, ("vm_map_wire: lookup failed"));
	}
	for (entry = first_entry; entry != &map->header && entry->start < end;
	    entry = entry->next) {
		if ((entry->eflags & MAP_ENTRY_WIRE_SKIPPED) != 0)
			goto next_entry_done;

		/*
		 * If VM_MAP_WIRE_HOLESOK was specified, an empty
		 * space in the unwired region could have been mapped
		 * while the map lock was dropped for faulting in the
		 * pages or draining MAP_ENTRY_IN_TRANSITION.
		 * Moreover, another thread could be simultaneously
		 * wiring this new mapping entry.  Detect these cases
		 * and skip any entries marked as in transition by us.
		 */
		if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 ||
		    entry->wiring_thread != curthread) {
			KASSERT((flags & VM_MAP_WIRE_HOLESOK) != 0,
			    ("vm_map_wire: !HOLESOK and new/changed entry"));
			continue;
		}

		if (rv == KERN_SUCCESS) {
			if (user_wire)
				entry->eflags |= MAP_ENTRY_USER_WIRED;
		} else if (entry->wired_count == -1) {
			/*
			 * Wiring failed on this entry.  Thus, unwiring is
			 * unnecessary.
			 */
			entry->wired_count = 0;
		} else if (!user_wire ||
		    (entry->eflags & MAP_ENTRY_USER_WIRED) == 0) {
			/*
			 * Undo the wiring.  Wiring succeeded on this entry
			 * but failed on a later entry.
			 */
			if (entry->wired_count == 1)
				vm_map_entry_unwire(map, entry);
			else
				entry->wired_count--;
		}
	next_entry_done:
		KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0,
		    ("vm_map_wire: in-transition flag missing %p", entry));
		KASSERT(entry->wiring_thread == curthread,
		    ("vm_map_wire: alien wire %p", entry));
		entry->eflags &= ~(MAP_ENTRY_IN_TRANSITION |
		    MAP_ENTRY_WIRE_SKIPPED);
		entry->wiring_thread = NULL;
		if (entry->eflags & MAP_ENTRY_NEEDS_WAKEUP) {
			entry->eflags &= ~MAP_ENTRY_NEEDS_WAKEUP;
			need_wakeup = TRUE;
		}
		vm_map_simplify_entry(map, entry);
	}
	vm_map_unlock(map);
	if (need_wakeup)
		vm_map_wakeup(map);
	return (rv);
}
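
/*
 * Usage sketch for vm_map_wire() (illustrative; the real call sites live
 * outside this file, and VM_MAP_WIRE_NOHOLES is assumed here to be the
 * counterpart of VM_MAP_WIRE_HOLESOK declared in vm_map.h):
 *
 *	error = vm_map_wire(map, trunc_page(addr), round_page(addr + len),
 *	    VM_MAP_WIRE_USER | VM_MAP_WIRE_NOHOLES);
 *
 * would wire a page-aligned user range in the mlock(2) style, failing with
 * KERN_INVALID_ADDRESS if the range contains a hole.  A kernel-initiated
 * wiring would omit VM_MAP_WIRE_USER, and adding VM_MAP_WIRE_WRITE demands
 * VM_PROT_WRITE on every entry in the range, as checked above.
 */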

/*
 *	vm_map_sync
 *
 *	Push any dirty cached pages in the address range to their pager.
 *	If syncio is TRUE, dirty pages are written synchronously.
 *	If invalidate is TRUE, any cached pages are freed as well.
 *
 *	If the size of the region from start to end is zero, we are
 *	supposed to flush all modified pages within the region containing
 *	start.  Unfortunately, a region can be split or coalesced with
 *	neighboring regions, making it difficult to determine what the
 *	original region was.  Therefore, we approximate this requirement by
 *	flushing the current region containing start.
 *
 *	Returns an error if any part of the specified range is not mapped.
 */
int
vm_map_sync(
	vm_map_t map,
	vm_offset_t start,
	vm_offset_t end,
	boolean_t syncio,
	boolean_t invalidate)
{
	vm_map_entry_t current;
	vm_map_entry_t entry;
	vm_size_t size;
	vm_object_t object;
	vm_ooffset_t offset;
	unsigned int last_timestamp;
	boolean_t failed;

	vm_map_lock_read(map);
	VM_MAP_RANGE_CHECK(map, start, end);
	if (!vm_map_lookup_entry(map, start, &entry)) {
		vm_map_unlock_read(map);
		return (KERN_INVALID_ADDRESS);
	} else if (start == end) {
		start = entry->start;
		end = entry->end;
	}
	/*
	 * Make a first pass to check for user-wired memory and holes.
	 */
	for (current = entry; current != &map->header && current->start < end;
	    current = current->next) {
		if (invalidate && (current->eflags & MAP_ENTRY_USER_WIRED)) {
			vm_map_unlock_read(map);
			return (KERN_INVALID_ARGUMENT);
		}
		if (end > current->end &&
		    (current->next == &map->header ||
		    current->end != current->next->start)) {
			vm_map_unlock_read(map);
			return (KERN_INVALID_ADDRESS);
		}
	}

	if (invalidate)
		pmap_remove(map->pmap, start, end);
	failed = FALSE;

	/*
	 * Make a second pass, cleaning/uncaching pages from the indicated
	 * objects as we go.
	 */
	for (current = entry; current != &map->header && current->start < end;) {
		offset = current->offset + (start - current->start);
		size = (end <= current->end ? end : current->end) - start;
		if (current->eflags & MAP_ENTRY_IS_SUB_MAP) {
			vm_map_t smap;
			vm_map_entry_t tentry;
			vm_size_t tsize;

			smap = current->object.sub_map;
			vm_map_lock_read(smap);
			(void) vm_map_lookup_entry(smap, offset, &tentry);
			tsize = tentry->end - offset;
			if (tsize < size)
				size = tsize;
			object = tentry->object.vm_object;
			offset = tentry->offset + (offset - tentry->start);
			vm_map_unlock_read(smap);
		} else {
			object = current->object.vm_object;
		}
		vm_object_reference(object);
		last_timestamp = map->timestamp;
		vm_map_unlock_read(map);
		if (!vm_object_sync(object, offset, size, syncio, invalidate))
			failed = TRUE;
		start += size;
		vm_object_deallocate(object);
		vm_map_lock_read(map);
		if (last_timestamp == map->timestamp ||
		    !vm_map_lookup_entry(map, start, &current))
			current = current->next;
	}

	vm_map_unlock_read(map);
	return (failed ? KERN_FAILURE : KERN_SUCCESS);
}
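
/*
 * Usage sketch for vm_map_sync() (illustrative; the msync(2) plumbing that
 * normally reaches this function is assumed, not shown here):
 *
 *	rv = vm_map_sync(map, start, end, TRUE, FALSE);
 *
 * synchronously flushes the dirty pages backing [start, end) without
 * freeing the cached pages, while passing invalidate == TRUE additionally
 * removes the range from the pmap and frees the cached pages.  Passing
 * start == end flushes the entire region containing start, per the comment
 * above.
 */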

/*
 *	vm_map_entry_unwire:	[ internal use only ]
 *
 *	Make the region specified by this entry pageable.
 *
 *	The map in question should be locked.
 *	[This is the reason for this routine's existence.]
 */
static void
vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry)
{

	VM_MAP_ASSERT_LOCKED(map);
	KASSERT(entry->wired_count > 0,
	    ("vm_map_entry_unwire: entry %p isn't wired", entry));
	pmap_unwire(map->pmap, entry->start, entry->end);
	vm_object_unwire(entry->object.vm_object, entry->offset, entry->end -
	    entry->start, PQ_ACTIVE);
	entry->wired_count = 0;
}
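
/*
 * Note on the two-step unwiring in vm_map_entry_unwire() above (and in
 * vm_map_wire_entry_failure()): pmap_unwire() drops the wired mappings for
 * the whole virtual range at once, which is cheaper than per-page calls and
 * is superpage friendly, while vm_object_unwire() locates the wired pages
 * by traversing the entry's object and its backing objects.  The object
 * traversal is required because a page can remain wired without being
 * mapped by the pmap, for example after mprotect(..., PROT_NONE) on part
 * of a wired region, and relying on the pmap alone would leak such pages.
 */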

static void
vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t system_map)
{

	if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0)
		vm_object_deallocate(entry->object.vm_object);
	uma_zfree(system_map ? kmapentzone : mapentzone, entry);
}

/*
 *	vm_map_entry_delete:	[ internal use only ]
 *
 *	Deallocate the given entry from the target map.
 */
static void
vm_map_entry_delete(vm_map_t map, vm_map_entry_t entry)
{
	vm_object_t object;
	vm_pindex_t offidxstart, offidxend, count, size1;
	vm_ooffset_t size;

	vm_map_entry_unlink(map, entry);
	object = entry->object.vm_object;
	size = entry->end - entry->start;
	map->size -= size;

	if (entry->cred != NULL) {
		swap_release_by_cred(size, entry->cred);
		crfree(entry->cred);
	}

	if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0 &&
	    (object != NULL)) {
		KASSERT(entry->cred == NULL || object->cred == NULL ||
		    (entry->eflags & MAP_ENTRY_NEEDS_COPY),
		    ("OVERCOMMIT vm_map_entry_delete: both cred %p", entry));
		count = OFF_TO_IDX(size);
		offidxstart = OFF_TO_IDX(entry->offset);
		offidxend = offidxstart + count;
		VM_OBJECT_WLOCK(object);
		if (object->ref_count != 1 &&
		    ((object->flags & (OBJ_NOSPLIT|OBJ_ONEMAPPING)) == OBJ_ONEMAPPING ||
		    object == kernel_object || object == kmem_object)) {
			vm_object_collapse(object);

			/*
			 * The option OBJPR_NOTMAPPED can be passed here
			 * because vm_map_delete() already performed
			 * pmap_remove() on the only mapping to this range
			 * of pages.
			 */
			vm_object_page_remove(object, offidxstart, offidxend,
			    OBJPR_NOTMAPPED);
			if (object->type == OBJT_SWAP)
				swap_pager_freespace(object, offidxstart, count);
			if (offidxend >= object->size &&
			    offidxstart < object->size) {
				size1 = object->size;
				object->size = offidxstart;
				if (object->cred != NULL) {
					size1 -= object->size;
					KASSERT(object->charge >= ptoa(size1),
					    ("vm_map_entry_delete: object->charge < 0"));
					swap_release_by_cred(ptoa(size1), object->cred);
					object->charge -= ptoa(size1);
				}
			}
		}
		VM_OBJECT_WUNLOCK(object);
	} else
		entry->object.vm_object = NULL;
	if (map->system_map)
		vm_map_entry_deallocate(entry, TRUE);
	else {
		entry->next = curthread->td_map_def_user;
		curthread->td_map_def_user = entry;
	}
}
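
/*
 * For non-system maps, vm_map_entry_delete() above does not free the entry
 * immediately; it links the entry onto curthread->td_map_def_user,
 * presumably so that the final vm_map_entry_deallocate() call, and the
 * object dereference it performs, can happen after the map lock has been
 * dropped.  A sketch of draining that per-thread list (the loop shape is
 * illustrative; this file uses its own dedicated routine for the real
 * work) would be:
 *
 *	entry = curthread->td_map_def_user;
 *	curthread->td_map_def_user = NULL;
 *	while (entry != NULL) {
 *		next = entry->next;
 *		vm_map_entry_deallocate(entry, FALSE);
 *		entry = next;
 *	}
 */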
/*
|
|
|
|
* vm_map_delete: [ internal use only ]
|
|
|
|
*
|
|
|
|
* Deallocates the given address range from the target
|
|
|
|
* map.
|
|
|
|
*/
|
|
|
|
int
|
2009-02-24 20:57:43 +00:00
|
|
|
vm_map_delete(vm_map_t map, vm_offset_t start, vm_offset_t end)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry;
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t first_entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2009-02-24 20:43:29 +00:00
|
|
|
VM_MAP_ASSERT_LOCKED(map);
|
2013-11-20 09:03:48 +00:00
|
|
|
if (start == end)
|
|
|
|
return (KERN_SUCCESS);
|
2009-02-24 20:43:29 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Find the start of the region, and clip it
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1999-04-04 07:11:02 +00:00
|
|
|
if (!vm_map_lookup_entry(map, start, &first_entry))
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = first_entry->next;
|
1999-04-04 07:11:02 +00:00
|
|
|
else {
|
1994-05-24 10:09:53 +00:00
|
|
|
entry = first_entry;
|
|
|
|
vm_map_clip_start(map, entry, start);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Step through all entries in this region
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
while ((entry != &map->header) && (entry->start < end)) {
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
vm_map_entry_t next;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2002-06-11 05:24:22 +00:00
|
|
|
/*
|
|
|
|
* Wait for wiring or unwiring of an entry to complete.
|
2004-08-16 03:11:09 +00:00
|
|
|
* Also wait for any system wirings to disappear on
|
|
|
|
* user maps.
|
2002-06-11 05:24:22 +00:00
|
|
|
*/
|
2004-08-16 03:11:09 +00:00
|
|
|
if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) != 0 ||
|
|
|
|
(vm_map_pmap(map) != kernel_pmap &&
|
|
|
|
vm_map_entry_system_wired_count(entry) != 0)) {
|
2002-06-11 05:24:22 +00:00
|
|
|
unsigned int last_timestamp;
|
|
|
|
vm_offset_t saved_start;
|
|
|
|
vm_map_entry_t tmp_entry;
|
|
|
|
|
|
|
|
saved_start = entry->start;
|
|
|
|
entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP;
|
|
|
|
last_timestamp = map->timestamp;
|
2007-11-07 21:56:58 +00:00
|
|
|
(void) vm_map_unlock_and_wait(map, 0);
|
2002-06-11 05:24:22 +00:00
|
|
|
vm_map_lock(map);
|
|
|
|
if (last_timestamp + 1 != map->timestamp) {
|
|
|
|
/*
|
|
|
|
* Look again for the entry because the map was
|
|
|
|
* modified while it was unlocked.
|
|
|
|
* Specifically, the entry may have been
|
|
|
|
* clipped, merged, or deleted.
|
|
|
|
*/
|
|
|
|
if (!vm_map_lookup_entry(map, saved_start,
|
|
|
|
&tmp_entry))
|
|
|
|
entry = tmp_entry->next;
|
|
|
|
else {
|
|
|
|
entry = tmp_entry;
|
|
|
|
vm_map_clip_start(map, entry,
|
|
|
|
saved_start);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
continue;
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
vm_map_clip_end(map, entry, end);
|
|
|
|
|
1998-04-29 04:28:22 +00:00
|
|
|
next = entry->next;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
		/*
		 * Unwire before removing addresses from the pmap; otherwise,
		 * unwiring will put the entries back in the pmap.
		 */
		if (entry->wired_count != 0) {
			vm_map_entry_unwire(map, entry);
		}

		pmap_remove(map->pmap, entry->start, entry->end);

		/*
		 * Delete the entry only after removing all pmap
		 * entries pointing to its pages.  (Otherwise, its
		 * page frames may be reallocated, and any modify bits
		 * will be set in the wrong object!)
		 */
		vm_map_entry_delete(map, entry);
		entry = next;
	}
	return (KERN_SUCCESS);
}

/*
 *	vm_map_remove:
 *
 *	Remove the given address range from the target map.
 *	This is the exported form of vm_map_delete.
 */
int
vm_map_remove(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	int result;

	vm_map_lock(map);
	VM_MAP_RANGE_CHECK(map, start, end);
	result = vm_map_delete(map, start, end);
	vm_map_unlock(map);
	return (result);
}
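
/*
 * Editor's illustrative sketch (kept under #if 0, not compiled): a minimal
 * caller of vm_map_remove().  Because vm_map_remove() takes and releases the
 * map lock around vm_map_delete() itself, a caller only supplies the map and
 * the range.  The helper name and the EINVAL error mapping below are
 * hypothetical and not part of this file.
 */
#if 0
static int
example_unmap_range(vm_map_t map, vm_offset_t addr, vm_size_t len)
{

	/* vm_map_remove() locks the map, deletes the range, then unlocks. */
	if (vm_map_remove(map, addr, addr + len) != KERN_SUCCESS)
		return (EINVAL);	/* hypothetical error mapping */
	return (0);
}
#endif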

/*
 *	vm_map_check_protection:
 *
 *	Assert that the target map allows the specified privilege on the
 *	entire address region given.  The entire region must be allocated.
 *
 *	WARNING!  This code does not and should not check whether the
 *	contents of the region are accessible.  For example, a smaller file
 *	might be mapped into a larger address space.
 *
 *	NOTE!  This code is also called by munmap().
 *
 *	The map must be locked.  A read lock is sufficient.
 */
boolean_t
vm_map_check_protection(vm_map_t map, vm_offset_t start, vm_offset_t end,
    vm_prot_t protection)
{
	vm_map_entry_t entry;
	vm_map_entry_t tmp_entry;

	if (!vm_map_lookup_entry(map, start, &tmp_entry))
		return (FALSE);
	entry = tmp_entry;

	while (start < end) {
		if (entry == &map->header)
			return (FALSE);
		/*
		 * No holes allowed!
		 */
		if (start < entry->start)
			return (FALSE);
		/*
		 * Check protection associated with entry.
		 */
		if ((entry->protection & protection) != protection)
			return (FALSE);
		/* go to next entry */
		start = entry->end;
		entry = entry->next;
	}
	return (TRUE);
}
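
/*
 * Editor's illustrative sketch (kept under #if 0, not compiled): verifying
 * that an entire range is mapped with at least read permission before using
 * it.  As the header above notes, the map must be locked and a read lock is
 * sufficient; vm_map_lock_read()/vm_map_unlock_read() are assumed to be the
 * usual read-lock helpers, and the wrapper name is hypothetical.
 */
#if 0
static boolean_t
example_range_is_readable(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	boolean_t ok;

	vm_map_lock_read(map);
	ok = vm_map_check_protection(map, start, end, VM_PROT_READ);
	vm_map_unlock_read(map);
	return (ok);
}
#endif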

/*
 *	vm_map_copy_entry:
 *
 *	Copies the contents of the source entry to the destination
 *	entry.  The entries *must* be aligned properly.
 */
static void
vm_map_copy_entry(
	vm_map_t src_map,
	vm_map_t dst_map,
	vm_map_entry_t src_entry,
	vm_map_entry_t dst_entry,
	vm_ooffset_t *fork_charge)
{
	vm_object_t src_object;
	vm_map_entry_t fake_entry;
	vm_offset_t size;
	struct ucred *cred;
	int charged;

	VM_MAP_ASSERT_LOCKED(dst_map);

	if ((dst_entry->eflags|src_entry->eflags) & MAP_ENTRY_IS_SUB_MAP)
		return;

	if (src_entry->wired_count == 0 ||
	    (src_entry->protection & VM_PROT_WRITE) == 0) {
		/*
		 * If the source entry is marked needs_copy, it is already
		 * write-protected.
		 */
		if ((src_entry->eflags & MAP_ENTRY_NEEDS_COPY) == 0 &&
		    (src_entry->protection & VM_PROT_WRITE) != 0) {
			pmap_protect(src_map->pmap,
			    src_entry->start,
			    src_entry->end,
			    src_entry->protection & ~VM_PROT_WRITE);
		}

		/*
		 * Make a copy of the object.
		 */
		size = src_entry->end - src_entry->start;
		if ((src_object = src_entry->object.vm_object) != NULL) {
			VM_OBJECT_WLOCK(src_object);
			charged = ENTRY_CHARGED(src_entry);
			if ((src_object->handle == NULL) &&
			    (src_object->type == OBJT_DEFAULT ||
			    src_object->type == OBJT_SWAP)) {
				vm_object_collapse(src_object);
				if ((src_object->flags & (OBJ_NOSPLIT|OBJ_ONEMAPPING)) == OBJ_ONEMAPPING) {
					vm_object_split(src_entry);
					src_object = src_entry->object.vm_object;
				}
			}
			vm_object_reference_locked(src_object);
			vm_object_clear_flag(src_object, OBJ_ONEMAPPING);
			if (src_entry->cred != NULL &&
			    !(src_entry->eflags & MAP_ENTRY_NEEDS_COPY)) {
				KASSERT(src_object->cred == NULL,
				    ("OVERCOMMIT: vm_map_copy_entry: cred %p",
				    src_object));
				src_object->cred = src_entry->cred;
				src_object->charge = size;
			}
			VM_OBJECT_WUNLOCK(src_object);
			dst_entry->object.vm_object = src_object;
			if (charged) {
				cred = curthread->td_ucred;
				crhold(cred);
				dst_entry->cred = cred;
				*fork_charge += size;
				if (!(src_entry->eflags &
				    MAP_ENTRY_NEEDS_COPY)) {
					crhold(cred);
					src_entry->cred = cred;
					*fork_charge += size;
				}
			}
			src_entry->eflags |= (MAP_ENTRY_COW|MAP_ENTRY_NEEDS_COPY);
			dst_entry->eflags |= (MAP_ENTRY_COW|MAP_ENTRY_NEEDS_COPY);
			dst_entry->offset = src_entry->offset;
			if (src_entry->eflags & MAP_ENTRY_VN_WRITECNT) {
				/*
				 * MAP_ENTRY_VN_WRITECNT cannot
				 * indicate write reference from
				 * src_entry, since the entry is
				 * marked as needs copy.  Allocate a
				 * fake entry that is used to
				 * decrement object->un_pager.vnp.writecount
				 * at the appropriate time.  Attach
				 * fake_entry to the deferred list.
				 */
				fake_entry = vm_map_entry_create(dst_map);
				fake_entry->eflags = MAP_ENTRY_VN_WRITECNT;
				src_entry->eflags &= ~MAP_ENTRY_VN_WRITECNT;
				vm_object_reference(src_object);
				fake_entry->object.vm_object = src_object;
				fake_entry->start = src_entry->start;
				fake_entry->end = src_entry->end;
				fake_entry->next = curthread->td_map_def_user;
				curthread->td_map_def_user = fake_entry;
			}
		} else {
			dst_entry->object.vm_object = NULL;
			dst_entry->offset = 0;
			if (src_entry->cred != NULL) {
				dst_entry->cred = curthread->td_ucred;
				crhold(dst_entry->cred);
				*fork_charge += size;
			}
		}

		pmap_copy(dst_map->pmap, src_map->pmap, dst_entry->start,
		    dst_entry->end - dst_entry->start, src_entry->start);
	} else {
		/*
		 * We don't want to make writeable wired pages copy-on-write.
		 * Immediately copy these pages into the new map by simulating
		 * page faults.  The new pages are pageable.
		 */
		vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry,
		    fork_charge);
	}
}
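
/*
 * Editor's illustrative sketch (kept under #if 0, not compiled): the
 * post-condition of the unwired copy-on-write path above.  Both entries end
 * up referencing the same backing object (or both NULL) and both carry
 * MAP_ENTRY_COW | MAP_ENTRY_NEEDS_COPY, so the first write fault in either
 * map copies the page.  The helper name is hypothetical.
 */
#if 0
static void
example_assert_cow_state(vm_map_entry_t src_entry, vm_map_entry_t dst_entry)
{

	KASSERT(src_entry->object.vm_object == dst_entry->object.vm_object,
	    ("copied entries should share the backing object"));
	KASSERT((src_entry->eflags & MAP_ENTRY_NEEDS_COPY) != 0 &&
	    (dst_entry->eflags & MAP_ENTRY_NEEDS_COPY) != 0,
	    ("both entries should be marked needs-copy"));
}
#endif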

/*
 * vmspace_map_entry_forked:
 *	Update the newly-forked vmspace each time a map entry is inherited
 *	or copied.  The values for vm_dsize and vm_tsize are approximate
 *	(and mostly-obsolete ideas in the face of mmap(2) et al.)
 */
static void
vmspace_map_entry_forked(const struct vmspace *vm1, struct vmspace *vm2,
    vm_map_entry_t entry)
{
	vm_size_t entrysize;
	vm_offset_t newend;

	entrysize = entry->end - entry->start;
	vm2->vm_map.size += entrysize;
	if (entry->eflags & (MAP_ENTRY_GROWS_DOWN | MAP_ENTRY_GROWS_UP)) {
		vm2->vm_ssize += btoc(entrysize);
	} else if (entry->start >= (vm_offset_t)vm1->vm_daddr &&
	    entry->start < (vm_offset_t)vm1->vm_daddr + ctob(vm1->vm_dsize)) {
		newend = MIN(entry->end,
		    (vm_offset_t)vm1->vm_daddr + ctob(vm1->vm_dsize));
		vm2->vm_dsize += btoc(newend - entry->start);
	} else if (entry->start >= (vm_offset_t)vm1->vm_taddr &&
	    entry->start < (vm_offset_t)vm1->vm_taddr + ctob(vm1->vm_tsize)) {
		newend = MIN(entry->end,
		    (vm_offset_t)vm1->vm_taddr + ctob(vm1->vm_tsize));
		vm2->vm_tsize += btoc(newend - entry->start);
	}
}
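
/*
 * Editor's worked example (kept under #if 0, not compiled): only the part of
 * an inherited entry that overlaps the parent's data segment is charged to
 * the child's vm_dsize, i.e. btoc(MIN(entry->end, daddr_end) - entry->start)
 * pages.  The addresses below are made up and assume a 4 KB page purely for
 * illustration.
 */
#if 0
static void
example_dsize_overlap(void)
{
	vm_offset_t entry_start, entry_end, daddr_end, newend;

	entry_start = 0x10000;	/* starts inside the data segment */
	entry_end = 0x14000;	/* four pages long */
	daddr_end = 0x12000;	/* data segment ends two pages in */
	newend = MIN(entry_end, daddr_end);
	/* btoc(0x2000) == 2 pages would be added to vm2->vm_dsize. */
	KASSERT(newend - entry_start == 0x2000, ("example arithmetic"));
}
#endif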
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vmspace_fork:
|
|
|
|
* Create a new process vmspace structure and vm_map
|
|
|
|
* based on those of an existing process. The new map
|
|
|
|
* is based on the old map, according to the inheritance
|
|
|
|
* values on the regions in that map.
|
|
|
|
*
|
2004-06-24 22:43:46 +00:00
|
|
|
* XXX It might be worth coalescing the entries added to the new vmspace.
|
|
|
|
*
|
1994-05-24 10:09:53 +00:00
|
|
|
* The source map must not be locked.
|
|
|
|
*/
|
|
|
|
struct vmspace *
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge appropriate uid when allocation if performed by
kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
vmspace_fork(struct vmspace *vm1, vm_ooffset_t *fork_charge)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
struct vmspace *vm2;
|
2012-02-25 17:49:59 +00:00
|
|
|
vm_map_t new_map, old_map;
|
|
|
|
vm_map_entry_t new_entry, old_entry;
|
1996-03-02 02:54:24 +00:00
|
|
|
vm_object_t object;
|
2009-02-08 19:55:03 +00:00
|
|
|
int locked;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2012-02-25 17:49:59 +00:00
|
|
|
old_map = &vm1->vm_map;
|
|
|
|
/* Copy immutable fields of vm1 to vm2. */
|
2013-09-20 17:06:49 +00:00
|
|
|
vm2 = vmspace_alloc(old_map->min_offset, old_map->max_offset, NULL);
|
2007-11-05 11:36:16 +00:00
|
|
|
if (vm2 == NULL)
|
2012-02-25 17:49:59 +00:00
|
|
|
return (NULL);
|
2004-06-24 22:43:46 +00:00
|
|
|
vm2->vm_taddr = vm1->vm_taddr;
|
|
|
|
vm2->vm_daddr = vm1->vm_daddr;
|
|
|
|
vm2->vm_maxsaddr = vm1->vm_maxsaddr;
|
2012-02-25 17:49:59 +00:00
|
|
|
vm_map_lock(old_map);
|
|
|
|
if (old_map->busy)
|
|
|
|
vm_map_wait_busy(old_map);
|
|
|
|
new_map = &vm2->vm_map;
|
2009-02-08 19:55:03 +00:00
|
|
|
locked = vm_map_trylock(new_map); /* trylock to silence WITNESS */
|
|
|
|
KASSERT(locked, ("vmspace_fork: lock failed"));
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
old_entry = old_map->header.next;
|
|
|
|
|
|
|
|
while (old_entry != &old_map->header) {
|
1997-01-16 04:16:22 +00:00
|
|
|
if (old_entry->eflags & MAP_ENTRY_IS_SUB_MAP)
|
1994-05-24 10:09:53 +00:00
|
|
|
panic("vm_map_fork: encountered a submap");
|
|
|
|
|
|
|
|
switch (old_entry->inheritance) {
|
|
|
|
case VM_INHERIT_NONE:
|
|
|
|
break;
|
|
|
|
|
1997-07-27 04:44:12 +00:00
|
|
|
case VM_INHERIT_SHARE:
|
|
|
|
/*
|
|
|
|
			 * Clone the entry, creating the shared object if necessary.
			 */
			object = old_entry->object.vm_object;
			if (object == NULL) {
				object = vm_object_allocate(OBJT_DEFAULT,
				    atop(old_entry->end - old_entry->start));
				old_entry->object.vm_object = object;
				old_entry->offset = 0;
				if (old_entry->cred != NULL) {
					object->cred = old_entry->cred;
					object->charge = old_entry->end -
					    old_entry->start;
					old_entry->cred = NULL;
				}
			}

			/*
			 * Add the reference before calling vm_object_shadow
			 * to ensure that a shadow object is created.
			 */
			vm_object_reference(object);
			if (old_entry->eflags & MAP_ENTRY_NEEDS_COPY) {
				vm_object_shadow(&old_entry->object.vm_object,
				    &old_entry->offset,
				    old_entry->end - old_entry->start);
				old_entry->eflags &= ~MAP_ENTRY_NEEDS_COPY;
				/* Transfer the second reference too. */
				vm_object_reference(
				    old_entry->object.vm_object);

				/*
				 * As in vm_map_simplify_entry(), the
				 * vnode lock will not be acquired in
				 * this call to vm_object_deallocate().
				 */
				vm_object_deallocate(object);
				object = old_entry->object.vm_object;
			}
			VM_OBJECT_WLOCK(object);
			vm_object_clear_flag(object, OBJ_ONEMAPPING);
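			/*
			 * If the entry still carries a swap accounting
			 * charge, hand it to the now-shared object so the
			 * per-uid reservation follows the object from here
			 * on (see the RLIMIT_SWAP accounting scheme).
			 */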
			if (old_entry->cred != NULL) {
				KASSERT(object->cred == NULL, ("vmspace_fork both cred"));
				object->cred = old_entry->cred;
				object->charge = old_entry->end - old_entry->start;
				old_entry->cred = NULL;
			}

			/*
			 * Assert the correct state of the vnode
			 * v_writecount while the object is locked, to
			 * not relock it later for the assertion
			 * correctness.
			 */
			if (old_entry->eflags & MAP_ENTRY_VN_WRITECNT &&
			    object->type == OBJT_VNODE) {
				KASSERT(((struct vnode *)object->handle)->
				    v_writecount > 0,
				    ("vmspace_fork: v_writecount %p", object));
				KASSERT(object->un_pager.vnp.writemappings > 0,
				    ("vmspace_fork: vnp.writecount %p",
				    object));
			}
			VM_OBJECT_WUNLOCK(object);

			/*
			 * Clone the entry, referencing the shared object.
			 */
			new_entry = vm_map_entry_create(new_map);
			*new_entry = *old_entry;
			new_entry->eflags &= ~(MAP_ENTRY_USER_WIRED |
			    MAP_ENTRY_IN_TRANSITION);
			new_entry->wiring_thread = NULL;
			new_entry->wired_count = 0;
			if (new_entry->eflags & MAP_ENTRY_VN_WRITECNT) {
				vnode_pager_update_writecount(object,
				    new_entry->start, new_entry->end);
			}

			/*
			 * Insert the entry into the new map -- we know we're
			 * inserting at the end of the new map.
			 */
			vm_map_entry_link(new_map, new_map->header.prev,
			    new_entry);
			vmspace_map_entry_forked(vm1, vm2, new_entry);

			/*
			 * Update the physical map
			 */
			pmap_copy(new_map->pmap, old_map->pmap,
			    new_entry->start,
			    (old_entry->end - old_entry->start),
			    old_entry->start);
			break;

		case VM_INHERIT_COPY:
			/*
			 * Clone the entry and link into the map.
			 */
			new_entry = vm_map_entry_create(new_map);
			*new_entry = *old_entry;
			/*
			 * Copied entry is COW over the old object.
			 */
			new_entry->eflags &= ~(MAP_ENTRY_USER_WIRED |
			    MAP_ENTRY_IN_TRANSITION | MAP_ENTRY_VN_WRITECNT);
			new_entry->wiring_thread = NULL;
			new_entry->wired_count = 0;
			new_entry->object.vm_object = NULL;
			new_entry->cred = NULL;
			vm_map_entry_link(new_map, new_map->header.prev,
			    new_entry);
			vmspace_map_entry_forked(vm1, vm2, new_entry);
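			/*
			 * vm_map_copy_entry() performs the COW setup; any
			 * swap charge for the copied range stays with the
			 * map entries and is accounted in *fork_charge for
			 * the caller to reserve.
			 */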
			vm_map_copy_entry(old_map, new_map, old_entry,
			    new_entry, fork_charge);
			break;
		}
		old_entry = old_entry->next;
	}
	/*
	 * Use inlined vm_map_unlock() to postpone handling the deferred
	 * map entries, which cannot be done until both old_map and
	 * new_map locks are released.
	 */
	sx_xunlock(&old_map->lock);
	sx_xunlock(&new_map->lock);
	vm_map_process_deferred();

	return (vm2);
}

int
vm_map_stack(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize,
    vm_prot_t prot, vm_prot_t max, int cow)
{
	vm_size_t growsize, init_ssize;
	rlim_t lmemlim, vmemlim;
	int rv;

	growsize = sgrowsiz;
	init_ssize = (max_ssize < growsize) ? max_ssize : growsize;
	vm_map_lock(map);
	lmemlim = lim_cur(curthread, RLIMIT_MEMLOCK);
	vmemlim = lim_cur(curthread, RLIMIT_VMEM);
	if (!old_mlock && map->flags & MAP_WIREFUTURE) {
		if (ptoa(pmap_wired_count(map->pmap)) + init_ssize > lmemlim) {
			rv = KERN_NO_SPACE;
			goto out;
		}
	}
	/* If we would blow our VMEM resource limit, no go */
	if (map->size + init_ssize > vmemlim) {
		rv = KERN_NO_SPACE;
		goto out;
	}
	rv = vm_map_stack_locked(map, addrbos, max_ssize, growsize, prot,
	    max, cow);
out:
	vm_map_unlock(map);
	return (rv);
}

static int
vm_map_stack_locked(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize,
    vm_size_t growsize, vm_prot_t prot, vm_prot_t max, int cow)
{
	vm_map_entry_t new_entry, prev_entry;
	vm_offset_t bot, top;
	vm_size_t init_ssize;
	int orient, rv;

	/*
	 * The stack orientation is piggybacked with the cow argument.
	 * Extract it into orient and mask the cow argument so that we
	 * don't pass it around further.
	 * NOTE: We explicitly allow bi-directional stacks.
	 */
	orient = cow & (MAP_STACK_GROWS_DOWN|MAP_STACK_GROWS_UP);
	KASSERT(orient != 0, ("No stack grow direction"));

	if (addrbos < vm_map_min(map) ||
	    addrbos > vm_map_max(map) ||
	    addrbos + max_ssize < addrbos)
		return (KERN_NO_SPACE);

	init_ssize = (max_ssize < growsize) ? max_ssize : growsize;

	/* If addr is already mapped, no go */
	if (vm_map_lookup_entry(map, addrbos, &prev_entry))
		return (KERN_NO_SPACE);

	/*
	 * If we can't accommodate max_ssize in the current mapping, no go.
	 * However, we need to be aware that subsequent user mappings might
	 * map into the space we have reserved for the stack, and currently
	 * this space is not protected.
	 *
	 * Hopefully we will at least detect this condition when we try to
	 * grow the stack.
	 */
	if ((prev_entry->next != &map->header) &&
	    (prev_entry->next->start < addrbos + max_ssize))
		return (KERN_NO_SPACE);

	/*
	 * We initially map a stack of only init_ssize.  We will grow as
	 * needed later.  Depending on the orientation of the stack (i.e.
	 * the grow direction) we either map at the top of the range, the
	 * bottom of the range or in the middle.
	 *
	 * Note: we would normally expect prot and max to be VM_PROT_ALL,
	 * and cow to be 0.  Possibly we should eliminate these as input
	 * parameters, and just pass these values here in the insert call.
	 */
	if (orient == MAP_STACK_GROWS_DOWN)
		bot = addrbos + max_ssize - init_ssize;
	else if (orient == MAP_STACK_GROWS_UP)
		bot = addrbos;
	else
		bot = round_page(addrbos + max_ssize/2 - init_ssize/2);
	top = bot + init_ssize;
	rv = vm_map_insert(map, NULL, 0, bot, top, prot, max, cow);

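	/*
	 * Illustrative example: for a downward-growing stack with
	 * max_ssize of 8MB and growsize (sgrowsiz) of 128KB, only the top
	 * 128KB, [addrbos + 8MB - 128KB, addrbos + 8MB), is inserted
	 * above; the rest is recorded in avail_ssize below and is
	 * materialized on demand by vm_map_growstack().
	 */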
	/* Now set the avail_ssize amount. */
	if (rv == KERN_SUCCESS) {
		new_entry = prev_entry->next;
		if (new_entry->end != top || new_entry->start != bot)
			panic("Bad entry start/end for new stack entry");

		new_entry->avail_ssize = max_ssize - init_ssize;
		KASSERT((orient & MAP_STACK_GROWS_DOWN) == 0 ||
		    (new_entry->eflags & MAP_ENTRY_GROWS_DOWN) != 0,
		    ("new entry lacks MAP_ENTRY_GROWS_DOWN"));
		KASSERT((orient & MAP_STACK_GROWS_UP) == 0 ||
		    (new_entry->eflags & MAP_ENTRY_GROWS_UP) != 0,
		    ("new entry lacks MAP_ENTRY_GROWS_UP"));
	}

	return (rv);
}

static int stack_guard_page = 0;
SYSCTL_INT(_security_bsd, OID_AUTO, stack_guard_page, CTLFLAG_RWTUN,
    &stack_guard_page, 0,
    "Insert stack guard page ahead of the growable segments.");

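/*
 * Illustrative usage: enabling the tunable, e.g. with
 * "sysctl security.bsd.stack_guard_page=1", makes vm_map_growstack()
 * below keep one unmapped page between a growing stack entry and its
 * neighbour instead of letting the stack grow right up against it.
 */
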
/*
 * Attempts to grow a vm stack entry.  Returns KERN_SUCCESS if the
 * desired address is already mapped, or if we successfully grow
 * the stack.  Also returns KERN_SUCCESS if addr is outside the
 * stack range (this is strange, but preserves compatibility with
 * the grow function in vm_machdep.c).
 */
int
vm_map_growstack(struct proc *p, vm_offset_t addr)
{
	vm_map_entry_t next_entry, prev_entry;
	vm_map_entry_t new_entry, stack_entry;
	struct vmspace *vm = p->p_vmspace;
	vm_map_t map = &vm->vm_map;
	vm_offset_t end;
	vm_size_t growsize;
	size_t grow_amount, max_grow;
	rlim_t lmemlim, stacklim, vmemlim;
	int is_procstack, rv;
	struct ucred *cred;
#ifdef notyet
	uint64_t limit;
#endif
#ifdef RACCT
	int error;
#endif

	lmemlim = lim_cur(curthread, RLIMIT_MEMLOCK);
	stacklim = lim_cur(curthread, RLIMIT_STACK);
	vmemlim = lim_cur(curthread, RLIMIT_VMEM);
Retry:

	vm_map_lock_read(map);

	/* If addr is already in the entry range, no need to grow. */
	if (vm_map_lookup_entry(map, addr, &prev_entry)) {
		vm_map_unlock_read(map);
		return (KERN_SUCCESS);
	}

	next_entry = prev_entry->next;
	if (!(prev_entry->eflags & MAP_ENTRY_GROWS_UP)) {
		/*
		 * This entry does not grow upwards.  Since the address lies
		 * beyond this entry, the next entry (if one exists) has to
		 * be a downward growable entry.  The entry list header is
		 * never a growable entry, so it suffices to check the flags.
		 */
		if (!(next_entry->eflags & MAP_ENTRY_GROWS_DOWN)) {
			vm_map_unlock_read(map);
			return (KERN_SUCCESS);
		}
		stack_entry = next_entry;
	} else {
		/*
		 * This entry grows upward.  If the next entry does not at
		 * least grow downwards, this is the entry we need to grow.
		 * Otherwise we have two possible choices and we have to
		 * select one.
		 */
		if (next_entry->eflags & MAP_ENTRY_GROWS_DOWN) {
			/*
			 * We have two choices; grow the entry closest to
			 * the address to minimize the amount of growth.
			 */
			if (addr - prev_entry->end <= next_entry->start - addr)
				stack_entry = prev_entry;
			else
				stack_entry = next_entry;
		} else
			stack_entry = prev_entry;
	}

	if (stack_entry == next_entry) {
		KASSERT(stack_entry->eflags & MAP_ENTRY_GROWS_DOWN, ("foo"));
		KASSERT(addr < stack_entry->start, ("foo"));
		end = (prev_entry != &map->header) ? prev_entry->end :
		    stack_entry->start - stack_entry->avail_ssize;
		grow_amount = roundup(stack_entry->start - addr, PAGE_SIZE);
		max_grow = stack_entry->start - end;
	} else {
		KASSERT(stack_entry->eflags & MAP_ENTRY_GROWS_UP, ("foo"));
		KASSERT(addr >= stack_entry->end, ("foo"));
		end = (next_entry != &map->header) ? next_entry->start :
		    stack_entry->end + stack_entry->avail_ssize;
		grow_amount = roundup(addr + 1 - stack_entry->end, PAGE_SIZE);
		max_grow = end - stack_entry->end;
	}

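	/*
	 * At this point grow_amount is the page-rounded distance from the
	 * faulting address to the edge of the chosen stack entry, and
	 * max_grow is the unused gap between that entry and its neighbour.
	 */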
	if (grow_amount > stack_entry->avail_ssize) {
		vm_map_unlock_read(map);
		return (KERN_NO_SPACE);
	}

	/*
	 * If there is no longer enough space between the entries, no go;
	 * adjust the available space instead.  Note: this should only
	 * happen if the user has mapped into the stack area after the
	 * stack was created, and is probably an error.
	 *
	 * This also effectively destroys any guard page the user might
	 * have intended by limiting the stack size.
	 */
	if (grow_amount + (stack_guard_page ? PAGE_SIZE : 0) > max_grow) {
		if (vm_map_lock_upgrade(map))
			goto Retry;

		stack_entry->avail_ssize = max_grow;

		vm_map_unlock(map);
		return (KERN_NO_SPACE);
	}

	is_procstack = (addr >= (vm_offset_t)vm->vm_maxsaddr &&
	    addr < (vm_offset_t)p->p_sysent->sv_usrstack) ? 1 : 0;

	/*
	 * If this is the main process stack, see if we're over the stack
	 * limit.
	 */
	if (is_procstack && (ctob(vm->vm_ssize) + grow_amount > stacklim)) {
		vm_map_unlock_read(map);
		return (KERN_NO_SPACE);
	}
#ifdef RACCT
	if (racct_enable) {
		PROC_LOCK(p);
		if (is_procstack && racct_set(p, RACCT_STACK,
		    ctob(vm->vm_ssize) + grow_amount)) {
			PROC_UNLOCK(p);
			vm_map_unlock_read(map);
			return (KERN_NO_SPACE);
		}
		PROC_UNLOCK(p);
	}
#endif

	/* Round up the grow amount modulo sgrowsiz */
	growsize = sgrowsiz;
	grow_amount = roundup(grow_amount, growsize);
	if (grow_amount > stack_entry->avail_ssize)
		grow_amount = stack_entry->avail_ssize;
	if (is_procstack && (ctob(vm->vm_ssize) + grow_amount > stacklim)) {
		grow_amount = trunc_page((vm_size_t)stacklim) -
		    ctob(vm->vm_ssize);
	}
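	/*
	 * Example with illustrative numbers: a fault one page beyond the
	 * mapped stack yields grow_amount = PAGE_SIZE, which the roundup
	 * above turns into a full sgrowsiz step (e.g. 128KB), clamped to
	 * avail_ssize and, for the main process stack, to RLIMIT_STACK.
	 */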
#ifdef notyet
	PROC_LOCK(p);
	limit = racct_get_available(p, RACCT_STACK);
	PROC_UNLOCK(p);
	if (is_procstack && (ctob(vm->vm_ssize) + grow_amount > limit))
		grow_amount = limit - ctob(vm->vm_ssize);
#endif
	if (!old_mlock && map->flags & MAP_WIREFUTURE) {
		if (ptoa(pmap_wired_count(map->pmap)) + grow_amount > lmemlim) {
			vm_map_unlock_read(map);
			rv = KERN_NO_SPACE;
			goto out;
		}
#ifdef RACCT
		if (racct_enable) {
			PROC_LOCK(p);
			if (racct_set(p, RACCT_MEMLOCK,
			    ptoa(pmap_wired_count(map->pmap)) + grow_amount)) {
				PROC_UNLOCK(p);
				vm_map_unlock_read(map);
				rv = KERN_NO_SPACE;
				goto out;
			}
			PROC_UNLOCK(p);
		}
#endif
	}
	/* If we would blow our VMEM resource limit, no go */
	if (map->size + grow_amount > vmemlim) {
		vm_map_unlock_read(map);
		rv = KERN_NO_SPACE;
		goto out;
	}
#ifdef RACCT
	if (racct_enable) {
		PROC_LOCK(p);
		if (racct_set(p, RACCT_VMEM, map->size + grow_amount)) {
			PROC_UNLOCK(p);
			vm_map_unlock_read(map);
			rv = KERN_NO_SPACE;
			goto out;
		}
		PROC_UNLOCK(p);
	}
#endif

	if (vm_map_lock_upgrade(map))
		goto Retry;

	if (stack_entry == next_entry) {
		/*
		 * Growing downward.
		 */
		/* Get the preliminary new entry start value */
		addr = stack_entry->start - grow_amount;

		/*
		 * If this puts us into the previous entry, cut back our
		 * growth to the available space.  Also, see the note above.
		 */
		if (addr < end) {
			stack_entry->avail_ssize = max_grow;
			addr = end;
			if (stack_guard_page)
				addr += PAGE_SIZE;
		}

		rv = vm_map_insert(map, NULL, 0, addr, stack_entry->start,
		    next_entry->protection, next_entry->max_protection,
		    MAP_STACK_GROWS_DOWN);

		/* Adjust the available stack space by the amount we grew. */
		if (rv == KERN_SUCCESS) {
			new_entry = prev_entry->next;
			KASSERT(new_entry == stack_entry->prev, ("foo"));
			KASSERT(new_entry->end == stack_entry->start, ("foo"));
			KASSERT(new_entry->start == addr, ("foo"));
			KASSERT((new_entry->eflags & MAP_ENTRY_GROWS_DOWN) !=
			    0, ("new entry lacks MAP_ENTRY_GROWS_DOWN"));
			grow_amount = new_entry->end - new_entry->start;
			new_entry->avail_ssize = stack_entry->avail_ssize -
			    grow_amount;
			stack_entry->eflags &= ~MAP_ENTRY_GROWS_DOWN;
		}
	} else {
		/*
		 * Growing upward.
		 */
		addr = stack_entry->end + grow_amount;

		/*
		 * If this puts us into the next entry, cut back our growth
		 * to the available space.  Also, see the note above.
		 */
		if (addr > end) {
			stack_entry->avail_ssize = end - stack_entry->end;
			addr = end;
			if (stack_guard_page)
				addr -= PAGE_SIZE;
		}

		grow_amount = addr - stack_entry->end;
		cred = stack_entry->cred;
		if (cred == NULL && stack_entry->object.vm_object != NULL)
			cred = stack_entry->object.vm_object->cred;
		if (cred != NULL && !swap_reserve_by_cred(grow_amount, cred))
			rv = KERN_NO_SPACE;
		/* Grow the underlying object if applicable. */
		else if (stack_entry->object.vm_object == NULL ||
		    vm_object_coalesce(stack_entry->object.vm_object,
		    stack_entry->offset,
		    (vm_size_t)(stack_entry->end - stack_entry->start),
		    (vm_size_t)grow_amount, cred != NULL)) {
			map->size += (addr - stack_entry->end);
			/* Update the current entry. */
			stack_entry->end = addr;
			stack_entry->avail_ssize -= grow_amount;
			vm_map_entry_resize_free(map, stack_entry);
			rv = KERN_SUCCESS;
		} else
			rv = KERN_FAILURE;
	}

	if (rv == KERN_SUCCESS && is_procstack)
		vm->vm_ssize += btoc(grow_amount);

	vm_map_unlock(map);

	/*
	 * Heed the MAP_WIREFUTURE flag if it was set for this process.
	 */
	if (rv == KERN_SUCCESS && (map->flags & MAP_WIREFUTURE)) {
		vm_map_wire(map,
		    (stack_entry == next_entry) ? addr : addr - grow_amount,
		    (stack_entry == next_entry) ? stack_entry->start : addr,
		    (p->p_flag & P_SYSTEM)
		    ? VM_MAP_WIRE_SYSTEM|VM_MAP_WIRE_NOHOLES
		    : VM_MAP_WIRE_USER|VM_MAP_WIRE_NOHOLES);
	}

out:
#ifdef RACCT
	if (racct_enable && rv != KERN_SUCCESS) {
		PROC_LOCK(p);
		error = racct_set(p, RACCT_VMEM, map->size);
		KASSERT(error == 0, ("decreasing RACCT_VMEM failed"));
		if (!old_mlock) {
			error = racct_set(p, RACCT_MEMLOCK,
			    ptoa(pmap_wired_count(map->pmap)));
			KASSERT(error == 0, ("decreasing RACCT_MEMLOCK failed"));
		}
		error = racct_set(p, RACCT_STACK, ctob(vm->vm_ssize));
		KASSERT(error == 0, ("decreasing RACCT_STACK failed"));
		PROC_UNLOCK(p);
	}
#endif

	return (rv);
}

/*
 * Unshare the specified VM space for exec.  If other processes are
 * mapped to it, then create a new one.  The new vmspace is empty.
 */
int
vmspace_exec(struct proc *p, vm_offset_t minuser, vm_offset_t maxuser)
{
	struct vmspace *oldvmspace = p->p_vmspace;
	struct vmspace *newvmspace;

	KASSERT((curthread->td_pflags & TDP_EXECVMSPC) == 0,
	    ("vmspace_exec recursed"));
	newvmspace = vmspace_alloc(minuser, maxuser, NULL);
	if (newvmspace == NULL)
		return (ENOMEM);
	newvmspace->vm_swrss = oldvmspace->vm_swrss;
	/*
	 * This code is written like this for prototype purposes.  The
	 * goal is to avoid running down the vmspace here, but to let the
	 * other processes that are still using the vmspace finally run
	 * it down.  Even though there is little or no chance of blocking
	 * here, it is a good idea to keep this form for future mods.
	 */
	PROC_VMSPACE_LOCK(p);
	p->p_vmspace = newvmspace;
	PROC_VMSPACE_UNLOCK(p);
	if (p == curthread->td_proc)
		pmap_activate(curthread);
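	/*
	 * The old vmspace is intentionally not freed here.  TDP_EXECVMSPC
	 * tells the exec path to call vmspace_free() on it later, once any
	 * suspended threads have been resumed, because dropping the last
	 * reference now would release the pmap they are still running on.
	 */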
	curthread->td_pflags |= TDP_EXECVMSPC;
	return (0);
}

/*
 * Unshare the specified VM space for forcing COW.  This
 * is called by rfork, for the (RFMEM|RFPROC) == 0 case.
 */
int
vmspace_unshare(struct proc *p)
{
	struct vmspace *oldvmspace = p->p_vmspace;
	struct vmspace *newvmspace;
	vm_ooffset_t fork_charge;

	if (oldvmspace->vm_refcnt == 1)
		return (0);
	fork_charge = 0;
	newvmspace = vmspace_fork(oldvmspace, &fork_charge);
	if (newvmspace == NULL)
		return (ENOMEM);
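	/*
	 * vmspace_fork() reports, in fork_charge, the amount of anonymous
	 * memory that still needs a swap reservation for the new map;
	 * reserve it against this process's credential or back out.
	 */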
if (!swap_reserve_by_cred(fork_charge, p->p_ucred)) {
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge appropriate uid when allocation if performed by
kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
vmspace_free(newvmspace);
|
|
|
|
return (ENOMEM);
|
|
|
|
}
|
2006-05-29 21:28:56 +00:00
|
|
|
PROC_VMSPACE_LOCK(p);
|
1997-04-13 01:48:35 +00:00
|
|
|
p->p_vmspace = newvmspace;
|
2006-05-29 21:28:56 +00:00
|
|
|
PROC_VMSPACE_UNLOCK(p);
|
2008-03-12 10:12:01 +00:00
|
|
|
if (p == curthread->td_proc)
|
2001-09-12 08:38:13 +00:00
|
|
|
pmap_activate(curthread);
|
2004-02-02 23:23:48 +00:00
|
|
|
vmspace_free(oldvmspace);
|
2007-11-05 11:36:16 +00:00
|
|
|
return (0);
|
1997-04-13 01:48:35 +00:00
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_map_lookup:
|
|
|
|
*
|
|
|
|
* Finds the VM object, offset, and
|
|
|
|
* protection for a given virtual address in the
|
|
|
|
* specified map, assuming a page fault of the
|
|
|
|
* type specified.
|
|
|
|
*
|
|
|
|
* Leaves the map in question locked for read; return
|
|
|
|
* values are guaranteed until a vm_map_lookup_done
|
|
|
|
* call is performed. Note that the map argument
|
|
|
|
* is in/out; the returned map must be used in
|
|
|
|
* the call to vm_map_lookup_done.
|
|
|
|
*
|
|
|
|
* A handle (out_entry) is returned for use in
|
|
|
|
* vm_map_lookup_done, to make that fast.
|
|
|
|
*
|
|
|
|
* If a lookup is requested with "write protection"
|
|
|
|
* specified, the map may be changed to perform virtual
|
|
|
|
* copying operations, although the data referenced will
|
|
|
|
* remain the same.
|
|
|
|
*/
int
vm_map_lookup(vm_map_t *var_map,                /* IN/OUT */
              vm_offset_t vaddr,
              vm_prot_t fault_typea,
              vm_map_entry_t *out_entry,        /* OUT */
              vm_object_t *object,              /* OUT */
              vm_pindex_t *pindex,              /* OUT */
              vm_prot_t *out_prot,              /* OUT */
              boolean_t *wired)                 /* OUT */
{
        vm_map_entry_t entry;
        vm_map_t map = *var_map;
        vm_prot_t prot;
        vm_prot_t fault_type = fault_typea;
        vm_object_t eobject;
        vm_size_t size;
        struct ucred *cred;

RetryLookup:;

        vm_map_lock_read(map);

        /*
         * Lookup the faulting address.
         */
        if (!vm_map_lookup_entry(map, vaddr, out_entry)) {
                vm_map_unlock_read(map);
                return (KERN_INVALID_ADDRESS);
        }

        entry = *out_entry;

        /*
         * Handle submaps.
         */
        if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) {
                vm_map_t old_map = map;

                *var_map = map = entry->object.sub_map;
                vm_map_unlock_read(old_map);
                goto RetryLookup;
        }

        /*
         * Check whether this task is allowed to have this page.
         */
        prot = entry->protection;
        fault_type &= (VM_PROT_READ|VM_PROT_WRITE|VM_PROT_EXECUTE);
        if ((fault_type & prot) != fault_type || prot == VM_PROT_NONE) {
                vm_map_unlock_read(map);
                return (KERN_PROTECTION_FAILURE);
        }
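        /*
         * A write fault through a user-wired, copy-on-write entry is
         * refused outright: satisfying it would require a copy-on-write
         * copy of pages that the user has wired in place.
         */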
        if ((entry->eflags & MAP_ENTRY_USER_WIRED) &&
            (entry->eflags & MAP_ENTRY_COW) &&
            (fault_type & VM_PROT_WRITE)) {
                vm_map_unlock_read(map);
                return (KERN_PROTECTION_FAILURE);
        }
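        /*
         * A forced copy (VM_PROT_COPY) only makes sense if the entry
         * could ever be written: refuse it when the maximum protection
         * excludes write and the entry is not already copy-on-write.
         */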
        if ((fault_typea & VM_PROT_COPY) != 0 &&
            (entry->max_protection & VM_PROT_WRITE) == 0 &&
            (entry->eflags & MAP_ENTRY_COW) == 0) {
                vm_map_unlock_read(map);
                return (KERN_PROTECTION_FAILURE);
        }

        /*
         * If this page is not pageable, we have to get it for all possible
         * accesses.
         */
        *wired = (entry->wired_count != 0);
        if (*wired)
                fault_type = entry->protection;
        size = entry->end - entry->start;
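        /*
         * The size of the whole entry is used below: the swap
         * reservation and any shadow object cover the entire map
         * entry, not just the faulting page.
         */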
        /*
         * If the entry was copy-on-write, we either create a shadow
         * object now (for a write or forced-copy fault) or simply
         * demote the permissions allowed (for a read), as handled
         * below.
         */
        if (entry->eflags & MAP_ENTRY_NEEDS_COPY) {
                /*
                 * If we want to write the page, we may as well handle that
                 * now since we've got the map locked.
                 *
                 * If we don't need to write the page, we just demote the
                 * permissions allowed.
                 */
                if ((fault_type & VM_PROT_WRITE) != 0 ||
                    (fault_typea & VM_PROT_COPY) != 0) {
                        /*
                         * Make a new object, and place it in the object
                         * chain.  Note that no new references have appeared
                         * -- one just moved from the map to the new
                         * object.
                         */
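                        /*
                         * vm_map_lock_upgrade() drops the lock when the
                         * upgrade to an exclusive lock fails, so the
                         * lookup must then be restarted from the top.
                         */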
                        if (vm_map_lock_upgrade(map))
                                goto RetryLookup;

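                        /*
                         * Anonymous memory is charged to a ucred.  If the
                         * entry carries no charge yet, reserve swap for the
                         * whole entry against the current thread's
                         * credential before shadowing; the charge is later
                         * transferred to the shadow object.
                         */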
                        if (entry->cred == NULL) {
                                /*
                                 * The debugger owner is charged for
                                 * the memory.
                                 */
                                cred = curthread->td_ucred;
                                crhold(cred);
                                if (!swap_reserve_by_cred(size, cred)) {
                                        crfree(cred);
                                        vm_map_unlock(map);
                                        return (KERN_RESOURCE_SHORTAGE);
                                }
                                entry->cred = cred;
                        }
                        vm_object_shadow(&entry->object.vm_object,
                            &entry->offset, size);
                        entry->eflags &= ~MAP_ENTRY_NEEDS_COPY;
                        eobject = entry->object.vm_object;
                        if (eobject->cred != NULL) {
                                /*
                                 * The object was not shadowed.
                                 */
                                swap_release_by_cred(size, entry->cred);
                                crfree(entry->cred);
                                entry->cred = NULL;
                        } else if (entry->cred != NULL) {
                                VM_OBJECT_WLOCK(eobject);
                                eobject->cred = entry->cred;
                                eobject->charge = size;
                                VM_OBJECT_WUNLOCK(eobject);
                                entry->cred = NULL;
                        }

                        vm_map_lock_downgrade(map);
                } else {
                        /*
                         * We're attempting to read a copy-on-write page --
                         * don't allow writes.
                         */
                        prot &= ~VM_PROT_WRITE;
                }
        }
|
VM level code cleanups.
1) Start using TSM.
Struct procs continue to point to upages structure, after being freed.
Struct vmspace continues to point to pte object and kva space for kstack.
u_map is now superfluous.
2) vm_map's don't need to be reference counted. They always exist either
in the kernel or in a vmspace. The vmspaces are managed by reference
counts.
3) Remove the "wired" vm_map nonsense.
4) No need to keep a cache of kernel stack kva's.
5) Get rid of strange looking ++var, and change to var++.
6) Change more data structures to use our "zone" allocator. Added
struct proc, struct vmspace and struct vnode. This saves a significant
amount of kva space and physical memory. Additionally, this enables
TSM for the zone managed memory.
7) Keep ioopt disabled for now.
8) Remove the now bogus "single use" map concept.
9) Use generation counts or id's for data structures residing in TSM, where
it allows us to avoid unneeded restart overhead during traversals, where
blocking might occur.
10) Account better for memory deficits, so the pageout daemon will be able
to make enough memory available (experimental.)
11) Fix some vnode locking problems. (From Tor, I think.)
12) Add a check in ufs_lookup, to avoid lots of unneeded calls to bcmp.
(experimental.)
13) Significantly shrink, cleanup, and make slightly faster the vm_fault.c
code. Use generation counts, get rid of unneded collpase operations,
and clean up the cluster code.
14) Make vm_zone more suitable for TSM.
This commit is partially as a result of discussions and contributions from
other people, including DG, Tor Egge, PHK, and probably others that I
have forgotten to attribute (so let me know, if I forgot.)
This is not the infamous, final cleanup of the vnode stuff, but a necessary
step. Vnode mgmt should be correct, but things might still change, and
there is still some missing stuff (like ioopt, and physical backing of
non-merged cache files, debugging of layering concepts.)
1998-01-22 17:30:44 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.
The majority of the merged VM/cache work is by John Dyson.
The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.
vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme. The scheme is almost fully compatible with the old filesystem
interface. Significant improvement in the number of opportunities for write
clustering.
vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache. Fixup of vfs_cluster to eliminate the bogus pagemove stuff.
vm_object.c:
Yet more improvements in the collapse code. Elimination of some windows that
can cause list corruption.
vm_pageout.c:
Fixed it, it really works better now. Somehow in 2.0, some "enhancements"
broke the code. This code has been reworked from the ground-up.
vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.
pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.
vm_glue.c
Much simpler and more effective swapping code. No more gratuitous swapping.
proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.
swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency. Now the
code doesn't need it anymore.
machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.
machdep.c, kern_exec.c, vm_glue.c
Implemented a seperate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.
ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache. Add "bypass" support for sneaking in on
busy buffers.
Submitted by: John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
|
|
|
* Create an object if necessary.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2001-02-04 06:19:28 +00:00
|
|
|
if (entry->object.vm_object == NULL &&
|
|
|
|
!map->system_map) {
|
2003-11-03 16:14:45 +00:00
|
|
|
if (vm_map_lock_upgrade(map))
|
1994-05-24 10:09:53 +00:00
|
|
|
goto RetryLookup;
|
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!
Much needed overhaul of the VM system. Included in this first round of
changes:
1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
haspage, and sync operations are supported. The haspage interface now
provides information about clusterability. All pager routines now take
struct vm_object's instead of "pagers".
2) Improved data structures. In the previous paradigm, there is constant
confusion caused by pagers being both a data structure ("allocate a
pager") and a collection of routines. The idea of a pager structure has
escentially been eliminated. Objects now have types, and this type is
used to index the appropriate pager. In most cases, items in the pager
structure were duplicated in the object data structure and thus were
unnecessary. In the few cases that remained, a un_pager structure union
was created in the object to contain these items.
3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
be removed. For instance, vm_object_enter(), vm_object_lookup(),
vm_object_remove(), and the associated object hash list were some of the
things that were removed.
4) simple_lock's removed. Discussion with several people reveals that the
SMP locking primitives used in the VM system aren't likely the mechanism
that we'll be adopting. Even if it were, the locking that was in the code
was very inadequate and would have to be mostly re-done anyway. The
locking in a uni-processor kernel was a no-op but went a long way toward
making the code difficult to read and debug.
5) Places that attempted to kludge-up the fact that we don't have kernel
thread support have been fixed to reflect the reality that we are really
dealing with processes, not threads. The VM system didn't have complete
thread support, so the comments and mis-named routines were just wrong.
We now use tsleep and wakeup directly in the lock routines, for instance.
6) Where appropriate, the pagers have been improved, especially in the
pager_alloc routines. Most of the pager_allocs have been rewritten and
are now faster and easier to maintain.
7) The pagedaemon pageout clustering algorithm has been rewritten and
now tries harder to output an even number of pages before and after
the requested page. This is sort of the reverse of the ideal pagein
algorithm and should provide better overall performance.
8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
have been removed. Some other unnecessary casts have also been removed.
9) Some almost useless debugging code removed.
10) Terminology of shadow objects vs. backing objects straightened out.
The fact that the vm_object data structure escentially had this
backwards really confused things. The use of "shadow" and "backing
object" throughout the code is now internally consistent and correct
in the Mach terminology.
11) Several minor bug fixes, including one in the vm daemon that caused
0 RSS objects to not get purged as intended.
12) A "default pager" has now been created which cleans up the transition
of objects to the "swap" type. The previous checks throughout the code
for swp->pg_data != NULL were really ugly. This change also provides
the rudiments for future backing of "anonymous" memory by something
other than the swap pager (via the vnode pager, for example), and it
allows the decision about which of these pagers to use to be made
dynamically (although will need some additional decision code to do
this, of course).
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
object" code has been removed. MAP_COPY was undocumented and non-
standard. It was furthermore broken in several ways which caused its
behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
continue to work correctly, but via the slightly different semantics
of MAP_PRIVATE.
14) (dyson) Sharing maps have been removed. It's marginal usefulness in a
threads design can be worked around in other ways. Both #12 and #13
were done to simplify the code and improve readability and maintain-
ability. (As were most all of these changes)
TODO:
1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
this will reduce the vnode pager to a mere fraction of its current size.
2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
information provided by the new haspage pager interface. This will
substantially reduce the overhead by eliminating a large number of
VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
improved to provide both a "behind" and "ahead" indication of
contiguousness.
3) Implement the extended features of pager_haspage in swap_pager_haspage().
It currently just says 0 pages ahead/behind.
4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
via a much more general mechanism that could also be used for disk
striping of regular filesystems.
5) Do something to improve the architecture of vm_object_collapse(). The
fact that it makes calls into the swap pager and knows too much about
how the swap pager operates really bothers me. It also doesn't allow
for collapsing of non-swap pager objects ("unnamed" objects backed by
other pagers).
1995-07-13 08:48:48 +00:00
|
|
|
entry->object.vm_object = vm_object_allocate(OBJT_DEFAULT,
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge appropriate uid when allocation if performed by
kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
atop(size));
|
1994-05-24 10:09:53 +00:00
|
|
|
entry->offset = 0;
|
2010-12-02 17:37:16 +00:00
|
|
|
if (entry->cred != NULL) {
|
2013-02-20 12:03:20 +00:00
|
|
|
VM_OBJECT_WLOCK(entry->object.vm_object);
|
2010-12-02 17:37:16 +00:00
|
|
|
entry->object.vm_object->cred = entry->cred;
|
Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.
The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.
The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.
The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge appropriate uid when allocation if performed by
kernel, e.g. md(4).
Several syscalls, among them is fork(2), may now return ENOMEM when
global or per-uid limits are enforced.
In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
2009-06-23 20:45:22 +00:00
|
|
|
entry->object.vm_object->charge = size;
|
2013-02-20 12:03:20 +00:00
|
|
|
VM_OBJECT_WUNLOCK(entry->object.vm_object);
|
2010-12-02 17:37:16 +00:00
|
|
|
entry->cred = NULL;
|
2009-06-23 20:45:22 +00:00
|
|
|
}
|
1999-02-19 03:11:37 +00:00
|
|
|
vm_map_lock_downgrade(map);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1996-06-16 20:37:31 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Return the object/offset from this entry. If the entry was
|
|
|
|
* copy-on-write or empty, it has been fixed up.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1999-02-19 03:11:37 +00:00
|
|
|
*pindex = OFF_TO_IDX((vaddr - entry->start) + entry->offset);
|
1994-05-24 10:09:53 +00:00
|
|
|
*object = entry->object.vm_object;
|
|
|
|
|
|
|
|
*out_prot = prot;
|
1995-01-09 16:06:02 +00:00
|
|
|
return (KERN_SUCCESS);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
2004-08-12 20:14:49 +00:00
|
|
|
/*
|
|
|
|
* vm_map_lookup_locked:
|
|
|
|
*
|
|
|
|
* Lookup the faulting address. A version of vm_map_lookup that returns
|
|
|
|
* KERN_FAILURE instead of blocking on map lock or memory allocation.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
vm_map_lookup_locked(vm_map_t *var_map, /* IN/OUT */
|
|
|
|
vm_offset_t vaddr,
|
|
|
|
vm_prot_t fault_typea,
|
|
|
|
vm_map_entry_t *out_entry, /* OUT */
|
|
|
|
vm_object_t *object, /* OUT */
|
|
|
|
vm_pindex_t *pindex, /* OUT */
|
|
|
|
vm_prot_t *out_prot, /* OUT */
|
|
|
|
boolean_t *wired) /* OUT */
|
|
|
|
{
|
|
|
|
vm_map_entry_t entry;
|
|
|
|
vm_map_t map = *var_map;
|
|
|
|
vm_prot_t prot;
|
|
|
|
vm_prot_t fault_type = fault_typea;
|
|
|
|
|
|
|
|
/*
|
2008-12-30 19:48:03 +00:00
|
|
|
* Lookup the faulting address.
|
2004-08-12 20:14:49 +00:00
|
|
|
*/
|
2008-12-30 19:48:03 +00:00
|
|
|
if (!vm_map_lookup_entry(map, vaddr, out_entry))
|
|
|
|
return (KERN_INVALID_ADDRESS);
|
2004-08-12 20:14:49 +00:00
|
|
|
|
2008-12-30 19:48:03 +00:00
|
|
|
entry = *out_entry;
|
2004-08-12 20:14:49 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Fail if the entry refers to a submap.
|
|
|
|
*/
|
|
|
|
if (entry->eflags & MAP_ENTRY_IS_SUB_MAP)
|
|
|
|
return (KERN_FAILURE);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check whether this task is allowed to have this page.
|
|
|
|
*/
|
2009-11-26 05:16:07 +00:00
|
|
|
prot = entry->protection;
|
2004-08-12 20:14:49 +00:00
|
|
|
fault_type &= VM_PROT_READ | VM_PROT_WRITE | VM_PROT_EXECUTE;
|
|
|
|
if ((fault_type & prot) != fault_type)
|
|
|
|
return (KERN_PROTECTION_FAILURE);
|
|
|
|
if ((entry->eflags & MAP_ENTRY_USER_WIRED) &&
|
|
|
|
(entry->eflags & MAP_ENTRY_COW) &&
|
2009-11-26 05:16:07 +00:00
|
|
|
(fault_type & VM_PROT_WRITE))
|
2004-08-12 20:14:49 +00:00
|
|
|
return (KERN_PROTECTION_FAILURE);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If this page is not pageable, we have to get it for all possible
|
|
|
|
* accesses.
|
|
|
|
*/
|
|
|
|
*wired = (entry->wired_count != 0);
|
|
|
|
if (*wired)
|
2009-11-26 05:16:07 +00:00
|
|
|
fault_type = entry->protection;
|
2004-08-12 20:14:49 +00:00
|
|
|
|
|
|
|
if (entry->eflags & MAP_ENTRY_NEEDS_COPY) {
|
|
|
|
/*
|
|
|
|
* Fail if the entry was copy-on-write for a write fault.
|
|
|
|
*/
|
|
|
|
if (fault_type & VM_PROT_WRITE)
|
|
|
|
return (KERN_FAILURE);
|
|
|
|
/*
|
|
|
|
* We're attempting to read a copy-on-write page --
|
|
|
|
* don't allow writes.
|
|
|
|
*/
|
|
|
|
prot &= ~VM_PROT_WRITE;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Fail if an object should be created.
|
|
|
|
*/
|
|
|
|
if (entry->object.vm_object == NULL && !map->system_map)
|
|
|
|
return (KERN_FAILURE);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return the object/offset from this entry. If the entry was
|
|
|
|
* copy-on-write or empty, it has been fixed up.
|
|
|
|
*/
|
|
|
|
*pindex = OFF_TO_IDX((vaddr - entry->start) + entry->offset);
|
|
|
|
*object = entry->object.vm_object;
|
|
|
|
|
|
|
|
*out_prot = prot;
|
|
|
|
return (KERN_SUCCESS);
|
|
|
|
}
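For illustration only, the hypothetical helper below (its name and include list are invented for this sketch) shows the calling convention of vm_map_lookup_locked() from a context that already holds the map lock: on failure the caller is expected to fall back to the blocking vm_map_lookup() rather than sleep here.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

#include <vm/vm.h>
#include <vm/vm_map.h>

/* Illustrative sketch; the caller must already hold the lock on "map". */
static int
example_resolve_locked(vm_map_t map, vm_offset_t va, vm_object_t *objp,
    vm_pindex_t *pidxp)
{
	vm_map_entry_t entry;
	vm_prot_t prot;
	boolean_t wired;
	int rv;

	rv = vm_map_lookup_locked(&map, va, VM_PROT_READ, &entry, objp,
	    pidxp, &prot, &wired);
	if (rv != KERN_SUCCESS)
		return (rv);	/* do not sleep; let the caller retry */

	/* "entry", "prot" and "wired" remain available for further checks. */
	return (KERN_SUCCESS);
}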
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* vm_map_lookup_done:
|
|
|
|
*
|
|
|
|
* Releases locks acquired by a vm_map_lookup
|
|
|
|
* (according to the handle returned by that lookup).
|
|
|
|
*/
|
1995-05-30 08:16:23 +00:00
|
|
|
void
|
2001-07-04 20:15:18 +00:00
|
|
|
vm_map_lookup_done(vm_map_t map, vm_map_entry_t entry)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
/*
|
1995-01-09 16:06:02 +00:00
|
|
|
* Unlock the main-level map
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
vm_map_unlock_read(map);
|
|
|
|
}
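The lookup/done pairing can be sketched as follows; this hypothetical caller assumes vm_map_lookup() takes the same arguments as vm_map_lookup_locked() above (the source describes the latter as a non-blocking version of the former) and that it returns with the map read-locked.

/* Illustrative sketch of the vm_map_lookup()/vm_map_lookup_done() pairing. */
static int
example_lookup_and_release(vm_map_t map, vm_offset_t va)
{
	vm_map_entry_t entry;
	vm_object_t object;
	vm_pindex_t pindex;
	vm_prot_t prot;
	boolean_t wired;
	int rv;

	rv = vm_map_lookup(&map, va, VM_PROT_READ, &entry, &object,
	    &pindex, &prot, &wired);
	if (rv != KERN_SUCCESS)
		return (rv);

	/* ... consult "object" and "pindex" while the map stays locked ... */

	vm_map_lookup_done(map, entry);	/* releases the lock taken by the lookup */
	return (KERN_SUCCESS);
}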
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
#include "opt_ddb.h"
|
1995-04-16 12:56:22 +00:00
|
|
|
#ifdef DDB
|
1996-09-14 11:54:59 +00:00
|
|
|
#include <sys/kernel.h>
|
|
|
|
|
|
|
|
#include <ddb/ddb.h>
|
|
|
|
|
2012-11-12 00:30:40 +00:00
|
|
|
static void
|
|
|
|
vm_map_print(vm_map_t map)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1998-04-29 04:28:22 +00:00
|
|
|
vm_map_entry_t entry;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1999-03-02 05:43:18 +00:00
|
|
|
db_iprintf("Task map %p: pmap=%p, nentries=%d, version=%u\n",
|
|
|
|
(void *)map,
|
1998-07-14 12:14:58 +00:00
|
|
|
(void *)map->pmap, map->nentries, map->timestamp);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
for (entry = map->header.next; entry != &map->header;
|
1995-01-09 16:06:02 +00:00
|
|
|
entry = entry->next) {
|
1998-07-11 11:30:46 +00:00
|
|
|
db_iprintf("map entry %p: start=%p, end=%p\n",
|
|
|
|
(void *)entry, (void *)entry->start, (void *)entry->end);
|
1999-03-02 05:43:18 +00:00
|
|
|
{
|
1995-01-09 16:06:02 +00:00
|
|
|
static char *inheritance_name[4] =
|
|
|
|
{"share", "copy", "none", "donate_copy"};
|
|
|
|
|
Make our v_usecount vnode reference count work identically to the
original BSD code. The association between the vnode and the vm_object
no longer includes reference counts. The major difference is that
vm_objects are no longer freed gratuitously from the vnode, so once an
object is created for the vnode, it lasts as long as the vnode does.
When a vnode object reference count is incremented, the underlying
vnode reference count is incremented as well. The two "objects" are now
more intimately related, and so the interactions are much less complex.
Vnodes are now normally placed onto the free queue with an object still
attached. The rundown of the object happens at vnode rundown time, and
happens with exactly the same filesystem semantics as the original VFS
code. There is absolutely no need for vnode_pager_uncache and other
travesties like that anymore.
A side effect of these changes is that SMP locking should be much simpler,
the I/O copyin/copyout optimizations work, NFS should be more ponderable,
and further work on layered filesystems should be less frustrating, because
of the totally coherent management of the vnode objects and vnodes.
Please be careful with your system while running this code, but I would
greatly appreciate feedback as soon as reasonably possible.
1998-01-06 05:26:17 +00:00
|
|
|
db_iprintf(" prot=%x/%x/%s",
|
1995-01-09 16:06:02 +00:00
|
|
|
entry->protection,
|
|
|
|
entry->max_protection,
|
1999-01-28 00:57:57 +00:00
|
|
|
inheritance_name[(int)(unsigned char)entry->inheritance]);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (entry->wired_count != 0)
|
1998-01-06 05:26:17 +00:00
|
|
|
db_printf(", wired");
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1999-02-07 21:48:23 +00:00
|
|
|
if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) {
|
2002-11-07 22:49:07 +00:00
|
|
|
db_printf(", share=%p, offset=0x%jx\n",
|
1999-02-07 21:48:23 +00:00
|
|
|
(void *)entry->object.sub_map,
|
2002-11-07 22:49:07 +00:00
|
|
|
(uintmax_t)entry->offset);
|
1994-05-24 10:09:53 +00:00
|
|
|
if ((entry->prev == &map->header) ||
|
1999-02-07 21:48:23 +00:00
|
|
|
(entry->prev->object.sub_map !=
|
|
|
|
entry->object.sub_map)) {
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
2012-11-12 00:30:40 +00:00
|
|
|
vm_map_print((vm_map_t)entry->object.sub_map);
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent -= 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1995-01-09 16:06:02 +00:00
|
|
|
} else {
|
2010-12-02 17:37:16 +00:00
|
|
|
if (entry->cred != NULL)
|
|
|
|
db_printf(", ruid %d", entry->cred->cr_ruid);
|
2002-11-07 22:49:07 +00:00
|
|
|
db_printf(", object=%p, offset=0x%jx",
|
1998-07-14 12:14:58 +00:00
|
|
|
(void *)entry->object.vm_object,
|
2002-11-07 22:49:07 +00:00
|
|
|
(uintmax_t)entry->offset);
|
2010-12-02 17:37:16 +00:00
|
|
|
if (entry->object.vm_object && entry->object.vm_object->cred)
|
|
|
|
db_printf(", obj ruid %d charge %jx",
|
|
|
|
entry->object.vm_object->cred->cr_ruid,
|
2009-06-23 20:45:22 +00:00
|
|
|
(uintmax_t)entry->object.vm_object->charge);
|
1997-01-16 04:16:22 +00:00
|
|
|
if (entry->eflags & MAP_ENTRY_COW)
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf(", copy (%s)",
|
1997-01-16 04:16:22 +00:00
|
|
|
(entry->eflags & MAP_ENTRY_NEEDS_COPY) ? "needed" : "done");
|
1996-09-14 11:54:59 +00:00
|
|
|
db_printf("\n");
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
if ((entry->prev == &map->header) ||
|
|
|
|
(entry->prev->object.vm_object !=
|
1995-01-09 16:06:02 +00:00
|
|
|
entry->object.vm_object)) {
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent += 2;
|
1998-07-14 12:14:58 +00:00
|
|
|
vm_object_print((db_expr_t)(intptr_t)
|
|
|
|
entry->object.vm_object,
|
2014-05-10 16:36:13 +00:00
|
|
|
0, 0, (char *)0);
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent -= 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
1996-09-14 11:54:59 +00:00
|
|
|
db_indent -= 2;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1998-01-06 05:26:17 +00:00
|
|
|
|
2012-11-12 00:30:40 +00:00
|
|
|
DB_SHOW_COMMAND(map, map)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (!have_addr) {
|
|
|
|
db_printf("usage: show map <addr>\n");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
vm_map_print((vm_map_t)addr);
|
|
|
|
}
|
1998-01-06 05:26:17 +00:00
|
|
|
|
|
|
|
DB_SHOW_COMMAND(procvm, procvm)
|
|
|
|
{
|
|
|
|
struct proc *p;
|
|
|
|
|
|
|
|
if (have_addr) {
|
|
|
|
p = (struct proc *) addr;
|
|
|
|
} else {
|
|
|
|
p = curproc;
|
|
|
|
}
|
|
|
|
|
1998-07-11 07:46:16 +00:00
|
|
|
db_printf("p = %p, vmspace = %p, map = %p, pmap = %p\n",
|
|
|
|
(void *)p, (void *)p->p_vmspace, (void *)&p->p_vmspace->vm_map,
|
1999-02-19 14:25:37 +00:00
|
|
|
(void *)vmspace_pmap(p->p_vmspace));
|
1998-01-06 05:26:17 +00:00
|
|
|
|
2012-11-12 00:30:40 +00:00
|
|
|
vm_map_print((vm_map_t)&p->p_vmspace->vm_map);
|
1998-01-06 05:26:17 +00:00
|
|
|
}
|
|
|
|
|
1996-09-14 11:54:59 +00:00
|
|
|
#endif /* DDB */
|