Commit Graph

734 Commits

Peter Wemm
db42d90829 unifdef -DVM_STACK - it's been on for a while for x86, and it was checked
and appeared to be working on the Alpha some time ago.
1999-04-19 14:14:14 +00:00
Peter Wemm
b8df55a044 Move the declaration of faultin() from the vm headers to proc.h, since
it is now referenced from a macro there (PHOLD()).
1999-04-13 19:17:15 +00:00
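
A hedged sketch of the dependency this commit untangles: PHOLD() expands inline wherever proc.h is included and calls faultin() directly, so the declaration must be visible from proc.h rather than the vm headers. The macro body and flag value below are era-plausible approximations, not the verbatim source.

    struct proc;                        /* a forward declaration suffices */
    void    faultin(struct proc *p);    /* the declaration moved into proc.h */

    #define P_INMEM 0x00000004          /* stand-in flag value for the sketch */
    #define PHOLD(p) do {                                               \
            if ((p)->p_lock++ == 0 && ((p)->p_flag & P_INMEM) == 0)     \
                    faultin(p);         /* fault the swapped-out process in */ \
    } while (0)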
Eivind Eklund
0776e10c71 Staticize 1999-04-11 02:16:27 +00:00
Dmitrij Tejblum
897a45eff9 Convert usage of vm_page_bits() to the new convention ("Inputs are required
to range within a page").
1999-04-10 20:52:11 +00:00
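
A small user-space illustration of the new convention (stand-in constants; the real function lives in the vm code): because callers now guarantee that [base, base+size) lies within one page, the function can simply assert the invariant and return one bit per DEV_BSIZE chunk touched.

    #include <assert.h>

    #define PAGE_SIZE 4096              /* stand-in values for the sketch */
    #define DEV_BSIZE  512

    static int
    vm_page_bits(int base, int size)
    {
            /* the new convention: inputs must range within one page */
            assert(base + size <= PAGE_SIZE);
            if (size == 0)
                    return (0);
            int first = base / DEV_BSIZE;                           /* first chunk */
            int last = (base + size + DEV_BSIZE - 1) / DEV_BSIZE;   /* one past last */
            return (((1 << last) - 1) & ~((1 << first) - 1));
    }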
Eivind Eklund
c523e8b21d Lock vnode correctly for VOP_OPEN.
Discussed with:	alc, dillon
1999-04-10 17:54:43 +00:00
Peter Wemm
c8da68e917 Don't forcibly kill processes that are locked in-core via PHOLD - it was
just checking P_NOSWAP before.
1999-04-06 03:14:56 +00:00
Peter Wemm
637cae1dd4 Only use p->p_lock (managed by PHOLD()/PRELE()) - P_NOSWAP/P_PHYSIO is no
longer set.
1999-04-06 03:11:34 +00:00
Julian Elischer
8d17e69460 Catch a case spotted by Tor where mmapped files could leave garbage in the
unallocated parts of the last page when the file ended on a frag
but not a page boundary.
Delimited by tags PRE_MATT_MMAP_EOF and POST_MATT_MMAP_EOF,
in files alpha/alpha/pmap.c i386/i386/pmap.c nfs/nfs_bio.c vm/pmap.h
    vm/vm_page.c vm/vm_page.h vm/vnode_pager.c miscfs/specfs/spec_vnops.c
    ufs/ufs/ufs_readwrite.c kern/vfs_bio.c

Submitted by: Matt Dillon <dillon@freebsd.org>
Reviewed by: Alan Cox <alc@freebsd.org>
1999-04-05 19:38:30 +00:00
Alan Cox
876318eca0 Two changes to vm_map_delete:
1. Don't bother checking object->ref_count == 1 in order to set
OBJ_ONEMAPPING.  It's a waste of time.  If object->ref_count == 1,
vm_map_entry_delete will "run-down" the object and its pages.

2. If object->ref_count == 1, ignore OBJ_ONEMAPPING.  Wait for
vm_map_entry_delete to "run-down" the object and its pages.
Otherwise, we're calling two different procedures to delete
the object's pages.

Note: "vmstat -s" will once again show a non-zero value
for "pages freed by exiting processes".
1999-04-04 07:11:02 +00:00
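
A hedged fragment of the resulting control flow in vm_map_delete (the condition and the index names are assumptions reconstructed from the two points above): page removal is only attempted when the object is referenced by other maps yet this was its sole mapping; with ref_count == 1 the run-down in vm_map_entry_delete frees the pages instead.

    if (object != NULL && object->ref_count != 1 &&
        (object->flags & OBJ_ONEMAPPING)) {
            /* shared object, single mapping: remove the pages here */
            vm_object_page_remove(object, offidxstart, offidxend, FALSE);
    }
    /* ref_count == 1: do nothing; vm_map_entry_delete() will
     * "run down" the object and free its pages for us */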
Alan Cox
ad5fca3b4a Mainly, eliminate the comments about share maps. (We don't have share maps
any more.)  Also, eliminate an incorrect comment that says that we don't
coalesce vm_map_entry's.  (We do.)
1999-03-27 23:46:04 +00:00
Eivind Eklund
4491ea9111 Correct a comment. 1999-03-27 02:39:01 +00:00
Alan Cox
99c81ca94d Two changes:
Remove more (redundant) map timestamp increments from properly
synchronized routines.  (Changed: vm_map_entry_link, vm_map_entry_unlink,
and vm_map_pageable.)

Micro-optimize vm_map_entry_link and vm_map_entry_unlink, eliminating
unnecessary dereferences.  At the same time, converted them from macros
to inline functions.
1999-03-21 23:37:00 +00:00
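
A self-contained sketch (stand-in types) of what the macro-to-inline conversion plausibly looks like: the entries live on a circular doubly-linked list headed by the map, and the timestamp increment is gone because the properly synchronized callers account for it themselves.

    struct vm_map_entry {
            struct vm_map_entry *prev, *next;
    };
    struct vm_map {
            struct vm_map_entry header;     /* list sentinel */
            int nentries;
    };

    static inline void
    vm_map_entry_link(struct vm_map *map, struct vm_map_entry *after,
        struct vm_map_entry *entry)
    {
            map->nentries++;
            entry->prev = after;            /* splice in after "after" */
            entry->next = after->next;
            entry->next->prev = entry;
            after->next = entry;
    }

    static inline void
    vm_map_entry_unlink(struct vm_map *map, struct vm_map_entry *entry)
    {
            map->nentries--;
            entry->next->prev = entry->prev;        /* splice out */
            entry->prev->next = entry->next;
    }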
Alan Cox
61fc5ee627 Construct the free queue(s) in descending order (by physical
address) so that the first 16MB of physical memory is allocated
last rather than first.  On large-memory machines, this avoids
the exhaustion of low physical memory before isa_dmainit has run.
1999-03-19 05:21:03 +00:00
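
A two-line hedged fragment of the trick (helper and array names are hypothetical): hand pages to the free queues from the top of physical memory downward, so the low 16MB that isa_dmainit will later need is what the allocator gives out last.

    unsigned i;
    for (i = page_count; i-- > 0; )             /* high addresses first */
            enqueue_free_page(&page_array[i]);  /* hypothetical helper */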
Alan Cox
c7003c6991 Correct a problem in kmem_malloc: A kmem_malloc allowing "wait" may
block (VM_WAIT) holding the map lock.  This is bad.  For example, a subsequent
kmem_malloc by an interrupt handler on the same map may find the lock held
and panic in the lockmgr.
1999-03-16 07:39:07 +00:00
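
A compilable toy sketch of the rule this commit enforces (all primitives are no-op stand-ins): never sleep for free pages while holding the map lock; drop it, wait, and retry, so an interrupt-time allocation on the same map never finds the lock held.

    #include <stdbool.h>

    static void vm_map_lock(void)   {}          /* no-op stand-ins */
    static void vm_map_unlock(void) {}
    static void vm_wait(void)       {}
    static bool try_alloc(void)     { return true; }

    static void
    kmem_malloc_pattern(void)
    {
            for (;;) {
                    vm_map_lock();
                    if (try_alloc())
                            break;          /* success, lock still held */
                    vm_map_unlock();        /* never block holding the lock */
                    vm_wait();              /* sleep until pages are freed */
            }
            /* ... enter the allocation into the map ... */
            vm_map_unlock();
    }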
Alan Cox
44428f621d Two changes:
In general, vm_map_simplify_entry should be performed INSIDE
the loop that traverses the map, not outside.  (Changed:
vm_map_inherit, vm_map_pageable.)

vm_fault_unwire doesn't acquire the map lock (or block holding
it).  Thus, vm_map_set/clear_recursive shouldn't be called.
(Changed: vm_map_user_pageable, vm_map_pageable.)
1999-03-15 06:24:52 +00:00
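
A hedged fragment showing the first point (field names assumed from the era's vm_map): simplification has to happen per-entry as the traversal advances, since clipping and merging change the very list being walked.

    for (entry = first_entry;
        entry != &map->header && entry->start < end;
        entry = entry->next) {
            vm_map_simplify_entry(map, entry);  /* merge as we go */
            /* ... then adjust this entry (inheritance, wiring, ...) ... */
    }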
Julian Elischer
811c2e1a76 Fix breakage in last commit
Submitted by: Brian Feldman <green@unixhelp.org>
1999-03-15 05:09:48 +00:00
Julian Elischer
0237469f43 A bit of a hack, but allows the vn device to be a module again.
Submitted by: Matt Dillon <dillon@freebsd.org>
1999-03-14 20:40:15 +00:00
Julian Elischer
a5296b05b4 Submitted by: Matt Dillon <dillon@freebsd.org>
The old VN device broke in -4.x when the definition of B_PAGING
changed. This patch fixes this plus implements additional capabilities.
The new VN device can be backed by a file ( as per normal ), or it can
be directly backed by swap.

Due to dependencies in VM include files (on opt_xxx options) the new
vn device cannot be a module yet. This will be fixed in a later commit.
This commit is delimited by tags {PRE,POST}_MATT_VNDEV
1999-03-14 09:20:01 +00:00
Alan Cox
a1a54e9fc1 Correct two optimization errors in vm_object_page_remove:
1. The size of vm_object::memq is vm_object::resident_page_count,
not vm_object::size.

2. The "size > 4" test sometimes results in the traversal of a ~1000 page
memq in order to locate ~10 pages.
1999-03-14 06:36:00 +00:00
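
A hedged sketch of the corrected heuristic (stand-in type; the real routine carries more state): pick the traversal by comparing the number of resident pages, not the object's nominal size, against the width of the range being removed.

    struct vm_object { unsigned resident_page_count; };

    static void
    page_remove_sketch(struct vm_object *object, unsigned start, unsigned end)
    {
            if (object->resident_page_count < end - start) {
                    /* sparse: one pass over memq, testing each
                     * page's pindex against [start, end) */
            } else {
                    /* dense range: look each index in [start, end)
                     * up directly instead of walking the whole memq */
            }
    }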
Alan Cox
b73d0eb905 Remove vm_page_frees from kmem_malloc that are performed
by vm_map_delete/vm_object_page_remove anyway.
1999-03-12 08:05:49 +00:00
Julian Elischer
51df594922 Stop the mfs from trying to swap out crucial bits of itself,
as this can lead to deadlock.
Submitted by: Matt Dillon <dillon@freebsd.org>
1999-03-12 00:44:03 +00:00
Alan Cox
00d4f4a5f4 Remove (redundant) map timestamp increments from some properly
synchronized routines.
1999-03-09 08:00:17 +00:00
Alan Cox
da3a3026b9 Remove an unused variable from vmspace_fork. 1999-03-08 03:53:07 +00:00
Alan Cox
9de3dd734e Change vm_map_growstack to acquire and hold a read lock (instead of a write
lock) until it actually needs to modify the vm_map.

Note: it is legal to modify vm_map::hint without holding a write lock.

Submitted by:	"Richard Seaman, Jr." <dick@tar.com> with minor changes
		by myself.
1999-03-07 21:25:42 +00:00
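
A toy sketch of the locking pattern (stand-in primitives; in the real code the lock-upgrade primitive reports whether the lock was lost during the upgrade): read-lock for the common no-growth case, upgrade only when the map must change, and restart if the upgrade let the map slip.

    static void vm_map_lock_read(void)           {}  /* no-op stand-ins */
    static void vm_map_unlock_read(void)         {}
    static void vm_map_unlock(void)              {}
    static int  must_grow(void)                  { return 0; }
    static int  vm_map_lock_upgrade_failed(void) { return 0; }

    static void
    growstack_pattern(void)
    {
    retry:
            vm_map_lock_read();
            if (!must_grow()) {             /* common case: nothing to do */
                    vm_map_unlock_read();
                    return;
            }
            if (vm_map_lock_upgrade_failed())
                    goto retry;             /* lock lost; rescan the map */
            /* ... grow the stack entry under the exclusive lock ... */
            vm_map_unlock();
    }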
Alan Cox
f59e8eb9b1 Upgrading a map's lock to exclusive status should increment
the map's timestamp.  In general, whenever an exclusive lock is
acquired the timestamp should be incremented.
1999-03-06 07:11:33 +00:00
Alan Cox
dd2622a8cd To avoid a conflict for the vm_map's lock with vm_fault, release
the read lock around the subyte operations in mincore.  After the lock is
reacquired, use the map's timestamp to determine if we need to restart
the scan.
1999-03-02 22:55:02 +00:00
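
A hedged sketch of the restart protocol described above (stand-in type and primitives): remember the timestamp, drop the read lock across the user-space store, and rescan if the map moved underneath.

    struct vm_map { unsigned timestamp; };

    static void vm_map_lock_read(struct vm_map *map)   { (void)map; }
    static void vm_map_unlock_read(struct vm_map *map) { (void)map; }

    static void
    mincore_pattern(struct vm_map *map)
    {
            unsigned ts;
    restart:
            vm_map_lock_read(map);
            ts = map->timestamp;
            /* ... compute the status byte for the current page ... */
            vm_map_unlock_read(map);    /* subyte() may fault: hold no lock */
            /* subyte(vec, status) would run here */
            vm_map_lock_read(map);
            if (map->timestamp != ts) { /* map changed while unlocked */
                    vm_map_unlock_read(map);
                    goto restart;
            }
            vm_map_unlock_read(map);
    }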
Alan Cox
e5f251d2d3 Remove the last of the share map code: struct vm_map::is_main_map.
Reviewed by:	Matthew Dillon <dillon@apollo.backplane.com>
1999-03-02 05:43:18 +00:00
Alan Cox
eff50fcd4c mincore doesn't modify the vm_map. Therefore, it doesn't require
an exclusive lock.  A read lock will suffice.
1999-03-01 20:42:16 +00:00
Alan Cox
0e3cdf2cf8 Reviewed by: "John S. Dyson" <dyson@iquest.net>
Submitted by:	Matthew Dillon <dillon@apollo.backplane.com>
To prevent a deadlock, if we are extremely low on memory, force synchronous
operation by the VOP_PUTPAGES in vnode_pager_putpages.
1999-02-27 23:39:28 +00:00
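
A hedged guess at the shape of the check (vmmeter field names from the era; the exact threshold and flag are assumptions, not the verbatim change): when free plus cache pages fall under the pageout reserve, force the write synchronous so no further async I/O piles up while memory is critically short.

    if (cnt.v_free_count + cnt.v_cache_count < cnt.v_pageout_free_min)
            sync = TRUE;        /* force the VOP_PUTPAGES synchronous */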
Alan Cox
14286e5e8f Reviewed by: Matthew Dillon <dillon@apollo.backplane.com>
Corrected the computation of cnt.v_ozfod in vm_fault: vm_fault
was counting the number of unoptimized rather than optimized
zero-fill faults.
1999-02-25 06:00:52 +00:00
Matthew Dillon
82e5072fcd Comment swstrategy() routine. 1999-02-25 05:37:18 +00:00
Matthew Dillon
d1bf5d56b6 Remove unnecessary page protects on map_split and collapse operations.
Fix bug where an object's OBJ_WRITEABLE/OBJ_MIGHTBEDIRTY flags do
    not get set under certain circumstances ( page rename case ).

Reviewed by:	Alan Cox <alc@cs.rice.edu>, John Dyson
1999-02-24 21:26:26 +00:00
Matthew Dillon
c4812f564a Removed ENOMEM error on swap_pager_full condition which ignored the
availability of physical memory.  As per original bug report by
    Bruce.

Reviewed by:	Alan Cox <alc@cs.rice.edu>
1999-02-22 08:42:16 +00:00
Matthew Dillon
ad3cce2041 Remove conditional sysctls
Leave swap_async_max sysctl intact, remove swap_cluster_max sysctl.

Reviewed by:	     Alan Cox <alc@cs.rice.edu>
1999-02-21 08:34:15 +00:00
Matthew Dillon
20d3034f39 Reviewed by: Alan Cox <alc@cs.rice.edu>
Fix problem w/ low-swap/low-memory handling as reported by Bruce Evans.
1999-02-21 08:30:49 +00:00
Luoqi Chen
fe2144fd5a Eliminate a possible numerical overflow. 1999-02-19 19:14:48 +00:00
Luoqi Chen
b1028ad122 Hide access to vmspace:vm_pmap with inline function vmspace_pmap(). This
is the preparation step for moving pmap storage out of vmspace proper.

Reviewed by:	Alan Cox	<alc@cs.rice.edu>
		Matthew Dillon	<dillon@apollo.backplane.com>
1999-02-19 14:25:37 +00:00
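
The accessor itself is tiny; a self-contained sketch with stand-in struct contents. Once every caller goes through vmspace_pmap(), the pmap storage can later be moved out of struct vmspace by changing one function instead of every call site.

    struct pmap { int pm_dummy; };          /* stand-in contents */
    struct vmspace {
            struct pmap vm_pmap;            /* storage may move later */
    };

    static inline struct pmap *
    vmspace_pmap(struct vmspace *vm)
    {
            return (&vm->vm_pmap);
    }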
Matthew Dillon
9b09b6c73f Submitted by: Alan Cox <alc@cs.rice.edu>
Remove remaining share map garbage from vm_map_lookup() and clean out
    old #if 0 stuff.
1999-02-19 03:11:37 +00:00
Matthew Dillon
327f4e8394 Limit the number of simultaneous asynchronous swap pager I/Os that can
be in progress at any given moment.

    Add two swap tuneables to sysctl:

	vm.swap_async_max: 4
	vm.swap_cluster_max: 16

    Recommended values are a cluster size of 8 or 16 pages.  async_max is
    about right for 1-4 swap devices.  Reduce to 2 if swap is eating too much
    bandwidth, or even 1 if swap is both eating too much bandwidth and sitting
    on a slow network (10BaseT).

    The defaults work well across a broad range of configurations and should
    normally be left alone.
1999-02-18 19:57:33 +00:00
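
The two knobs are plausibly exported with SYSCTL_INT(9); a hedged sketch (defaults from the message above; the description strings are assumptions):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int swap_async_max = 4;          /* vm.swap_async_max */
    static int swap_cluster_max = 16;       /* vm.swap_cluster_max */

    SYSCTL_INT(_vm, OID_AUTO, swap_async_max, CTLFLAG_RW,
        &swap_async_max, 0, "Maximum running async swap ops");
    SYSCTL_INT(_vm, OID_AUTO, swap_cluster_max, CTLFLAG_RW,
        &swap_cluster_max, 0, "Maximum swap cluster size (pages)");

Tuning is then a one-liner at runtime, e.g. sysctl -w vm.swap_async_max=2 for a slow swap device.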
Matthew Dillon
b33fb764f1 Submitted by: Luoqi Chen <luoqi@watermarkgroup.com>
Unlock vnode before messing with map to avoid deadlock between map and
    vnode ( e.g. with exec_map and underlying program binary vnode ).  Solves
    a deadlock that most often occurs during a large -j# buildworld reported
    by three people.
1999-02-17 09:08:29 +00:00
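
A hedged fragment of the ordering rule (3.x-era VOP_UNLOCK/vn_lock signatures; the map operation is a placeholder): never touch the map while holding the vnode lock, since a fault in the map path may need that same vnode.

    VOP_UNLOCK(vp, 0, p);                    /* drop the vnode lock first */
    error = operate_on_map(map);             /* placeholder map operation */
    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p); /* then reacquire it */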
Matthew Dillon
efcae3d355 Minor reorganization of vm_page_alloc(). No functional changes have
been made but the code has been reorganized and documented to make
    it more readable, reduce the size of the code, and optimize the branch
    path caching capabilities that most modern processors have.
1999-02-15 06:52:14 +00:00
Matthew Dillon
1ce137be82 Fix a bug in the new madvise() code that would possibly (improperly)
free swap space out from under a busy page.  This is not legal because
    the swap may be reallocated and I/O issued while I/O is still in
    progress on the same swap page from the madvise()'d object.  This bug
    could only occur under extreme paging conditions but might not cause
    an error until much later.  As a side-benefit, madvise() is now even
    smaller.
1999-02-15 02:03:40 +00:00
Matthew Dillon
41c67e12bd Minor optimization to madvise() MADV_FREE to make the page as freeable as
possible without actually unmapping it from the process.

    As of now, I declare madvise() on OBJT_DEFAULT/OBJT_SWAP objects to be
    'working and complete'.
1999-02-12 20:42:19 +00:00
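
A hedged guess at what "as freeable as possible without unmapping" involves (helper names and signatures assumed; not the verbatim change): forget the page's dirty state and park it where the pageout daemon reclaims it cheaply.

    pmap_clear_modify(m);           /* drop the hardware modified bit */
    m->dirty = 0;                   /* contents are declared disposable */
    vm_page_deactivate(m);          /* inactive queue: cheap to reclaim */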
Matthew Dillon
2aaeadf8d9 Fix a non-fatal bug in vm_map_insert() which improperly cleared
OBJ_ONEMAPPING in the case where an object is extended and an
    additional vm_map_entry must be allocated.

    In vm_object_madvise(), remove the call to vm_page_cache() in the MADV_FREE
    case in order to avoid a page fault on page reuse.  However, we still
    mark the page as clean and destroy any swap backing store.

Submitted by:	Alan Cox <alc@cs.rice.edu>
1999-02-12 09:51:43 +00:00
Matthew Dillon
b4f8f16e56 Addendum to the vm_map coalesce optimization. Also, this was backed out
because there was a consensus on -current in regard to leaving bss r+w+x
    instead of r+w, in order to maintain reasonable compatibility
    with existing JIT compilers (e.g. kaffe) and possibly other programs.
1999-02-09 01:39:29 +00:00
Matthew Dillon
2ad1a3f729 Revamp vm_object_[q]collapse(). Despite the complexity of this patch,
no major operational changes were made.  The three core object->memq loops
    were moved into a single inline procedure and various operational
    characteristics of the collapse function were documented.
1999-02-08 19:00:15 +00:00
Matthew Dillon
d031cff181 General cleanup. Remove #if 0's and remove useless register qualifiers. 1999-02-08 05:15:54 +00:00
Matthew Dillon
faa273d5c2 Rip out PQ_ZERO queue. PQ_ZERO functionality is now combined in with
PQ_FREE.  There is little operational difference other than the kernel
    being a few kilobytes smaller and the code being more readable.

    * vm_page_select_free() has been *greatly* simplified.
    * The PQ_ZERO page queue and supporting structures have been removed
    * vm_page_zero_idle() revamped (see below)

    PG_ZERO setting and clearing has been migrated from vm_page_alloc()
    to vm_page_free[_zero]() and will eventually be guaranteed to remain
    tracked throughout a page's life ( if it isn't already ).

    When a page is freed, PG_ZERO pages are appended to the appropriate
    tailq in the PQ_FREE queue while non-PG_ZERO pages are prepended.
    When locating a new free page, PG_ZERO selection operates from within
    vm_page_list_find() ( get page from end of queue instead of beginning
    of queue ) and then only occurs in the nominal critical path case.  If
    the nominal case misses, both normal and zero-page allocation devolves
    into the same _vm_page_list_find() select code without any specific
    zero-page optimizations.

    Additionally, vm_page_zero_idle() has been revamped.  Hysteresis has been
    added and zero-page tracking adjusted to conform with the other changes.
    Currently hysteresis is set at 1/3 (lo) and 1/2 (hi) the number of free
    pages.  We may wish to increase both parameters as time permits.  The
    hysteresis is designed to avoid silly zeroing in borderline allocation/free
    situations.
1999-02-08 00:37:36 +00:00
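
A self-contained sketch of the queue discipline described above (flag value and types are stand-ins): freed PG_ZERO pages go to the tail and others to the head, so an allocator that wants a pre-zeroed page draws from the tail while one that doesn't draws from the head and avoids wasting zeroing work.

    #include <sys/queue.h>

    #define PG_ZERO 0x0008                  /* stand-in flag value */

    struct vm_page {
            int flags;
            TAILQ_ENTRY(vm_page) pageq;
    };
    TAILQ_HEAD(pglist, vm_page);

    static void
    free_page_sketch(struct pglist *freeq, struct vm_page *m)
    {
            if (m->flags & PG_ZERO)
                    TAILQ_INSERT_TAIL(freeq, m, pageq);  /* zeroed: append */
            else
                    TAILQ_INSERT_HEAD(freeq, m, pageq);  /* dirty: prepend */
    }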
Matthew Dillon
5313b05fe0 Backed out vm_map coalesce optimization - it resulted in 22% more page
faults for reasons unknown ( under investigation ).
    /usr/bin/time -l make in /usr/src/bin went from 67000 faults to 90000
    faults.
1999-02-08 00:27:56 +00:00
Matthew Dillon
9fdfe602fc Remove MAP_ENTRY_IS_A_MAP 'share' maps. These maps were once used to
attempt to optimize forks, but were essentially given up on due to
    problems and replaced with an explicit dup of the vm_map_entry structure.
    Prior to the removal, they were entirely unused.
1999-02-07 21:48:23 +00:00