Commit Graph

1906 Commits

Alan Cox
84d98bf699 - Correct a long-standing race condition in vm_page_try_to_cache() that
could result in a panic "vm_page_cache: caching a dirty page, ...":
   Access to the page must be restricted or removed before calling
   vm_page_cache().  This race condition is identical in nature to that
   which was addressed by vm_pageout.c's revision 1.251.
 - Simplify the code surrounding the fix to this same race condition
   in vm_pageout.c's revision 1.251.  There should be no behavioral
   change.

Reviewed by:	tegge
MFC after:	7 days
2004-02-14 08:54:37 +00:00
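A sketch of the ordering this fix enforces (the real change is in
vm_page.c; surrounding code elided):

    /*
     * Revoke all access to the page before re-testing for dirtiness, so
     * a concurrent write through an existing mapping cannot dirty the
     * page between the check and vm_page_cache().
     */
    pmap_remove_all(m);
    vm_page_test_dirty(m);
    if (m->dirty == 0)
        vm_page_cache(m);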
Poul-Henning Kamp
d2bae332d6 Remove the absolute count g_access_abs() function since experience has
shown that it is not useful.

Rename the relative count g_access_rel() function to g_access(); only
the name has changed.

Change all g_access_rel() calls in our CVS tree to call g_access() instead.

Add an #ifndef BURN_BRIDGES #define of g_access_rel() for source
code compatibility.
2004-02-12 22:42:11 +00:00
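The compatibility define plausibly looks like this (a sketch, not the
verbatim header change):

    #ifndef BURN_BRIDGES
    /* Keep old g_access_rel() callers compiling after the rename. */
    #define g_access_rel(pp, r, w, e)	g_access((pp), (r), (w), (e))
    #endif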
Alan Cox
40448065e8 Further reduce the use of Giant in vm_map_delete(): Perform pmap_remove()
on system maps, besides the kmem_map, without Giant.

In collaboration with:	tegge
2004-02-12 20:56:06 +00:00
Alan Cox
a3dfacb51c Correct a long-standing race condition in the inactive queue scan. (See
the added comment for low-level details.)  The effect of this race
condition is a panic "vm_page_cache: caching a dirty page, ..."

Reviewed by:	tegge
MFC after:	7 days
2004-02-10 18:34:27 +00:00
Alan Cox
c5aebf380c swp_pager_async_iodone() no longer requires Giant. Modify bufdone()
and swapgeom_done() to perform swp_pager_async_iodone() without Giant.

Reviewed by:	tegge
2004-02-07 08:54:50 +00:00
Alan Cox
bfee999d6a - Locking for the per-process resource limits structure has eliminated
the need for Giant in vm_map_growstack().
 - Use the proc * that is passed to vm_map_growstack() rather than
   curthread->td_proc.
2004-02-05 06:33:18 +00:00
John Baldwin
91d5354a2c Locking for the per-process resource limits structure.
- struct plimit includes a mutex to protect a reference count.  The plimit
  structure is treated similarly to struct ucred in that it is always copy
  on write, so having a reference to a structure is sufficient to read from
  it without needing a further lock.
- The proc lock protects the p_limit pointer and must be held while reading
  limits from a process to keep the limit structure from changing out from
  under you while reading from it.
- Various global limits that are ints are not protected by a lock since
  int writes are atomic on all the archs we support and thus a lock
  wouldn't buy us anything.
- All accesses to individual resource limits from a process are abstracted
  behind a simple lim_rlimit(), lim_max(), and lim_cur() API that return
  either an rlimit, or the current or max individual limit of the specified
  resource from a process.
- dosetrlimit() was renamed to kern_setrlimit() to match existing style of
  other similar syscall helper functions.
- The alpha OSF/1 compat layer no longer calls getrlimit() and setrlimit()
  (it didn't use the stackgap when it should have) but uses lim_rlimit()
  and kern_setrlimit() instead.
- The svr4 compat no longer uses the stackgap for resource limits calls,
  but uses lim_rlimit() and kern_setrlimit() instead.
- The ibcs2 compat no longer uses the stackgap for resource limits.  It
  also no longer uses the stackgap for accessing sysctl's for the
  ibcs2_sysconf() syscall but uses kernel_sysctl() instead.  As a result,
  ibcs2_sysconf() no longer needs Giant.
- The p_rlimit macro no longer exists.

Submitted by:	mtm (mostly, I only did a few cleanups and catchups)
Tested on:	i386
Compiled on:	alpha, amd64
2004-02-04 21:52:57 +00:00
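A minimal sketch of reading a limit under the new API (signatures assumed
from the description above; the proc lock keeps p_limit from changing
while it is read):

    rlim_t datalim;

    PROC_LOCK(p);
    datalim = lim_cur(p, RLIMIT_DATA);	/* replaces the old p_rlimit macro */
    PROC_UNLOCK(p);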
John Baldwin
b56ef1c10d Drop the reference count on the old vmspace after fully switching the
current thread to the new vmspace.

Suggested by:	dillon
2004-02-02 23:23:48 +00:00
Poul-Henning Kamp
3e5b686160 Check error return from g_clone_bio(). (netchild@)
Add XXX comment about why this is still not optimal. (phk@)

Submitted by:	netchild@
2004-02-02 13:08:03 +00:00
Jeff Roberson
7b09539ce2 - Use a separate startup function for the zeroidle kthread. Use this to
set P_NOLOAD prior to running the thread.
2004-02-02 07:51:03 +00:00
Jeff Roberson
aaa8bb1604 - Fix a problem where we did not drain the cache of buckets in the zone
when uma_reclaim() was called.  This was introduced when the zone
   working-set algorithm was removed in favor of using the per cpu caches
   as the working set.
2004-02-01 06:15:17 +00:00
Dag-Erling Smørgrav
e726bc0e6c Mechanical whitespace cleanup. 2004-01-30 16:26:29 +00:00
Bruce Evans
9a44a82b61 Fixed breakage of scheduling in rev.1.29 of subr_4bsd.c. The
"scheduler" here has very little to do with scheduling.  It is actually
the swapper, and it really must be the last SYSINIT'ed item like its
comment says, since proc0 metamorphoses into swapper by calling
scheduler() last in mi_startup(), and scheduler() never returns.  Rev.1.29
of subr_4bsd.c broke this by adding another SI_ORDER_FIRST item
(kproc_start() for schedcpu_thread()) onto the SI_SUB_RUN_SCHEDULER list.
The sorting of SYSINITs with identical orders (at all levels) is
apparently nondeterministic, so this resulted in scheduler() sometimes
being called second last and schedcpu_thread() not being called at all.

This quick fix just changes the code to almost match the comment
(SI_ORDER_FIRST -> SI_ORDER_ANY).  "LAST" is misspelled "ANY", and
there is no way to ensure that there is only 1 very last SYSINIT.
A more complete fix would remove the SYSINIT obfuscation.
2004-01-29 12:35:11 +00:00
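The quick fix plausibly reduces to one changed SYSINIT order (a sketch of
the declaration):

    /*
     * Run the swapper's SYSINIT last within SI_SUB_RUN_SCHEDULER so that
     * schedcpu_thread()'s kproc_start() cannot be sorted after it.
     */
    SYSINIT(scheduler, SI_SUB_RUN_SCHEDULER, SI_ORDER_ANY, scheduler, NULL)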
Jeff Roberson
29bcc4514f - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or
SW_INVOL.  Assert that one of these is set in mi_switch() and properly
   adjust the rusage statistics.  This is to simplify the large number of
   users of this interface which were previously all required to adjust the
   proper counter prior to calling mi_switch().  This also facilitates more
   switch and locking optimizations.
 - Change all callers of mi_switch() to pass the appropriate parameter and
   remove direct references to the process statistics.
2004-01-25 03:54:52 +00:00
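A sketch of the interface change (the surrounding statistics code is
hypothetical):

    /* Before: each caller adjusted the rusage counter itself. */
    p->p_stats->p_ru.ru_nvcsw++;
    mi_switch();

    /* After: the flag tells mi_switch() which counter to adjust. */
    mi_switch(SW_VOL);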
Alan Cox
7dea2c2e3b 1. Statically initialize swap_pager_full and swap_pager_almost_full to the
full state.  (When swap is added their state will change appropriately.)
2. Set swap_pager_full and swap_pager_almost_full to the full state when
   the last swap device is removed.
Combined, these changes eliminate nonsense messages from the kernel on swap-
less machines.

Item 2 submitted by:	Divacky Roman <xdivac02@stud.fit.vutbr.cz>
Prodding by:		phk
2004-01-24 21:31:06 +00:00
Alan Cox
c19aa3402b Increase UMA_BOOT_PAGES because of changes to pv entry initialization in
revision 1.457 of i386/i386/pmap.c.
2004-01-18 05:51:06 +00:00
Alan Cox
23b186d324 Don't acquire Giant in vm_object_deallocate() unless the object is vnode-
backed.
2004-01-18 03:44:14 +00:00
Alan Cox
f4c2663897 Remove vm_page_alloc_contig(). It's now unused. 2004-01-14 06:21:38 +00:00
Alan Cox
0e88a71798 Remove long dead code, specifically, code related to munmapfd().
(See also vm/vm_mmap.c revision 1.173.)
2004-01-11 06:59:21 +00:00
Alan Cox
baadec0711 - Unmanage pages allocated by contigmalloc1(). (There is no point in
having PV entries for these pages.)
 - Remove splvm() and splx() calls.
2004-01-10 21:17:53 +00:00
Alan Cox
37d44833d5 Unmanage pages allocated by kmem_alloc(). (There is no point in having PV
entries for these pages.)
2004-01-10 00:22:33 +00:00
Alan Cox
65bae14d77 - Enable recursive acquisition of the mutex synchronizing access to the
free pages queue.  This is presently needed by contigmalloc1().
 - Move a sanity check against attempted double allocation of two pages
   to the same vm object offset from vm_page_alloc() to vm_page_insert().
   This provides better protection because double allocation could occur
   through a direct call to vm_page_insert(), such as that by
   vm_page_rename().
 - Modify contigmalloc1() to hold the mutex synchronizing access to the
   free pages queue while it scans vm_page_array in search of free pages.
 - Correct a potential leak of pages by contigmalloc1() that I introduced
   in revision 1.20: We must convert all cache queue pages to free pages
   before we begin removing free pages from the free queue.  Otherwise,
   if we have to restart the scan because we are unable to acquire the
   vm object lock that is necessary to convert a cache queue page to a
   free page, we leak those free pages already removed from the free queue.
2004-01-08 20:48:26 +00:00
Alan Cox
c020e821c7 Don't bother clearing PG_ZERO in contigmalloc1(), kmem_alloc(), or
kmem_malloc().  It serves no purpose.
2004-01-06 20:52:55 +00:00
Alan Cox
2f7af3db57 Simplify the various pager allocation routines by computing the desired
object size once and assigning that value to a local variable.
2004-01-04 20:55:15 +00:00
Alan Cox
a67048571f Eliminate the acquisition and release of Giant from vnode_pager_alloc().
The vm object and vnode locking should suffice.

Discussed with:	jeff
2004-01-04 03:18:24 +00:00
Alan Cox
e793b7797d Reduce the scope of Giant in swap_pager_alloc(). 2004-01-03 20:02:17 +00:00
Alan Cox
d0058957b5 Revision 1.74 of vm_meter.c ("Avoid lock-order reversal") makes the release
and subsequent reacquisition of the same vm object lock in
vm_object_collapse() unnecessary.
2004-01-02 19:57:45 +00:00
Alan Cox
e0ba75dd78 Avoid lock-order reversal between the vm object list mutex and the vm
object mutex.
2004-01-02 19:38:25 +00:00
Alan Cox
ff5dcf2546 - Increase the scope of the kmem_object's lock in kmem_malloc(). Add a
comment explaining why a further increase is not possible.
2004-01-01 19:48:56 +00:00
Alan Cox
4804edb44f In vm_page_lookup() check the root of the vm object's splay tree for the
desired page before calling vm_page_splay().
2003-12-31 19:02:01 +00:00
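A sketch of the resulting fast path (field names assumed from the
splay-tree design):

    /* A repeated lookup often finds the page at the splay root;
     * check it before paying for a full vm_page_splay(). */
    if ((m = object->root) != NULL && m->pindex != pindex) {
        m = vm_page_splay(pindex, object->root);
        if ((object->root = m)->pindex != pindex)
            m = NULL;
    }
    return (m);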
Alan Cox
bcdaad7fe7 Simplify vm_page_grab(): Don't bother with the generation check. If the
vm object hasn't changed, the desired page will be at or near the root
of the vm object's splay tree, making vm_page_lookup() cheap.  (The only
lock required for vm_page_lookup() is already held.)  If, however, the
vm object has changed and retry was requested, eliminating the generation
check also eliminates a pointless acquisition and release of the page
queues lock.
2003-12-31 01:44:45 +00:00
Alan Cox
4da9f125cc - Modify vm_object_split() to expect a locked vm object on entry and
return on a locked vm object on exit.  Remove GIANT_REQUIRED.
 - Eliminate some unnecessary local variables from vm_object_split().
2003-12-30 22:28:36 +00:00
Alan Cox
bd228075c7 Remove swap_pager_un_object_list; it is unused. 2003-12-29 04:21:44 +00:00
Alan Cox
53d0a98878 Remove GIANT_REQUIRED from kmem_suballoc(). 2003-12-28 00:10:48 +00:00
Alan Cox
a976eb5e46 - Reduce Giant's scope in vm_fault().
- Use vm_object_reference_locked() instead of vm_object_reference()
   in vm_fault().
2003-12-26 23:33:37 +00:00
Alan Cox
75898105c0 Minor correction to revision 1.258: Use the proc pointer that is passed to
vm_map_growstack() in the RLIMIT_VMEM check rather than curthread.
2003-12-26 21:54:45 +00:00
Alan Cox
9582cd94cb - Create an unmapped guard page to trap access to vm_page_array[-1].
This guard page would have trapped the problems with the MFC of the PAE
   support to RELENG_4 at an earlier point in the sequence of events.

Submitted by:	tegge
2003-12-22 02:04:08 +00:00
Alan Cox
925692caa5 - Significantly reduce the number of preallocated pv entries in
pmap_init().  Such a large preallocation is unnecessary and wastes
   nearly eight megabytes of kernel virtual address space per gigabyte
   of managed physical memory.
 - Increase UMA_BOOT_PAGES by two.  This enables the removal of
   pmap_pv_allocf().  (Note: this function was only used during
   initialization, specifically, after pmap_init() but before
   pmap_init2().  During pmap_init2(), a new allocator is installed.)
2003-12-22 01:01:32 +00:00
Alan Cox
cafe836a56 - Correct an error in mincore(2) that has existed since its introduction:
mincore(2) should check that the page is valid, not just allocated.
   Otherwise, it can return a false positive for a page that is not yet
   resident because it is being read from disk.
2003-12-21 06:03:40 +00:00
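A sketch of the tightened residency test (the exact expression in
vm_mmap.c may differ):

    /* Report the page resident only if it is valid, not merely
     * allocated; an in-flight read leaves it allocated but invalid. */
    if (m != NULL && m->valid != 0)
        mincoreinfo = MINCORE_INCORE;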
Alexander Kabaev
5e6dbda017 Remove trailing whitespace. 2003-12-08 02:45:45 +00:00
Alan Cox
c8123cb800 Addendum to revision 1.174: In the case where vm_pager_allocate() is called
to create a vnode-backed object, the vnode lock must be held by the caller.

Reported by:	truckman
Discussed with:	kan
2003-12-08 00:47:33 +00:00
Alan Cox
20eec4bbdb Fix a deadlock between vm_fault() and vm_mmap(): The expected lock ordering
between vm_map and vnode locks is that vm_map locks are acquired first.  In
revision 1.150 mmap(2) was changed to pass a locked vnode into vm_mmap().
This creates a lock-order reversal when vm_mmap() calls one of the vm_map
routines that acquires a vm_map lock.  The solution implemented herein is
to release the vnode lock in mmap() before calling vm_mmap() and reacquire
this lock if necessary in vm_mmap().

Approved by:	re (scottl)
Reviewed by:	jeff, kan, rwatson
2003-12-06 05:45:32 +00:00
John Baldwin
b6c71225a9 Fix all users of mp_maxid to use the same semantics, namely:
1) mp_maxid is a valid FreeBSD CPU ID in the range 0 .. MAXCPU - 1.
2) For all active CPUs in the system, PCPU_GET(cpuid) <= mp_maxid.

Approved by:	re (scottl)
Tested on:	i386, amd64, alpha
2003-12-03 14:57:26 +00:00
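Under these semantics a per-CPU iteration looks roughly like this (a
sketch; CPU_ABSENT() is assumed available to skip inactive IDs):

    u_int i;

    /* mp_maxid is itself a valid CPU ID, so the bound is inclusive. */
    for (i = 0; i <= mp_maxid; i++) {
        if (CPU_ABSENT(i))
            continue;
        /* operate on CPU i's per-CPU data here */
    }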
Jeff Roberson
e30b97c5f9 - Unbreak UP. mp_maxid is not defined on uni-processor machines, although
I believe it and the other MP variables should be.  For now, just define it
   here and wait for jhb to clean it up later.

Approved by:	re (rwatson)
2003-11-30 22:18:14 +00:00
Jeff Roberson
504d5de3a8 - Replace the local maxcpu with mp_maxid. Previously, if mp_maxid
was equal to MAXCPU, we would overrun the pcpu_mtx array because maxcpu
   was calculated incorrectly.
 - Add some more debugging code so that memory leaks at the time of
   uma_zdestroy() are more easily diagnosed.

Approved by:	re (rwatson)
2003-11-30 08:04:01 +00:00
Alan Cox
1cd5fbd854 - Avoid a lock-order reversal between Giant and a system map mutex that
occurs when kmem_malloc() fails to allocate a sufficient number of vm
   pages.  Specifically, we avoid the lock-order reversal by not grabbing
   Giant around pmap_remove() if the map is the kmem_map.

Approved by:	re (jhb)
Reported by:	Eugene <eugene3@web.de>
2003-11-19 18:48:45 +00:00
Tim J. Robbins
167a9effa5 In vnode_pager_input_smlfs(), call VOP_STRATEGY instead of VOP_SPECSTRATEGY
on non-VCHR vnodes. This fixes a panic when reading data from files on a
filesystem with a small (less than a page) block size.

PR:		59271
Reviewed by:	alc
2003-11-15 09:54:11 +00:00
Alan Cox
d1f42ac2ee - Remove use of Giant from uma_zone_set_obj(). 2003-11-14 17:49:07 +00:00
Alan Cox
6f8b4fc03a - Remove long dead code. 2003-11-14 08:22:38 +00:00
Alan Cox
b7b7cd4421 Changes to msync(2)
- Return EBUSY if the region was wired by mlock(2) and MS_INVALIDATE
   is specified to msync(2).  This is required by the Open Group Base
   Specifications Issue 6.
 - vm_map_sync() doesn't return KERN_FAILURE.  Thus, msync(2) can't
   possibly return EIO.
 - The second major loop in vm_map_sync() handles sub maps.  Thus,
   failing on sub maps in the first major loop isn't necessary.
2003-11-14 06:55:11 +00:00
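From userland the new behavior is observable roughly as follows (a
sketch; needs <sys/mman.h>, <errno.h>, and <err.h>, and assumes addr/len
describe a mapped region):

    if (mlock(addr, len) == 0 &&
        msync(addr, len, MS_INVALIDATE) == -1 && errno == EBUSY)
            warnx("MS_INVALIDATE on a wired region now returns EBUSY");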
Alan Cox
d88346020b - The Open Group Base Specifications Issue 6 specifies that an munmap(2)
must return EINVAL if size is zero.  Submitted by: tegge
 - In order to avoid a race condition in multithreaded applications, the
   check and removal operations by munmap(2) must be in the same critical
   section.  To accommodate this, vm_map_check_protection() is modified to
   require its caller to obtain at least a read lock on the map.
2003-11-10 01:37:40 +00:00
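The zero-size case is likewise visible from userland (a sketch; needs
<sys/mman.h>, <errno.h>, and <assert.h>; addr is any mapped address):

    /* A zero-length munmap(2) now fails with EINVAL. */
    assert(munmap(addr, 0) == -1 && errno == EINVAL);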
Jonathan Mini
8f101a2f31 NFC: Update stale comments.
Reviewed by:	alc
2003-11-10 00:44:00 +00:00
Alan Cox
637315ed9c - Remove Giant from msync(2). Giant is still acquired by the lower layers
if we drop into the pmap or vnode layers.
 - Migrate the handling of zero-length msync(2)s into vm_map_sync() so that
   multithreaded applications can't change the map between implementing the
   zero-length hack in msync(2) and reacquiring the map lock in
   vm_map_sync().

Reviewed by:	tegge
2003-11-09 22:09:04 +00:00
Alan Cox
950f8459d4 - Rename vm_map_clean() to vm_map_sync(). This better reflects the fact
that msync(2) is its only caller.
 - Migrate the parts of the old vm_map_clean() that examined the internals
   of a vm object to a new function vm_object_sync() that is implemented in
   vm_object.c.  At the same, introduce the necessary vm object locking so
   that vm_map_sync() and vm_object_sync() can be called without Giant.

Reviewed by:	tegge
2003-11-09 05:25:35 +00:00
Alan Cox
32a89c324e - Move the implementation of OBJ_ONEMAPPING from vm_map_delete() to
vm_map_entry_delete() so that all of the vm object manipulation is
   performed in one place.
2003-11-05 05:48:22 +00:00
Marcel Moolenaar
199c91ab79 Update avail_ssize for rstacks after growing them. 2003-11-04 06:48:58 +00:00
Dag-Erling Smørgrav
a86fa82659 Whitespace cleanup. 2003-11-03 16:14:45 +00:00
Alan Cox
a89c6258bb - Increase the scope of the source object lock in vm_map_copy_entry(). 2003-11-03 00:59:54 +00:00
Alan Cox
63f6cefcd5 - Increase the scope of two vm object locks in vm_object_split(). 2003-11-02 22:52:42 +00:00
Alan Cox
b921a12b3b - Introduce and use vm_object_reference_locked(). Unlike
vm_object_reference(), this function must not be used to reanimate dead
   vm objects.  This restriction simplifies locking.

Reviewed by:	tegge
2003-11-02 21:30:10 +00:00
Alan Cox
22ec553f77 - Increase the scope of two vm object locks in vm_object_collapse().
- Remove the acquisition and release of Giant from vm_object_coalesce().
2003-11-01 23:06:41 +00:00
Alan Cox
c7c8dd7e80 - Modify swap_pager_copy() and its callers such that the source and
destination objects are locked on entry and exit.  Add comments to
   the callers noting that the locks can be released by swap_pager_copy().
 - Remove several instances of GIANT_REQUIRED.
2003-11-01 08:57:26 +00:00
Alan Cox
de33beddd5 - Additional vm object locking in vm_object_split()
- New vm object locking assertions in vm_page_insert() and
   vm_object_set_writeable_dirty()
2003-11-01 04:54:23 +00:00
Alan Cox
3b9a4cb6a9 - Revert a part of revision 1.73: Make vm_object_set_flag() an inline
function.  This function is so trivial that inlining reduces the size
   of the kernel.
2003-10-31 20:17:00 +00:00
Alan Cox
dc6279b887 - Take advantage of the swap pager locking: Eliminate the use of Giant
from vm_object_madvise().
 - Remove excessive blank lines from vm_object_madvise().
2003-10-31 18:32:03 +00:00
Marcel Moolenaar
08667f6dc1 Fix two bugs introduced with, and specific to, the rstack
functionality:
1. Fix a KASSERT that tests for the address to be above the upward
   growable stack. Typically for rstack, the faulting address can be
   identical to the recorded end of the upward growable entry, and
   very likely is on ia64.  The KASSERT tested for greater than, not
   greater than or equal, so the assertion fired whenever the register
   stack had to be grown.
2. When we grow the upward growable stack entry and adjust the
   underlying object, don't forget to adjust the size of the VM map.
   Not doing so would trigger an assert in vm_map_zdtor().

Pointy hat: marcel (for not testing with INVARIANTS).
2003-10-31 07:29:28 +00:00
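Fix 1 plausibly amounts to relaxing a comparison (variable names
hypothetical):

    /* Before: fired when the faulting address equaled the entry's end. */
    KASSERT(addr > stack_entry->end, ("bad stack growth"));
    /* After: equality is legitimate for an upward growable rstack. */
    KASSERT(addr >= stack_entry->end, ("bad stack growth"));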
Alan Cox
2928cef7e1 - Synchronize access to the swdevt's sw_flags with sw_dev_mtx.
- Remove several instances of GIANT_REQUIRED.
2003-10-31 05:18:45 +00:00
Alan Cox
7645e88596 - Synchronize access to the swdevt's sw_blist with sw_dev_mtx.
- Remove several instances of GIANT_REQUIRED.
2003-10-30 09:12:43 +00:00
Alan Cox
d05bc12976 - Synchronize access to swdevhd using sw_dev_mtx.
- Use swp_sizecheck() rather than assignment to swap_pager_full in
   swaponsomething().
2003-10-30 07:11:06 +00:00
Alan Cox
0676a140b2 - Synchronize updates to nswapdev using sw_dev_mtx. 2003-10-29 07:51:41 +00:00
Alan Cox
2d9974c1e8 - Avoid a race in swaponsomething(): Calculate the new swdevt's first and
end swblk and insert this new swdevt into the list of swap devices
   in the same critical section.
2003-10-29 05:42:28 +00:00
Alan Cox
d536c58f53 - Complete the synchronization of accesses to the swblock hash table. 2003-10-27 05:58:15 +00:00
Alan Cox
7827d9b0fe - Introduce and use a mutex synchronizing access to the swblock hash table. 2003-10-26 19:55:35 +00:00
Alan Cox
43186e53ae - Simplify vm_object_collapse()'s collapse case, reducing the number
of lock acquires and releases performed.
 - Move an assertion from vm_object_collapse() to vm_object_zdtor()
   because it applies to all cases of object destruction.
2003-10-26 06:29:26 +00:00
Alan Cox
ee3dc7d7fe - Add some of the required vm object locking, including assertions where
the vm object lock is required and already held.
2003-10-25 23:42:17 +00:00
Alan Cox
93dbd07122 - Align a comment within struct vm_page.
- Annotate the vm_page's valid field as synchronized by the containing
   vm object's lock.
2003-10-25 18:33:04 +00:00
Alan Cox
52051abcf1 - Call vnode_pager_input_old() with the vm object locked. 2003-10-25 05:21:16 +00:00
Alan Cox
2e3b314d3a - Push down Giant from vm_pageout() to vm_pageout_scan(), freeing
vm_pageout_page_stats() from Giant.
 - Modify vm_pager_put_pages() and vm_pager_page_unswapped() to expect the
   vm object to be locked on entry.  (All of the pager routines now expect
   this.)
2003-10-24 06:43:04 +00:00
Alan Cox
ab42316c2f - Retire vm_pageout_page_free(). Instead, use vm_page_select_cache() from
vm_pageout_scan().  Rationale: I don't like leaving a busy page in the
   cache queue with neither the vm object nor the vm page queues lock held.
 - Assert that the page is active in vm_pageout_page_stats().
2003-10-22 18:41:32 +00:00
Alan Cox
d3c09dd7db - Assert that every page found in the active queue is an active page. 2003-10-22 03:08:24 +00:00
Alan Cox
0d42c05ff4 - Assert that the containing vm object is locked in
vm_page_set_validclean().  (This function reads and modifies the
   vm page's valid field, which is synchronized by the lock on the
   containing vm object.)
2003-10-21 19:36:51 +00:00
Alan Cox
fee181a696 - Remove some long unused code. 2003-10-20 18:57:01 +00:00
Alan Cox
3ad8097fd4 - Remove comments referring to functions that no longer exist. 2003-10-20 05:16:27 +00:00
Alan Cox
2bf43e4374 - Hold the vm object's lock around calls to vm_page_set_validclean(). 2003-10-20 04:05:24 +00:00
Alan Cox
1b26eb10ff - Synchronize access to a vm page's valid field using the containing
vm object's lock.
 - Reduce the scope of the vm page queues lock in two places.
2003-10-19 00:01:56 +00:00
Alan Cox
8b575f6c28 - Synchronize access to the page's valid field in
vnode_pager_generic_getpages() using the containing object's lock.
2003-10-18 21:30:29 +00:00
Alan Cox
7a93508274 - Increase the object lock's scope in vm_contig_launder() so that access
to the object's type field and the call to vm_pageout_flush() are
   synchronized.
 - The above change allows for the elimination of the last parameter
   to vm_pageout_flush().
 - Synchronize access to the page's valid field in vm_pageout_flush()
   using the containing object's lock.
2003-10-18 21:09:21 +00:00
Alan Cox
cbef13d877 Corrections to revision 1.305
- Specifying VM_MAP_WIRE_HOLESOK should not assume that the start
   address is the beginning of the map.  Instead, move to the first
   entry after the start address.
 - The implementation of VM_MAP_WIRE_HOLESOK was incomplete.  This
   caused the failure of mlockall(2) in some circumstances.
2003-10-18 18:48:17 +00:00
Poul-Henning Kamp
2c18019f14 DuH!
bp->b_iooffset (the spot on the disk), not bp->b_offset (the offset in
the file)
2003-10-18 14:10:28 +00:00
Poul-Henning Kamp
9fbf91c0dd Initialize bp->b_offset before calling VOP_[SPEC]STRATEGY().
Remove stale comment about B_PHYS.
2003-10-18 11:11:05 +00:00
Alan Cox
6989c456b3 - Synchronize access to a vm page's valid field using the containing
vm object's lock.
 - Release the vm object and vm page queues locks around vput().
2003-10-17 05:07:17 +00:00
Alan Cox
c5b65a6723 - vm_fault_copy_entry() should not assume that the source object contains
every page.  If the source entry was read-only, one or more wired pages
   could be in backing objects.
 - vm_fault_copy_entry() should not set the PG_WRITEABLE flag on the page
   unless the destination entry is, in fact, writeable.
2003-10-15 08:00:45 +00:00
Alan Cox
8afcf0cc36 Lock the destination object in vm_fault_copy_entry(). 2003-10-08 07:11:19 +00:00
Alan Cox
669890eaeb Retire vm_page_copy(). Its reason for being ended when peter@ modified
pmap_copy_page() et al. to accept a vm_page_t rather than a physical
address.  Also, this change will facilitate locking access to the vm page's
valid field.
2003-10-08 05:35:12 +00:00
Bruce M Simpson
11f7ddc563 Only the super-user should be able to wire pages via the mlock() family
of system calls at this time.  Remove various #ifdef's to enforce this.
2003-10-06 01:59:04 +00:00
Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
Alan Cox
9aa3d17d37 The addition of a locking assertion to vm_page_zero_invalid() has revealed
a long-time bug: vm_pager_get_pages() assumes that m[reqpage] contains a
valid page upon return from pgo_getpages().  In the case of the device
pager this page has been freed and replaced by a fake page.  The fake page
is properly inserted into the vm object but m[reqpage] is left pointing
to a freed page.  For now, update m[reqpage] to point to the fake page.

Submitted by:	tegge
2003-10-05 22:23:44 +00:00
Bruce M Simpson
5d264f84f3 Revert previous commit. Come back vslock(), all is forgiven.
Pointy hat to:	bms
2003-10-05 12:41:08 +00:00
Bruce M Simpson
aac7652ecd Retire vslock() and vsunlock() with extreme prejudice.
Discussed with:	pete
2003-10-05 09:47:54 +00:00
Alan Cox
5a3970febf Assert that the containing vm object's lock is held in
vm_page_set_invalid().
2003-10-05 06:58:07 +00:00