1506 Commits

Author SHA1 Message Date
phk
0552c4bec8 Convert VOP_STRATEGY to VOP_SPECSTRATEGY in the generic getpages and
the pager input for small filesystems.
2003-01-05 20:32:03 +00:00
alc
e9e799ad84 Use atomic add and subtract to update the global wired page count,
cnt.v_wire_count.
2003-01-05 01:31:45 +00:00
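
A minimal userland sketch of why the atomic update matters, using C11
atomics as a stand-in for the kernel's atomic_add_int() and
atomic_subtract_int(); the names below are illustrative, not the
kernel's:

    #include <stdatomic.h>

    static atomic_uint v_wire_count;            /* stand-in for cnt.v_wire_count */

    static void
    wire_page(void)
    {
            /*
             * A plain v_wire_count++ is a read-modify-write that can
             * lose updates on SMP; the atomic op cannot.
             */
            atomic_fetch_add_explicit(&v_wire_count, 1, memory_order_relaxed);
    }

    static void
    unwire_page(void)
    {
            atomic_fetch_sub_explicit(&v_wire_count, 1, memory_order_relaxed);
    }
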
phk
131885aa2f Temporarily introduce a new VOP_SPECSTRATEGY operation while I try
to sort out disk-io from file-io in the vm/buffer/filesystem space.

The intent is to sort VOP_STRATEGY calls into those which operate
on "real" vnodes and those which operate on VCHR vnodes.  For
the latter kind, the call will be changed to VOP_SPECSTRATEGY,
possibly conditionally for those places where dual-use happens.

Add a default VOP_SPECSTRATEGY method which will call the normal
VOP_STRATEGY.  The first time it is called, it will print debugging
information.  This will only happen if a normal vnode is passed
to VOP_SPECSTRATEGY by mistake.

Add a real VOP_SPECSTRATEGY in specfs, which does what VOP_STRATEGY
does on a VCHR vnode today.

Add a new VOP_STRATEGY method in specfs to catch instances where
the conversion to VOP_SPECSTRATEGY has not yet happened.  Handle
the request just like we always did, but the first time it is called,
print debugging information.

Apart from up to two instances of console messages per boot, this
amounts to a glorified no-op commit.

If you get any of the messages on your console I would very much
like a copy of them mailed to phk@freebsd.org
2003-01-04 22:10:36 +00:00
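
A minimal sketch of the fallback pattern this commit describes, with
hypothetical names standing in for the real vnode_if glue: the new
entry point warns once if reached by mistake, then falls through to
the old one.

    #include <stdio.h>

    struct buf;                                 /* opaque I/O request, illustrative */

    static int
    vop_strategy(struct buf *bp)
    {
            (void)bp;                           /* existing I/O path elided */
            return (0);
    }

    /*
     * Default VOP_SPECSTRATEGY: only reached if a normal vnode is
     * passed by mistake; print a diagnostic once, then behave exactly
     * like VOP_STRATEGY.
     */
    static int
    vop_specstrategy_default(struct buf *bp)
    {
            static int warned;

            if (!warned) {
                    warned = 1;
                    fprintf(stderr, "VOP_SPECSTRATEGY on non-VCHR vnode; "
                        "please mail this message to phk@freebsd.org\n");
            }
            return (vop_strategy(bp));
    }
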
alc
6412b92ef7 Allow kmem_malloc() without Giant if M_NOWAIT is specified. 2003-01-04 19:26:35 +00:00
alc
d34fb3af16 Use vm_object_lock() and vm_object_unlock() in vm_object_deallocate().
(This procedure needs further work, but this change is sufficient for
locking the kmem_object.)
2003-01-04 19:23:19 +00:00
alc
d5b9d0271f Refine the assertions in vm_page_alloc(). 2003-01-04 19:07:13 +00:00
alc
bb9d453ab0 Refine the assertion in vm_object_clear_flag() to allow operation on the
kmem_object without Giant.  In that case, assert that the kmem_object's
mutex is held.
2003-01-03 19:19:08 +00:00
phk
997c254192 Revert use of dmmax_mask; I had overlooked a '~'.
Spotted by:	bde
2003-01-03 19:16:48 +00:00
phk
b884c995db Make struct swblock kernel-only, to make vm/swap_pager.h userland-includable.
Move struct swdevt from sys/conf.h to the more appropriate vm/swap_pager.h.
Adjust #include use in libkvm and pstat(8) to match.
2003-01-03 16:23:12 +00:00
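
The usual pattern for making a header userland-includable, sketched
here with illustrative fields rather than the real layouts:

    /* vm/swap_pager.h (sketch) */
    struct swdevt {                             /* visible to libkvm and pstat(8) */
            int     sw_flags;
            int     sw_nblks;
    };

    #ifdef _KERNEL
    struct swblock {                            /* kernel-only; fields illustrative */
            struct swblock  *swb_hnext;
            int             swb_count;
    };
    #endif  /* _KERNEL */
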
phk
88bd74d184 Avoid extern decls in .c files by putting them in the vm/swap_pager.h
include file where they belong.
Share the dmmax_mask variable.
2003-01-03 14:30:46 +00:00
phk
2f7baffed6 Use correct _VM_SWAP_PAGER_H_ to check for multiple inclusion. 2003-01-03 14:22:52 +00:00
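
For reference, the corrected guard spelling, matching the file it
protects:

    #ifndef _VM_SWAP_PAGER_H_
    #define _VM_SWAP_PAGER_H_

    /* ... declarations ... */

    #endif  /* !_VM_SWAP_PAGER_H_ */
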
phk
83581fdc02 Retire sys/dmap.h by including the two lines of it which matter
directly in vm/vm_swap.c.
2003-01-03 09:55:05 +00:00
alc
2db78e226f Lock the vm object when performing vm_object_clear_flag(). 2003-01-03 09:15:43 +00:00
phk
daf6948653 Convert calls to BUF_STRATEGY to VOP_STRATEGY calls. This is a no-op since
all BUF_STRATEGY did in the first place was call VOP_STRATEGY.
2003-01-03 06:32:15 +00:00
alc
ed9a027a4a Add vm map and vm object locking to vmtotal(). 2003-01-03 05:52:02 +00:00
alc
73597ffcc2 Lock the vm object when performing vm_object_clear_flag(). 2003-01-02 09:09:27 +00:00
alc
0912d72a88 Update the assertions in vm_page_insert() and vm_page_lookup() to reflect
locking of the kmem_object.
2003-01-01 19:45:36 +00:00
schweikh
d3367c5f5d Correct typos, mostly s/ a / an / where appropriate. Some whitespace cleanup,
especially in troff files.
2003-01-01 18:49:04 +00:00
alc
c28c69ebf9 Add a needed #include.
Reported by:	ia64 tinderbox
2003-01-01 00:13:01 +00:00
alc
1842adc340 Implement a variant locking scheme for vm maps: Access to system maps
is now synchronized by a mutex, whereas access to user maps is still
synchronized by a lockmgr()-based lock.  Why?  No single type of lock,
including sx locks, meets the requirements of both types of vm map.
Sometimes we sleep while holding the lock on a user map.  Thus, a
mutex isn't appropriate.  On the other hand, both lockmgr()-based
and sx locks release Giant when a thread/process blocks during
contention for a lock.  This could lead to a race condition in a legacy
driver (that relies on Giant for synchronization) if it attempts to
kmem_malloc() and fails to immediately obtain the lock.  Fortunately,
we never sleep while holding a system map lock.
2002-12-31 19:38:04 +00:00
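
A rough userland model of the variant scheme, with pthread locks
standing in for the kernel's mutex and lockmgr() lock; the per-map
flag dispatch is the point here, not the lock types, which behave
quite differently in the kernel:

    #include <pthread.h>

    struct vm_map {
            int             system_map;         /* set once at creation */
            pthread_mutex_t system_mtx;         /* stand-in for struct mtx */
            pthread_mutex_t user_lock;          /* stand-in for lockmgr() lock */
    };

    static void
    vm_map_lock(struct vm_map *map)
    {
            if (map->system_map)
                    pthread_mutex_lock(&map->system_mtx);   /* never held across a sleep */
            else
                    pthread_mutex_lock(&map->user_lock);    /* may be held across sleeps */
    }

    static void
    vm_map_unlock(struct vm_map *map)
    {
            if (map->system_map)
                    pthread_mutex_unlock(&map->system_mtx);
            else
                    pthread_mutex_unlock(&map->user_lock);
    }
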
alc
71afac1c55 - Mark the kernel_map as a system map immediately after its creation.
- Correct a cast.
2002-12-30 05:55:41 +00:00
alc
bc746e81d2 - Increment the vm_map's timestamp if _vm_map_trylock() succeeds.
- Introduce map_sleep_mtx and use it to replace Giant in
   vm_map_unlock_and_wait() and vm_map_wakeup().  (Original
   version by: tegge.)
2002-12-30 00:41:33 +00:00
alc
18e9372579 - Remove vm_object_init2(). It is unused.
- Add a mtx_destroy() to vm_object_collapse().  (This allows a bzero()
   to migrate from _vm_object_allocate() to vm_object_zinit(), where it
   will be performed less often.)
2002-12-29 21:01:14 +00:00
alc
3f894298eb Reduce the number of times that we acquire and release the page queues
lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.
2002-12-29 07:17:06 +00:00
alc
e053499669 Assert that the page queues lock rather than Giant is held in
vm_page_flag_clear().
2002-12-28 22:49:37 +00:00
dillon
0fe137bfcf vm_pager_put_pages() takes VM_PAGER_* flags, not OBJPC_* flags. It just
so happens that OBJPC_SYNC has the same value as VM_PAGER_PUT_SYNC so no
harm done.  But fix it :-)

No operational changes.

MFC after:	1 day
2002-12-28 21:15:39 +00:00
dillon
1f367998c6 Allow the VM object flushing code to cluster. Previously, when the
filesystem syncer came along and flushed a file which had been mmap()'d
SHARED/RW with dirty pages, it flushed the underlying VM object
asynchronously, resulting in thousands of 8K writes.  With this change
the VM object flushing code will cluster dirty pages in 64K blocks.

Note that until the low memory deadlock issue is reviewed, it is not safe
to allow the pageout daemon to use this feature.  Forced pageouts still
use fs block size'd ops for the moment.

MFC after:	3 days
2002-12-28 21:03:42 +00:00
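
An illustrative sketch of the clustering idea: coalesce sorted dirty
page indices into contiguous runs capped at 64K and issue one write
per run, instead of one block-sized write per dirty page (all names
hypothetical):

    #include <stdio.h>

    #define PAGE_SIZE       4096
    #define CLUSTER_PAGES   (64 * 1024 / PAGE_SIZE)     /* 16 pages */

    static void
    flush_clustered(const int *dirty, int n)
    {
            int i, start, len;

            for (i = 0; i < n; i += len) {
                    start = dirty[i];
                    len = 1;
                    while (i + len < n && dirty[i + len] == start + len &&
                        len < CLUSTER_PAGES)
                            len++;
                    printf("write pages %d..%d (%d bytes)\n",
                        start, start + len - 1, len * PAGE_SIZE);
            }
    }
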
alc
98df99a71b Two changes to kmem_malloc():
- Use VM_ALLOC_WIRED.
 - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
2002-12-28 19:03:54 +00:00
alc
9a94897a13 - Change vm_object_page_collect_flush() to assert rather than
acquire the page queues lock.
 - Acquire the page queues lock in vm_object_page_clean().
2002-12-27 20:16:13 +00:00
alc
b9ffe85ed2 Increase the scope of the page queues lock in phys_pager_getpages(). 2002-12-27 06:09:56 +00:00
alc
43045f3d3b - Hold the page queues lock around calls to vm_page_flag_clear(). 2002-12-24 19:02:03 +00:00
alc
f47bf7602e - Hold the page queues lock around vm_page_wakeup(). 2002-12-24 04:24:58 +00:00
alc
8b4847e64e - Hold the kernel_object's lock around vm_page_insert(..., kernel_object,
...).
2002-12-23 20:39:15 +00:00
alc
c5121aa082 Eliminate some dead code. (Any possible use for this code died with
vm/vm_page.c revision 1.220.)

Submitted by:	bde
2002-12-23 04:35:38 +00:00
dillon
2afef11c57 The UP -current was not properly counting the per-cpu VM stats in the
sysctl code.  This makes 'systat -vm 1's syscall count work again.

Submitted by:	Michal Mertl <mime@traveller.cz>
Note:		also slated for 5.0
2002-12-22 05:04:30 +00:00
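
A sketch of the kind of fix involved: the sysctl path must sum the
per-cpu counters rather than read only one of them (structure and
names illustrative):

    #define MAXCPU  4                           /* illustrative */

    struct vmmeter {
            long    v_syscall;                  /* one of many counters */
    };

    static struct vmmeter cpu_stats[MAXCPU];

    static long
    total_syscalls(void)
    {
            long total = 0;
            int i;

            for (i = 0; i < MAXCPU; i++)
                    total += cpu_stats[i].v_syscall;
            return (total);
    }
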
alc
cf1d112d8e Increase the scope of the kmem_object locking in kmem_malloc(). 2002-12-20 18:59:23 +00:00
alc
380a78911f Add a mutex to struct vm_object. Initialize and destroy that mutex
at appropriate times.  For the moment, the mutex is only used on
the kmem_object.
2002-12-20 05:10:32 +00:00
alc
5b0b01cd7c Remove the hash_rand field from struct vm_object. As of revision 1.215 of
vm/vm_page.c, it is unused.
2002-12-19 20:01:22 +00:00
alc
6bff4b9a47 - Remove vm_page_sleep_busy(). The transition to vm_page_sleep_if_busy(),
which incorporates page queue and field locking, is complete.
 - Assert that the page queue lock rather than Giant is held in
   vm_page_flag_set().
2002-12-19 07:23:46 +00:00
alc
3f31cbe67c - Hold the page queues lock when performing vm_page_busy() or
vm_page_flag_set().
 - Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-19 01:20:24 +00:00
alc
9fcc701ba3 - Hold the page queues lock when performing vm_page_busy().
- Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-18 04:39:15 +00:00
alc
a0ec4e5670 Hold the page queues lock when performing vm_page_flag_set(). 2002-12-18 04:02:02 +00:00
alc
09d11f3af3 Hold the page queues lock when performing vm_page_flag_set(). 2002-12-17 19:55:28 +00:00
dillon
be3db49c80 Change the way ELF coredumps are handled. Instead of unconditionally
skipping read-only pages, which can result in valuable non-text-related
data not getting dumped, the ELF loader and the dynamic loader now mark
read-only text pages NOCORE and the coredump code only checks (primarily) for
complete inaccessibility of the page or NOCORE being set.

Certain applications which map large amounts of read-only data will
produce much larger cores.  A new sysctl has been added,
debug.elf_legacy_coredump, which will revert to the old behavior.

This commit represents collaborative work by all parties involved.
The PR contains a program demonstrating the problem.

PR:		kern/45994
Submitted by:	"Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
Reviewed by:	jdp, dillon
MFC after:	7 days
2002-12-16 19:24:43 +00:00
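
A schematic of the new dump decision, with hypothetical flag names;
the real test lives in the ELF coredump code:

    #include <stdbool.h>

    #define PROT_NONE       0x0                 /* illustrative values */
    #define MAP_NOCORE      0x1

    static int elf_legacy_coredump;             /* debug.elf_legacy_coredump knob */

    static bool
    should_dump(int prot, int flags, bool writable)
    {
            if (elf_legacy_coredump)
                    return (writable);          /* old rule: skip all read-only pages */
            /* New rule: skip only inaccessible or NOCORE-marked pages. */
            if (prot == PROT_NONE || (flags & MAP_NOCORE))
                    return (false);
            return (true);
    }
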
alc
fe836ef10b Perform vm_object_lock() and vm_object_unlock() on kmem_object
around vm_page_lookup() and vm_page_free().
2002-12-15 21:09:09 +00:00
dillon
b43fb3e920 This is David Schultz's swapoff code which I am finally able to commit.
This should be considered highly experimental for the moment.

Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
MFC after:	3 weeks
2002-12-15 19:17:57 +00:00
dillon
2925e70a14 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation we clean up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
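
A minimal model of the two-counter scheme; the counter names follow
the commit, but the code itself is illustrative and locking is
omitted:

    #include <stdlib.h>

    struct vmspace {
            int     vm_refcnt;
            int     vm_exitingcnt;
            /* ... */
    };

    static void
    vmspace_exit(struct vmspace *vm)            /* from exit1() */
    {
            vm->vm_refcnt--;
            vm->vm_exitingcnt++;                /* reaping still keeps it alive */
    }

    static void
    vmspace_release(struct vmspace *vm)         /* e.g. swapout drops its ref */
    {
            if (--vm->vm_refcnt == 0 && vm->vm_exitingcnt == 0)
                    free(vm);
    }

    static void
    vmspace_reap(struct vmspace *vm)            /* when the process is reaped */
    {
            if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
                    free(vm);
    }
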
alc
7c1f8a0482 As per the comments, vm_object_page_remove() now expects its caller to lock
the object (i.e., acquire Giant).
2002-12-15 07:30:51 +00:00
alc
bbbd1418d8 Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 07:16:51 +00:00
alc
6be448f264 Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 05:41:56 +00:00