Commit Graph

1042 Commits

Author SHA1 Message Date
phk
ff20b1e15d Introduce swopen to prevent blockdevice opens and insist on minor==0. 1999-10-04 13:09:30 +00:00
phk
1c8b44f4ce Give the swap device a D_DISK flag against my better judgement.
TODO: add an open routine which fails for bdev opens.
1999-10-04 12:27:58 +00:00
dt
b99bff68d4 Plug an accounting leak: count pages in ZONE_INTERRUPT zones as wired. 1999-09-30 07:35:50 +00:00
phk
e9e0512210 Remove five now unused fields from struct cdevsw. They should never
have been there in the first place.  A GENERIC kernel shrinks almost 1k.

Add a slightly different safety belt under nostop for tty drivers.

Add some missing FreeBSD tags
1999-09-25 18:24:47 +00:00
dillon
37bee3bb3f cleanup madvise code, add a few more sanity checks.
Reviewed by:	Alan Cox <alc@cs.rice.edu>,  dg@root.com
1999-09-21 05:00:48 +00:00
dillon
493548f6b6 Final commit to remove vnode->v_lastr. vm_fault now handles read
clustering issues (replacing code that used to be in
    ufs/ufs/ufs_readwrite.c).  vm_fault also now uses the new VM page counter
    inlines.

    This completes the changeover from vnode->v_lastr to vm_entry_t->v_lastr
    for VM, and fp->f_nextread and fp->f_seqcount (which have been in the
    tree for a while).  Determination of the I/O strategy (sequential, random,
    and so forth) is now handled on a descriptor-by-descriptor basis for
    base I/O calls, and on a memory-region-by-memory-region and
    process-by-process basis for VM faults.

Reviewed by:	David Greenman <dg@root.com>, Alan Cox <alc@cs.rice.edu>
1999-09-21 00:36:16 +00:00
dillon
e8761b4e05 Fix bug in pipe code relating to writes of mmap'd but illegal address
spaces which cross a segment boundary in the page table.  pmap_kextract()
    is not designed for access to the user space portion of the page
    table and cannot handle the null-page-directory-entry case.

    The fix is to have vm_fault_quick() return a success or failure which
    is then used to avoid calling pmap_kextract().
1999-09-20 19:08:48 +00:00
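
A minimal sketch of the pattern this fix describes, not the actual pipe-code diff; uva_to_phys() and its error convention are hypothetical:

    /*
     * Hypothetical helper: fault the user page in with vm_fault_quick()
     * and only call pmap_kextract() once that succeeds, since
     * pmap_kextract() cannot handle a missing page-directory entry in
     * the user part of the page table.
     */
    static int
    uva_to_phys(caddr_t uaddr, vm_offset_t *pap)
    {
            if (vm_fault_quick(uaddr, VM_PROT_READ) < 0)
                    return (EFAULT);        /* could not fault the page in */
            *pap = pmap_kextract((vm_offset_t)uaddr);
            return (0);
    }
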
dillon
4c1e280de7 Remove inappropriate VOP_FSYNC from vm_object_page_clean(). The fsync
syncs the entire underlying file rather than just the requested range,
    resulting in huge inefficiencies when the VM system is articulated in
    a certain way.  The VOP_FSYNC was also found to massively reduce NFS
    performance in certain cases.

    Change MADV_DONTNEED and MADV_FREE to call vm_page_dontneed() instead
    of vm_page_deactivate().  Using vm_page_deactivate() causes all
    inactive and cache pages to be recycled before the dontneed/free page
    is recycled, effectively flushing our entire VM inactive & cache
    queues continuously even if only a few pages are being actively MADV
    free'd and reused (such as occurs with a sequential scan of a
    memory-mapped file).

Reviewed by:	Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
1999-09-17 05:48:36 +00:00
dillon
d1666d7f52 Add 'lastr' field to vm_map_entry in preparation for its removal
from the vnode.  (The changeover is undergoing final testing and
    will be committed soon).

Reviewed by:	Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
1999-09-17 05:40:17 +00:00
dillon
124cdf5346 The vnode pager (used when you do file-backed mmaps) must use the
underlying physical sector size when aligning I/O transfer sizes.
    It cannot assume 512 bytes.

    We assume the underlying sector size is a power of 2.  If it isn't,
    mmap() will break badly anyway (in the same way mmap broke with NFS
    when NFS tried to cache piecemeal write ranges in buffers, before
    we enforced read-buffer-before-write-piecemeal for NFS).

Reviewed by:	Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
1999-09-17 05:17:59 +00:00
dillon
7a0052d268 Fix a number of spl bugs related to reserving and freeing swap space.
Swap space can be freed from an interrupt and so swap reservation and
    freeing must occur at splvm.

    Add swap_pager_reserve() code to support a new swap pre-reservation
    capability for the VN device.

    Generally cleanup the swap code by simplifying the swp_pager_meta_build()
    static function and consolidating the SWAPBLK_NONE test from a bit test
    to an absolute compare.  The bit test was left over from a rejected
    swap allocation scheme that was not ultimately committed.  A few other
    minor cleanups were also made.

    Reorganize the swap strategy code, again for VN support, to not
    reallocate swap when writing as this messes up pre-reservation and
    can fragment I/O unnecessarily as VN-based disk is messed around with.

Reviewed by:	Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
1999-09-17 05:09:24 +00:00
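
A small fragment sketching the interlock this commit describes (illustrative only; swp_pager_freeswapspace() and the variable names are assumptions): swap bookkeeping is bracketed with splvm() so an interrupt-time free cannot run in the middle of an update.

    int s;

    s = splvm();                            /* block interrupt-level swap frees */
    swp_pager_freeswapspace(blk, npages);   /* assumed internal helper name */
    splx(s);                                /* restore the previous level */
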
dillon
6c2a557edd Add required BUF_KERNPROC to flushchainbuf() to disassociate the
current process from the exclusive lock prior to initiating I/O.

    This fixes a panic related to swap-backed VN disks

Reviewed by:	Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
1999-09-17 05:03:27 +00:00
dillon
4cb1921c9b Reviewed by: Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
Replace various VM related page count calculations strewn over the
    VM code with inlines to aid in readability and to reduce fragility
    in the code where modules depend on the same test being performed
    to properly sleep and wakeup.

    Split out a portion of the page deactivation code into an inline
    in vm_page.c to support vm_page_dontneed().

    add vm_page_dontneed(), which handles the madvise MADV_DONTNEED
    feature in a related commit coming up for vm_map.c/vm_object.c.  This
    code prevents degenerate cases where an essentially active page may
    be rotated through a subset of the paging lists, resulting in premature
    disposal.
1999-09-17 04:56:40 +00:00
peter
3b842d34e8 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
phk
591c94d4c6 Simplify the handling of VCHR and VBLK vnodes using the new dev_t:
Make the alias list a SLIST.

        Drop the "fast recycling" optimization of vnodes (including
        the returning of a preexisting but stale vnode from checkalias).
        It doesn't buy us anything now that we don't hardlimit
        vnodes anymore.

        Rename checkalias2() and checkalias() to addalias() and
        addaliasu() - which takes dev_t and udev_t arg respectively.

        Make the revoke syscalls use vcount() instead of VALIASED.

        Remove VALIASED flag, we don't need it now and it is faster
        to traverse the much shorter lists than to maintain the
        flag.

        vfs_mountedon() can check the dev_t directly, all the vnodes
        point to the same one.

Print the devicename in specfs/vprint().

Remove a couple of stale LFS vnode flags.

Remove unimplemented/unused LK_DRAINED;
1999-08-26 14:53:31 +00:00
green
a788068fed When the SYSINIT() was removed, it was replaced with a make_dev on-demand
creation of /dev/drum via calling swapon. However, the make_dev has a
bogus (insofar that it hasn't been added yet) cdevsw, so later we end
up crashing with a null pointer dereference on the swap vp's specinfo.
The specinfo points to a dev_t with a major of 254 (uninitialized), and
we get a crash on its d_strategy being called.

The simple solution to this is to call cdevsw_add before the make_dev
is ever used. This fixes the panic which occurred upon swapping.
1999-08-24 05:58:35 +00:00
bde
aff861d544 Use devtoname to print dev_t's instead of casting them to u_long for
misprinting with %lx.

Cast pointers to intptr_t instead of casting them to long.  Cosmetic.
1999-08-23 23:55:03 +00:00
phk
663cbe4fc2 Convert DEVFS hooks in (most) drivers to make_dev().
Diskslice/label code not yet handled.

Vinum, i4b, alpha, pc98 not dealt with (left to respective Maintainers)

Add the correct hook for devfs to kern_conf.c

The net result of this exercise is that a lot fewer files depend on DEVFS,
and devtoname() gets more sensible output in many cases.

A few drivers had minor additional cleanups performed relating to cdevsw
registration.

A few drivers don't register a cdevsw{} anymore, but only use make_dev().
1999-08-23 20:59:21 +00:00
alc
42583848d0 Correct the inconsistent formatting in struct vm_map.
Addendum to rev 1.47: submitted by dillon.
1999-08-23 18:16:05 +00:00
alc
980da3c115 struct vm_map:
The lock structure cannot be the first element of the vm_map
	because this can result in livelock between two or more system
	processes trying to kmem_alloc_wait.
1999-08-23 18:08:34 +00:00
alc
4039114e6f Remove two unused variable declarations. 1999-08-22 00:01:46 +00:00
alc
e09a564be4 vm_page_alloc and contigmalloc1:
Verify that free pages are not dirty.

Submitted by:	dillon
1999-08-20 06:32:00 +00:00
peter
9ad9e224d9 Update for run queue code. 1999-08-19 00:15:27 +00:00
mjacob
c3587e5dcb Fix breakage - an extra brace got inserted where DIAGNOSTIC was defined
but MAP_LOCK_DIAGNOSTIC wasn't.
1999-08-18 03:56:57 +00:00
green
30c109da4d Unbreak the nfs KLD_MODULE. It needs a bit more of vm_page.h than was
exported (notably vm_page_undirty()). Also, let vm_page_dirty() work
in a KLD.
1999-08-17 22:48:10 +00:00
alc
eacdecd413 vm_page_free_toq:
Update the comment to reflect the demise of PQ_ZERO and
	remove a (now) useless test.
1999-08-17 18:09:01 +00:00
alc
fb63bd8ded Correct an accidental omission of one "vm_page_undirty" replacement
from the previous commit.
1999-08-17 05:56:00 +00:00
alc
d64b069518 vm_page_free_toq:
Clear the dirty bit mask (vm_page_undirty) before adding the page
	to the free page queue.

Submitted by:	dillon
1999-08-17 05:08:39 +00:00
alc
075745f2e2 Add the (inline) function vm_page_undirty for clearing the dirty bitmask
of a vm_page.

Use it.

Submitted by:	dillon
1999-08-17 04:02:34 +00:00
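
A plausible shape for the new inline (hedged sketch; the committed definition lives in vm_page.h and may differ in detail):

    static __inline void
    vm_page_undirty(vm_page_t m)
    {
            m->dirty = 0;           /* clear the whole per-page dirty bitmask */
    }
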
alc
33f15d9b59 vm_pageout_clean:
Remove dead code.

Submitted by:	dillon
1999-08-17 00:07:35 +00:00
alc
708e372c9a vm_map_lock*:
Remove semicolons or add "do { } while (0)" as necessary
	to enable the use of these macros in arbitrary statements.
	(There are no functional changes.)

Submitted by:	dillon
1999-08-16 18:21:09 +00:00
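
The idiom being applied, shown generically rather than as the literal vm_map.h text: wrapping a multi-statement macro in do { ... } while (0) makes it behave like a single statement, so it can follow an if without a dangling else or a stray semicolon breaking compilation.

    #define vm_map_lock(map)                                            \
            do {                                                        \
                    lockmgr(&(map)->lock, LK_EXCLUSIVE, NULL, curproc); \
                    (map)->timestamp++;                                 \
            } while (0)

    /* Now "if (cond) vm_map_lock(map); else ..." parses as intended. */
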
alc
a244527220 Remove the declarations for "vm_map_t io_map". It's been unused
since i386/i386/machdep rev 1.310, i.e., the demise of BOUNCE_BUFFERS.
1999-08-15 23:55:46 +00:00
alc
d7ab53a49a Remove the declarations for "vm_map_t u_map". It's been unused
since i386/i386/pmap rev 1.190.  (The alpha never used it.)
1999-08-15 21:55:20 +00:00
alc
816a1752a0 contigmalloc1 (currently) depends on PQ_FREE and PQ_CACHE not being 0
to tell a valid "struct vm_page" from an invalid one in the vm_page_array.
This isn't a very robust method.
1999-08-15 05:36:43 +00:00
mjacob
9c963d1884 Add back in old definitions if we're compiling for alpha. 1999-08-15 01:16:53 +00:00
alc
70b651dfc3 Don't create a "struct vpgqueues" for PQ_NONE. 1999-08-14 06:25:54 +00:00
alc
7fd2f69ac5 vm_map_madvise:
A complete rewrite by dillon and myself to separate
	the implementation of behaviors that affect the vm_map_entry
	from those that affect the vm_object.

	A result of this change is that madvise(..., MADV_FREE);
	is much cheaper.
1999-08-13 17:45:34 +00:00
phk
7b7ae40370 The bdevsw() and cdevsw() are now identical, so kill the former. 1999-08-13 10:29:38 +00:00
alc
17f88167e4 Make the default page coloring parameters match a (non-Xeon) Pentium II/III.
This setting is also acceptable for Celerons and Pentium Pros
with less than 1MB L2 caches.

Note: PQ_L2_SIZE is a misnomer.  The correct number of colors is
a function of the cache's degree of associativity as well as its size.

Submitted by:	bde and alc
1999-08-12 21:16:53 +00:00
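
For context on the note about associativity (a worked example, not text from the commit): the usable number of colors is the number of page-sized sets in the cache, i.e. cache size divided by (associativity * PAGE_SIZE).

    /* colors = cache_size / (associativity * PAGE_SIZE)
     * 512KB, 4-way, 4KB pages:   524288 / (4 * 4096) = 32 colors
     * 512KB, direct-mapped:      524288 / (1 * 4096) = 128 colors */
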
alc
0740e2b138 vm_object_madvise:
Update the comments to match the implementation.

Submitted by:	dillon
1999-08-12 08:22:57 +00:00
alc
e3c495d003 vm_object_madvise:
Support MADV_DONTNEED and MADV_WILLNEED on object types
	besides OBJT_DEFAULT and OBJT_SWAP.

Submitted by:	dillon
1999-08-12 06:33:56 +00:00
alc
157bb2131d contigmalloc1:
If a page is found in the wrong queue, panic instead
	of silently ignoring the problem.
1999-08-11 05:12:00 +00:00
peter
e657118d0b Add a contigfree() as a corollary to contigmalloc() as it's not clear
which free routine to use and people are tempted to use free() (which
doesn't work)
1999-08-10 22:21:13 +00:00
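
A hedged usage sketch of the pair (the argument list shown follows the later documented contigmalloc(9) interface and may not match the 1999 prototype exactly):

    void *buf;

    /* 16KB physically contiguous, below 16MB, not crossing a 64KB boundary. */
    buf = contigmalloc(16384, M_DEVBUF, M_NOWAIT,
        0ul,                /* low */
        0xfffffful,         /* high */
        PAGE_SIZE,          /* alignment */
        0x10000ul);         /* boundary */
    if (buf != NULL)
            contigfree(buf, 16384, M_DEVBUF);   /* not free(buf, M_DEVBUF) */
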
alc
0524a59178 vm_map_madvise:
Now that behaviors are stored in the vm_map_entry rather than
	the vm_object, it's no longer necessary to instantiate a vm_object
	just to hold the behavior.

Reviewed by:	dillon
1999-08-10 04:50:20 +00:00
phk
ee871b6440 Merge the cons.c and cons.h to the best of my ability. alpha may or
may not compile, I can't test it.
1999-08-09 10:35:05 +00:00
phk
e938d317d5 Decommision miscfs/specfs/specdev.h. Most of it goes into <sys/conf.h>,
a few lines into <sys/vnode.h>.

Add a few fields to struct specinfo, paving the way for the fun part.
1999-08-08 18:43:05 +00:00
alc
33da09bf48 Move the memory access behavior information provided by madvise
from the vm_object to the vm_map.

Submitted by:	dillon
1999-08-01 06:05:09 +00:00
alc
5180aa1c5c Change the type of vpgqueues::lcnt from "int *" to "int". The indirection
served no purpose.
1999-07-31 18:31:00 +00:00
alc
5c6321aca5 vm_page_queue_init:
Remove the initialization of PQ_NONE's cnt and lcnt.  They aren't
	used.

vm_page_insert:
	Remove an unnecessary dereference.

vm_page_wire:
	Remove the one and only (and thus pointless) reference
	to PQ_NONE's lcnt.
1999-07-31 04:19:49 +00:00
alc
9cda945475 Reduce the number of "magic constants" used for page coloring
by one: PQ_PRIME2 and PQ_PRIME3 are used to accomplish the same
thing at different places in the kernel.  Drop PQ_PRIME3.
1999-07-22 06:04:17 +00:00
alc
279eafd09f Fix the following problem:
When creating new processes (or performing exec), the new page
directory is initialized too early.  The kernel might grow before
p_vmspace is initialized for the new process.  Since pmap_growkernel
doesn't yet know about the new page directory, it isn't updated, and
subsequent use causes a failure.

The fix is (1) to clear p_vmspace early, to stop pmap_growkernel
from stomping on memory, and (2) to defer part of the initialization
of new page directories until p_vmspace is initialized.

PR:		kern/12378
Submitted by:	tegge
Reviewed by:	dfr
1999-07-21 18:02:27 +00:00
green
bf26dc7074 Make a dev2budev() function, and use it. This refixes pstat (working, broken,
working, broken, working) and savecore (working, working, broken, working,
working).

Sorta Reviewed by:	phk
1999-07-20 21:29:13 +00:00
alc
db0a7d40ad Convert a "page not busy" warning to an assertion.
Submitted by:	dillon@backplane.com
1999-07-20 05:46:56 +00:00
phk
09327a40f9 Add a field to struct swdevt to avoid a bogus udev2dev() call. 1999-07-17 19:59:55 +00:00
phk
6c373ff516 I have not one single time remembered the name of this function correctly
so obviously I gave it the wrong name.  s/umakedev/makeudev/g
1999-07-17 18:43:50 +00:00
alc
ab1033c835 Remove vm_object::last_read. It is used by the old swap pager, but
not by the new one, i.e., vm/swap_pager.c rev 1.108.

Reviewed by:	dillon@backplane.com
1999-07-16 05:11:37 +00:00
alc
c7285cff59 Cleanup OBJ_ONEMAPPING management.
vm_map.c:
	Don't set OBJ_ONEMAPPING on arbitrary vm objects.  Only default
	and swap type vm objects should have it set.  vm_object_deallocate
	already handles these cases.

vm_object.c:
	If OBJ_ONEMAPPING isn't already clear in vm_object_shadow,
	we are in trouble.  Instead of clearing it, make it
	an assertion that it is already clear.
1999-07-11 18:30:32 +00:00
alc
b0f696a574 Change the data type used to represent page color in the vm_object
to be the same as that used in the vm_page.  (This change also
shrinks the vm_object.)
1999-07-10 18:29:18 +00:00
alc
bcd454da95 Remove unused function prototypes. 1999-07-10 18:16:08 +00:00
ache
41c66e4188 add unused argument to udev2dev() to make kernel compiled 1999-07-07 09:12:44 +00:00
msmith
2d9dfcac47 Reinstate the previous fix for the broken export of a dev_t in sw_dev, convert
back to a dev_t when the value is actually used.
1999-07-07 04:07:03 +00:00
green
0d33984f04 Back out previous commit. It was wrong, and caused panics. 1999-07-07 03:03:59 +00:00
msmith
0ca35a4fe1 swdevt should contain a udev_t, not a dev_t. This resulted in bogus
swap device name reporting.

Submitted by:	Bill Swingle <unfurl@freebsd.org>
1999-07-06 23:51:02 +00:00
mckay
4eda78adf2 Reformat previous fix to remove an uglier than average goto.
Looked OK to:	dg
1999-07-05 12:50:54 +00:00
mckusick
9d4f0d78fa The buffer queue mechanism has been reformulated. Instead of having
QUEUE_AGE, QUEUE_LRU, and QUEUE_EMPTY we instead have QUEUE_CLEAN,
QUEUE_DIRTY, QUEUE_EMPTY, and QUEUE_EMPTYKVA.  With this patch clean
and dirty buffers have been separated.  Empty buffers with KVM
assignments have been separated from truly empty buffers.  getnewbuf()
has been rewritten and now operates in a 100% optimal fashion.  That is,
it is able to find precisely the right kind of buffer it needs to
allocate a new buffer, defragment KVM, or to free-up an existing buffer
when the buffer cache is full (which is a steady-state situation for
the buffer cache).

Buffer flushing has been reorganized.  Previously buffers were flushed
in the context of whatever process hit the conditions forcing buffer
flushing to occur.  This resulted in processes blocking on conditions
unrelated to what they were doing.  This also resulted in inappropriate
VFS stacking chains due to multiple processes getting stuck trying to
flush dirty buffers or due to a single process getting into a situation
where it might attempt to flush buffers recursively - a situation that
was only partially fixed in prior commits.  We have added a new daemon
called the buf_daemon which is responsible for flushing dirty buffers
when the number of dirty buffers exceeds the vfs.hidirtybuffers limit.
This daemon attempts to dynamically adjust the rate at which dirty buffers
are flushed such that getnewbuf() calls (almost) never block.

The number of nbufs and amount of buffer space is now scaled past the
8MB limit that was previously imposed for systems with over 64MB of
memory, and the vfs.{lo,hi}dirtybuffers limits have been relaxed
somewhat.  The number of physical buffers has been increased with the
intention that we will manage physical I/O differently in the future.

reassignbuf previously attempted to keep the dirtyblkhd list sorted which
could result in non-deterministic operation under certain conditions,
such as when a large number of dirty buffers are being managed.  This
algorithm has been changed.  reassignbuf now keeps buffers locally sorted
if it can do so cheaply, and otherwise gives up and adds buffers to
the head of the dirtyblkhd list.  The new algorithm is deterministic but
not perfect.  The new algorithm greatly reduces problems that previously
occurred when write_behind was turned off in the system.

The P_FLSINPROG proc->p_flag bit has been replaced by the more descriptive
P_BUFEXHAUST bit.  This bit allows processes working with filesystem
buffers to use available emergency reserves.  Normal processes do not set
this bit and are not allowed to dig into emergency reserves.  The purpose
of this bit is to avoid low-memory deadlocks.

A small race condition was fixed in getpbuf() in vm/vm_pager.c.

Submitted by:	Matthew Dillon <dillon@apollo.backplane.com>
Reviewed by:	Kirk McKusick <mckusick@mckusick.com>
1999-07-04 00:25:38 +00:00
peter
18cf795f27 Fix some int/long printf problems for the Alpha 1999-07-01 19:53:43 +00:00
peter
6cb5fe6c6b Slight reorganization of kernel thread/process creation. Instead of using
SYSINIT_KT() etc (which is a static, compile-time procedure), use a
NetBSD-style kthread_create() interface.  kproc_start is still available
as a SYSINIT() hook.  This allowed simplification of chunks of the
sysinit code in the process.  This kthread_create() is our old kproc_start
internals, with the SYSINIT_KT fork hooks grafted in and tweaked to work
the same as the NetBSD one.

One thing I'd like to do shortly is get rid of nfsiod as a user initiated
process.  It makes sense for the nfs client code to create them on the
fly as needed up to a user settable limit.  This means that nfsiod
doesn't need to be in /sbin and is always "available".  This is a fair bit
easier to do outside of the SYSINIT_KT() framework.
1999-07-01 13:21:46 +00:00
peter
5d3d376190 Kirk missed a required BUF_KERNPROC(). Even though this is a non-async
transfer, the b_iodone hook causes biodone() to release it from interrupt
context.
1999-06-27 22:08:38 +00:00
peter
6d9ab211eb Minor tweaks to make sure (new) prerequisites for <sys/buf.h> (mostly
splbio()/splx()) are #included in time.
1999-06-27 11:44:22 +00:00
peter
e77d71685c There isn't much point waking up a daemon that hasn't existed since
softupdates came in.  Try calling speedup_syncer() instead..
1999-06-26 14:56:58 +00:00
mckusick
5b58f2f951 Convert buffer locking from using the B_BUSY and B_WANTED flags to using
lockmgr locks. This commit should be functionally equivalent to the old
semantics. That is, all buffer locking is done with LK_EXCLUSIVE
requests. Changes to take advantage of LK_SHARED and LK_RECURSIVE will
be done in future commits.
1999-06-26 02:47:16 +00:00
alc
f972513d8f Remove (1) "extern" declarations for variables that were previously
made "static" and (2) initialized but unused variables.
1999-06-22 07:18:20 +00:00
alc
72c18c01ff Remove vm_object::cache_count and vm_object::wired_count. They are
not used.  (Nor is there any planned use by John who introduced them.)

Reviewed by:	"John S. Dyson" <toor@dyson.iquest.net>
1999-06-20 21:47:02 +00:00
alc
aa61764b70 Set cnt.v_page_size to PAGE_SIZE rather than DEFAULT_PAGE_SIZE so that
"vmstat -s" reports the correct value on the Alpha.

Submitted by:	Hidetoshi Shimokawa <simokawa@sat.t.u-tokyo.ac.jp>
1999-06-20 04:55:29 +00:00
alc
c9ce3ad902 Remove some unused function and variable declarations. 1999-06-19 18:42:53 +00:00
alc
90cdb68b0a vm_map_growstack uses vmspace::vm_ssize as though it contained
the stack size in bytes when in fact it is the stack size in pages.
1999-06-17 21:29:38 +00:00
alc
2f13d2c4a2 vm_map_insert sometimes extends an existing vm_map entry, rather than
creating a new entry.  vm_map_stack and vm_map_growstack can panic when
a new entry isn't created.  Fixed vm_map_stack and vm_map_growstack.

Also, when extending the stack, always set the protection to VM_PROT_ALL.
1999-06-17 05:49:00 +00:00
alc
9fb63b7080 Move vm_map_stack and vm_map_growstack after the definition
of the vm_map_clip_end macro.  (The next commit will modify
vm_map_stack and vm_map_growstack to use vm_map_clip_end.)
1999-06-17 00:39:26 +00:00
alc
bdb6bd3066 Remove some unused declarations and duplicate initialization. 1999-06-17 00:27:39 +00:00
alc
15096c63d6 vm_map_protect:
The wrong vm_map_entry is used to determine if writes must not be
	allowed due to COW.
1999-06-12 23:10:38 +00:00
dt
309c487efd Add a function kmem_alloc_nofault() - same as kmem_alloc_pageable(), but
create a nofault entry. It will be used to allocate kmem for upages.

(I am not too happy with all this, but it's better than nothing).
1999-06-08 17:03:28 +00:00
alc
3d024af49f vm_mmap:
Ensure that device mappings get MAP_PREFAULT(_PARTIAL) set,
	so that 4M page mappings are used when possible.

Reviewed by:	Luoqi Chen <luoqi@watermarkgroup.com>
1999-06-05 18:21:53 +00:00
phk
1efee9b283 Shorten a detour around dev_t to get a udev_t created. 1999-06-01 17:11:27 +00:00
phk
6a5dc97620 Simplify cdevsw registration.
The cdevsw_add() function now finds the major number(s) in the
struct cdevsw passed to it.  cdevsw_add_generic() is no longer
needed, cdevsw_add() does the same thing.

cdevsw_add() will print a message if the d_maj field looks bogus.

Remove nblkdev and nchrdev variables.  Most places they were used
bogusly.  Instead check a dev_t for validity by seeing if devsw()
or bdevsw() returns NULL.

Move bdevsw() and devsw() functions to kern/kern_conf.c

Bump __FreeBSD_version to 400006

This commit removes:
        72 bogus makedev() calls
        26 bogus SYSINIT functions

if_xe.c bogusly accessed cdevsw[], author/maintainer please fix.

I4b and vinum not changed.  Patches emailed to authors.  LINT
probably broken until they catch up.
1999-05-31 11:29:30 +00:00
phk
7e4a9dced9 This commit should be an extensive NO-OP:
Reformat and initialize correctly all "struct cdevsw".

        Initialize the d_maj and d_bmaj fields.

        The d_reset field was not removed, although it is never used.

I used a program to do most of this, so all the files now use the
same consistent format.  Please keep it that way.

Vinum and i4b not modified, patches emailed to respective authors.
1999-05-30 16:53:49 +00:00
alc
a20319a2ae Addendum to 1.155. Verify the existence of the object before checking
its reference count.
1999-05-30 01:12:19 +00:00
alc
2bb1b6a2ae Avoid the creation of unnecessary shadow objects. 1999-05-28 03:39:44 +00:00
alc
016bcfa9ed vm_map_insert:
General cleanup.  Eliminate coalescing checks that are duplicated
	by vm_object_coalesce.
1999-05-18 05:38:48 +00:00
alc
936a553033 Add the options MAP_PREFAULT and MAP_PREFAULT_PARTIAL to vm_map_find/insert,
eliminating the need for the pmap_object_init_pt calls in imgact_* and
mmap.

Reviewed by:	David Greenman <dg@root.com>
1999-05-17 00:53:56 +00:00
alc
83a9863079 Remove prototypes for functions that don't exist anymore (vm_map.h).
Remove a useless argument from vm_map_madvise's interface (vm_map.c,
	vm_map.h, and vm_mmap.c).

Remove a redundant test in vm_uiomove (vm_map.c).

Make two changes to vm_object_coalesce:

1. Determine whether the new range of pages actually overlaps
the existing object's range of pages before calling vm_object_page_remove.
(Prior to this change almost 90% of the calls to vm_object_page_remove
were to remove pages that were beyond the end of the object.)

2. Free any swap space allocated to removed pages.
1999-05-16 05:07:34 +00:00
dt
c936e75142 Fix confusion of size of transfer with size of the pager.
PR:		11658
Broken in:	1.89 (1998/03/07)
1999-05-15 23:42:39 +00:00
alc
e79d7db00b Simplify vm_map_find/insert's interface: remove the MAP_COPY_NEEDED option.
It never makes sense to specify MAP_COPY_NEEDED without also specifying
MAP_COPY_ON_WRITE, and vice versa.  Thus, MAP_COPY_ON_WRITE suffices.

Reviewed by:	David Greenman <dg@root.com>
1999-05-14 23:09:34 +00:00
bde
bdbf14bb39 Casting handles from void * to uintptr_t on the way to dev_t became
especially bogus when dev_t became a pointer.
1999-05-13 12:55:37 +00:00
luoqi
923ece6a8c Device pager's handle is dev_t not udev_t. 1999-05-13 04:02:07 +00:00
phk
760193e96e Fix a udev_t/dev_t mismatch which prevented paging from working. 1999-05-12 11:05:23 +00:00
phk
7e26ca1d1a Divorce "dev_t" from the "major|minor" bitmap, which is now called
udev_t in the kernel but still called dev_t in userland.

Provide functions to manipulate both types:
        major()         umajor()
        minor()         uminor()
        makedev()       umakedev()
        dev2udev()      udev2dev()

For now they're functions, they will become in-line functions
after one of the next two steps in this process.

Return major/minor/makedev to macro-hood for userland.

Register a name in cdevsw[] for the "filedescriptor" driver.

In the kernel the udev_t appears in places where we have the
major/minor number combination, (ie: a potential device: we
may not have the driver nor the device), like in inodes, vattr,
cdevsw registration and so on, whereas the dev_t appears where
we carry around a reference to an actual device.

In the future the cdevsw and the aliased-from vnode will be hung
directly from the dev_t, along with up to two softc pointers for
the device driver and a few housekeeping bits.  This will essentially
replace the current "alias" check code (same buck, bigger bang).

A little stunt has been provided to try to catch places where the
wrong type is being used (dev_t vs udev_t), if you see something
not working, #undef DEVT_FASCIST in kern/kern_conf.c and see if
it makes a difference.  If it does, please try to track it down
(many hands make light work) or at least try to reproduce it
as simply as possible, and describe how to do that.

Without DEVT_FASCIST I believe this patch is a no-op.

Stylistic/posixoid comments about the userland view of the <sys/*.h>
files welcome now, from userland they now contain the end result.

Next planned step: make all dev_t's refer to the same devsw[] which
means convert BLK's to CHR's at the perimeter of the vnodes and
other places where they enter the game (bootdev, mknod, sysctl).
1999-05-11 19:55:07 +00:00
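
A hedged sketch of how the paired helpers are meant to be used (the names come from the list above; the second argument to udev2dev() is the one added in the 1999-07-07 entry higher up in this log, and the exact prototypes are assumptions):

    dev_t  dev;     /* live reference to an actual device */
    udev_t udev;    /* stable major/minor bitmap, e.g. stored in an inode */

    udev = dev2udev(dev);                    /* kernel dev_t -> persistent form */
    dev  = udev2dev(udev, 0);                /* persistent form -> kernel dev_t */
    printf("major %d minor %d\n", umajor(udev), uminor(udev));
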
phk
91e7a75ba8 No point in swapdev being a static global when used only locally. 1999-05-09 17:28:00 +00:00
phk
500e41bd71 I got tired of seeing all the cdevsw[major(foo)] all over the place.
Made a new (inline) function devsw(dev_t dev) and substituted it.

Changed to the BDEV variant to this format as well: bdevsw(dev_t dev)

DEVFS will eventually benefit from this change too.
1999-05-08 06:40:31 +00:00
phk
693dd58bb3 Continue where Julian left off in July 1998:
Virtualize bdevsw[] from cdevsw.  bdevsw() is now an (inline)
        function.

        Join CDEV_MODULE and BDEV_MODULE to DEV_MODULE (please pay attention
        to the order of the cmaj/bmaj arguments!)

        Join CDEV_DRIVER_MODULE and BDEV_DRIVER_MODULE to DEV_DRIVER_MODULE
        (ditto!)

(Next step will be to convert all bdev dev_t's to cdev dev_t's
before they get to do any damage^H^H^H^H^H^Hwork in the kernel.)
1999-05-07 10:11:40 +00:00
phk
7f79e0b14a Introduce two functions: physread() and physwrite() and use these directly
in *devsw[] rather than the 46 local copies of the same functions.

(grog will do the same for vinum when he has time)
1999-05-07 07:03:47 +00:00
peter
d247469429 Add brackets to silence egcs and help clarity. 1999-05-06 22:06:45 +00:00
phk
f57a01ebfc remove b_proc from struct buf, it's (now) unused.
Reviewed by:	dillon, bde
1999-05-06 20:00:34 +00:00
luoqi
148df5d8ab Don't ignore mmap() address hint below the text section. 1999-05-06 00:46:19 +00:00
billf
dd35516544 Add sysctl descriptions to many SYSCTL_XXXs
PR:		kern/11197
Submitted by:	Adrian Chadd <adrian@FreeBSD.org>
Reviewed by:	billf(spelling/style/minor nits)
Looked at by:	bde(style)
1999-05-03 23:57:32 +00:00
alc
5cb08a2652 The VFS/BIO subsystem contained a number of hacks in order to optimize
piecemeal, middle-of-file writes for NFS.  These hacks have caused no
end of trouble, especially when combined with mmap().  I've removed
them.  Instead, NFS will issue a read-before-write to fully
instantiate the struct buf containing the write.  NFS does, however,
optimize piecemeal appends to files.  For most common file operations,
you will not notice the difference.  The sole remaining fragment in
the VFS/BIO system is b_dirtyoff/end, which NFS uses to avoid cache
coherency issues with read-merge-write style operations.  NFS also
optimizes the write-covers-entire-buffer case by avoiding the
read-before-write.  There is quite a bit of room for further
optimization in these areas.

The VM system marks pages fully-valid (AKA vm_page_t->valid =
VM_PAGE_BITS_ALL) in several places, most noteably in vm_fault.  This
is not correct operation.  The vm_pager_get_pages() code is now
responsible for marking VM pages all-valid.  A number of VM helper
routines have been added to aid in zeroing-out the invalid portions of
a VM page prior to the page being marked all-valid.  This operation is
necessary to properly support mmap().  The zeroing occurs most often
when dealing with file-EOF situations.  Several bugs have been fixed
in the NFS subsystem, including bits handling file and directory EOF
situations and buf->b_flags consistency issues relating to clearing
B_ERROR & B_INVAL, and handling B_DONE.

getblk() and allocbuf() have been rewritten.  B_CACHE operation is now
formally defined in comments and more straightforward in
implementation.  B_CACHE for VMIO buffers is based on the validity of
the backing store.  B_CACHE for non-VMIO buffers is based simply on
whether the buffer is B_INVAL or not (B_CACHE set if B_INVAL clear,
and vice versa).  biodone() is now responsible for setting B_CACHE
when a successful read completes.  B_CACHE is also set when a bdwrite()
is initiated and when a bwrite() is initiated.  VFS VOP_BWRITE
routines (there are only two - nfs_bwrite() and bwrite()) are now
expected to set B_CACHE.  This means that bowrite() and bawrite() also
set B_CACHE indirectly.

There are a number of places in the code which were previously using
buf->b_bufsize (which is DEV_BSIZE aligned) when they should have
been using buf->b_bcount.  These have been fixed.  getblk() now clears
B_DONE on return because the rest of the system is so bad about
dealing with B_DONE.

Major fixes to NFS/TCP have been made.  A server-side bug could cause
requests to be lost by the server due to nfs_realign() overwriting
other rpc's in the same TCP mbuf chain.  The server's kernel must be
recompiled to get the benefit of the fixes.

Submitted by:	Matthew Dillon <dillon@apollo.backplane.com>
1999-05-02 23:57:16 +00:00
dt
ba8c622703 s/static foo_devsw_installed = 0;/static int foo_devsw_installed;/.
(Edited automatically)
1999-04-28 10:54:24 +00:00
phk
16e3fbd2c1 Suser() simplification:
1:
  s/suser/suser_xxx/

2:
  Add new function: suser(struct proc *), prototyped in <sys/proc.h>.

3:
  s/suser_xxx(\([a-zA-Z0-9_]*\)->p_ucred, \&\1->p_acflag)/suser(\1)/

The remaining suser_xxx() calls will be scrutinized and dealt with
later.

There may be some unneeded #include <sys/cred.h>, but they are left
as an exercise for Bruce.

More changes to the suser() API will come along with the "jail" code.
1999-04-27 11:18:52 +00:00
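
Spelled out as code (an illustration of the sed patterns above, not an actual diff hunk):

    /* Old call sites, now renamed suser_xxx(): */
    if (suser_xxx(p->p_ucred, &p->p_acflag))
            return (EPERM);

    /* Rewritten by substitution 3 to the new struct proc * interface: */
    if (suser(p))
            return (EPERM);
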
dt
9efe75f7a6 Make pmap_collect() an official pmap interface. 1999-04-23 20:29:58 +00:00
peter
a74bdeb7d1 unifdef -DVM_STACK - it's been on for a while for x86 and was checked
and appeared to be working for the Alpha some time ago.
1999-04-19 14:14:14 +00:00
peter
dfdcc62332 Move the declaration of faultin() from the vm headers to proc.h, since
it is now referenced from a macro there (PHOLD()).
1999-04-13 19:17:15 +00:00
eivind
29478a96d3 Staticize 1999-04-11 02:16:27 +00:00
dt
80578d3e92 Convert usage of vm_page_bits() to the new convention ("Inputs are required
to range within a page").
1999-04-10 20:52:11 +00:00
eivind
b790d856f9 Lock vnode correctly for VOP_OPEN.
Discussed with:	alc, dillon
1999-04-10 17:54:43 +00:00
peter
100f4abd46 Don't forcibly kill processes that are locked in-core via PHOLD - it was
just checking P_NOSWAP before.
1999-04-06 03:14:56 +00:00
peter
bc820937dc Only use p->p_lock (manage by PHOLD()/PRELE()) - P_NOSWAP/P_PHYSIO is no
longer set.
1999-04-06 03:11:34 +00:00
julian
0ed09d2ad5 Catch a case spotted by Tor where files mmapped could leave garbage in the
unallocated parts of the last page when the file ended on a frag
but not a page boundary.
Delimited by tags PRE_MATT_MMAP_EOF and POST_MATT_MMAP_EOF,
in files alpha/alpha/pmap.c i386/i386/pmap.c nfs/nfs_bio.c vm/pmap.h
    vm/vm_page.c vm/vm_page.h vm/vnode_pager.c miscfs/specfs/spec_vnops.c
    ufs/ufs/ufs_readwrite.c kern/vfs_bio.c

Submitted by: Matt Dillon <dillon@freebsd.org>
Reviewed by: Alan Cox <alc@freebsd.org>
1999-04-05 19:38:30 +00:00
alc
ad1fbba2a9 Two changes to vm_map_delete:
1. Don't bother checking object->ref_count == 1 in order to set
OBJ_ONEMAPPING.  It's a waste of time.  If object->ref_count == 1,
vm_map_entry_delete will "run-down" the object and its pages.

2. If object->ref_count == 1, ignore OBJ_ONEMAPPING.  Wait for
vm_map_entry_delete to "run-down" the object and its pages.
Otherwise, we're calling two different procedures to delete
the object's pages.

Note: "vmstat -s" will once again show a non-zero value
for "pages freed by exiting processes".
1999-04-04 07:11:02 +00:00
alc
a976359db5 Mainly, eliminate the comments about share maps. (We don't have share maps
any more.)  Also, eliminate an incorrect comment that says that we don't
coalesce vm_map_entry's.  (We do.)
1999-03-27 23:46:04 +00:00
eivind
fdc0436c85 Correct a comment. 1999-03-27 02:39:01 +00:00
alc
9b15de3986 Two changes:
Remove more (redundant) map timestamp increments from properly
synchronized routines.  (Changed: vm_map_entry_link, vm_map_entry_unlink,
and vm_map_pageable.)

Micro-optimize vm_map_entry_link and vm_map_entry_unlink, eliminating
unnecessary dereferences.  At the same time, converted them from macros
to inline functions.
1999-03-21 23:37:00 +00:00
alc
4bdf1d66da Construct the free queue(s) in descending order (by physical
address) so that the first 16MB of physical memory is allocated
last rather than first.  On large-memory machines, this avoids
the exhaustion of low physical memory before isa_dmainit has run.
1999-03-19 05:21:03 +00:00
alc
57d921a394 Correct a problem in kmem_malloc: A kmem_malloc allowing "wait" may
block (VM_WAIT) holding the map lock.  This is bad.  For example, a subsequent
kmem_malloc by an interrupt handler on the same map may find the lock held
and panic in the lockmgr.
1999-03-16 07:39:07 +00:00
alc
8baf85480b Two changes:
In general, vm_map_simplify_entry should be performed INSIDE
the loop that traverses the map, not outside.  (Changed:
vm_map_inherit, vm_map_pageable.)

vm_fault_unwire doesn't acquire the map lock (or block holding
it).  Thus, vm_map_set/clear_recursive shouldn't be called.
(Changed: vm_map_user_pageable, vm_map_pageable.)
1999-03-15 06:24:52 +00:00
julian
ec27b516c8 Fix breakage in last commit
Submitted by: Brian Feldman <green@unixhelp.org>
1999-03-15 05:09:48 +00:00
julian
8ad9ed65a4 A bit of a hack, but allows the vn device to be a module again.
Submitted by: Matt Dillon <dillon@freebsd.org>
1999-03-14 20:40:15 +00:00
julian
0c3f3973d2 Submitted by: Matt Dillon <dillon@freebsd.org>
The old VN device broke in -4.x when the definition of B_PAGING
changed. This patch fixes this plus implements additional capabilities.
The new VN device can be backed by a file ( as per normal ), or it can
be directly backed by swap.

Due to dependencies in VM include files  (on opt_xxx options) the new
vn device cannot be a module yet. This will be fixed in a later commit.
This commit is delimited by tags {PRE,POST}_MATT_VNDEV
1999-03-14 09:20:01 +00:00
alc
aa8bb4e29a Correct two optimization errors in vm_object_page_remove:
1. The size of vm_object::memq is vm_object::resident_page_count,
not vm_object::size.

2. The "size > 4" test sometimes results in the traversal of a ~1000 page
memq in order to locate ~10 pages.
1999-03-14 06:36:00 +00:00
alc
2d75a5cc4c Remove vm_page_frees from kmem_malloc that are performed
by vm_map_delete/vm_object_page_remove anyway.
1999-03-12 08:05:49 +00:00
julian
4726cfcda9 Stop the mfs from trying to swap out crucial bits of the mfs
as this can lead to deadlock.
Submitted by: Matt Dillon <dillon@freebsd.org>
1999-03-12 00:44:03 +00:00
alc
143686d0c8 Remove (redundant) map timestamp increments from some properly
synchronized routines.
1999-03-09 08:00:17 +00:00
alc
118d31f1dd Remove an unused variable from vmspace_fork. 1999-03-08 03:53:07 +00:00
alc
65b8ae0944 Change vm_map_growstack to acquire and hold a read lock (instead of a write
lock) until it actually needs to modify the vm_map.

Note: it is legal to modify vm_map::hint without holding a write lock.

Submitted by:	"Richard Seaman, Jr." <dick@tar.com> with minor changes
		by myself.
1999-03-07 21:25:42 +00:00
alc
4e7ebf3dd0 Upgrading a map's lock to exclusive status should increment
the map's timestamp.  In general, whenever an exclusive lock is
acquired the timestamp should be incremented.
1999-03-06 07:11:33 +00:00
alc
9e3d479b9d To avoid a conflict for the vm_map's lock with vm_fault, release
the read lock around the subyte operations in mincore.  After the lock is
reacquired, use the map's timestamp to determine if we need to restart
the scan.
1999-03-02 22:55:02 +00:00
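
A hedged sketch of the lock/timestamp dance this commit describes (variable names and the restart label are illustrative, not the literal mincore code):

    unsigned int timestamp;

    timestamp = map->timestamp;
    vm_map_unlock_read(map);                /* let vm_fault take the map lock */
    if (subyte(vec + index, mincoreinfo) != 0)
            return (EFAULT);                /* hypothetical error handling */
    vm_map_lock_read(map);
    if (timestamp != map->timestamp)
            goto RestartScan;               /* map changed while unlocked */
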
alc
5149bf6666 Remove the last of the share map code: struct vm_map::is_main_map.
Reviewed by:	Matthew Dillon <dillon@apollo.backplane.com>
1999-03-02 05:43:18 +00:00
alc
4d728cf4b3 mincore doesn't modify the vm_map. Therefore, it doesn't require
an exclusive lock.  A read lock will suffice.
1999-03-01 20:42:16 +00:00
alc
440397c816 Reviewed by: "John S. Dyson" <dyson@iquest.net>
Submitted by:	Matthew Dillon <dillon@apollo.backplane.com>
To prevent a deadlock, if we are extremely low on memory, force synchronous
operation by the VOP_PUTPAGES in vnode_pager_putpages.
1999-02-27 23:39:28 +00:00
alc
11ba367bdf Reviewed by: Matthew Dillon <dillon@apollo.backplane.com>
Corrected the computation of cnt.v_ozfod in vm_fault: vm_fault
was counting the number of unoptimized rather than optimized
zero-fill faults.
1999-02-25 06:00:52 +00:00
dillon
25079ce905 Comment swstrategy() routine. 1999-02-25 05:37:18 +00:00
dillon
3ad14cacdc Remove unnecessary page protects on map_split and collapse operations.
Fix bug where an object's OBJ_WRITEABLE/OBJ_MIGHTBEDIRTY flags do
    not get set under certain circumstances ( page rename case ).

Reviewed by:	Alan Cox <alc@cs.rice.edu>, John Dyson
1999-02-24 21:26:26 +00:00
dillon
9f7c64c6ce Removed ENOMEM error on swap_pager_full condition which ignored the
availability of physical memory.  As per original bug report by
    Bruce.

Reviewed by:	Alan Cox <alc@cs.rice.edu>
1999-02-22 08:42:16 +00:00
dillon
fa06d8f968 Remove conditional sysctl's
Leave swap_async_max sysctl intact, remove swap_cluster_max sysctl.

Reviewed by:	     Alan Cox <alc@cs.rice.edu>
1999-02-21 08:34:15 +00:00
dillon
338a9c530d Reviewed by: Alan Cox <alc@cs.rice.edu>
Fix problem w/ low-swap/low-memory handling as reported by Bruce Evans.
1999-02-21 08:30:49 +00:00
luoqi
e0559c2622 Eliminate a possible numerical overflow. 1999-02-19 19:14:48 +00:00
luoqi
082d37c1ac Hide access to vmspace:vm_pmap with inline function vmspace_pmap(). This
is the preparation step for moving pmap storage out of vmspace proper.

Reviewed by:	Alan Cox	<alc@cs.rice.edu>
		Matthew Dillon	<dillon@apollo.backplane.com>
1999-02-19 14:25:37 +00:00
dillon
c950305edd Submitted by: Alan Cox <alc@cs.rice.edu>
Remove remaining share map garbage from vm_map_lookup() and clean out
    old #if 0 stuff.
1999-02-19 03:11:37 +00:00
dillon
03d8f2679e Limit number of simultaneous asynchronous swap pager I/Os that can
be in progress at any given moment.

    Add two swap tuneables to sysctl:

	vm.swap_async_max: 4
	vm.swap_cluster_max: 16

    Recommended values are a cluster size of 8 or 16 pages.  async_max is
    about right for 1-4 swap devices.  Reduce to 2 if swap is eating too much
    bandwidth, or even 1 if swap is both eating too much bandwidth and sitting
    on a slow network (10BaseT).

    The defaults work well across a broad range of configurations and should
    normally be left alone.
1999-02-18 19:57:33 +00:00
dillon
e819b6214d Submitted by: Luoqi Chen <luoqi@watermarkgroup.com>
Unlock vnode before messing with map to avoid deadlock between map and
    vnode ( e.g. with exec_map and underlying program binary vnode ).  Solves
    a deadlock that most often occurs during a large -j# buildworld reported
    by three people.
1999-02-17 09:08:29 +00:00
dillon
1fbfe938f2 Minor reorganization of vm_page_alloc(). No functional changes have
been made but the code has been reorganized and documented to make
    it more readable, reduce the size of the code, and optimize the branch
    path caching capabilities that most modern processors have.
1999-02-15 06:52:14 +00:00
dillon
0a57647424 Fix a bug in the new madvise() code that would possibly (improperly)
free swap space out from under a busy page.  This is not legal because
    the swap may be reallocated and I/O issued while I/O is still in
    progress on the same swap page from the madvise()'d object.  This bug
    could only occur under extreme paging conditions but might not cause
    an error until much later.  As a side-benefit, madvise() is now even
    smaller.
1999-02-15 02:03:40 +00:00
dillon
aadfe1d833 Minor optimization to madvise() MADV_FREE to make page as freeable as
possible without actually unmapping it from the process.

    As of now, I declare madvise() on OBJT_DEFAULT/OBJT_SWAP objects to be
    'working and complete'.
1999-02-12 20:42:19 +00:00
dillon
e38d19126b Fix non-fatal bug in vm_map_insert() which improperly cleared
OBJ_ONEMAPPING in the case where an object is extended and an
    additional vm_map_entry must be allocated.

    In vm_object_madvise(), remove the call to vm_page_cache() in the MADV_FREE
    case in order to avoid a page fault on page reuse.  However, we still
    mark the page as clean and destroy any swap backing store.

Submitted by:	Alan Cox <alc@cs.rice.edu>
1999-02-12 09:51:43 +00:00
dillon
76e195c050 Addendum to vm_map coalesce optimization. Also, this was backed-out
because there was a consensus on current regarding leaving bss r+w+x
    instead of r+w.  This is in order to maintain reasonable compatibility
    with existing JIT compilers (e.g. kaffe) and possibly other programs.
1999-02-09 01:39:29 +00:00
dillon
139adb1b8f Revamp vm_object_[q]collapse(). Despite the complexity of this patch,
no major operational changes were made.  The three core object->memq loops
    were moved into a single inline procedure and various operational
    characteristics of the collapse function were documented.
1999-02-08 19:00:15 +00:00
dillon
8bdc42c0a1 General cleanup. Remove #if 0's and remove useless register qualifiers. 1999-02-08 05:15:54 +00:00
dillon
b7a0b99c31 Rip out PQ_ZERO queue. PQ_ZERO functionality is now combined in with
PQ_FREE.  There is little operational difference other than the kernel
    being a few kilobytes smaller and the code being more readable.

    * vm_page_select_free() has been *greatly* simplified.
    * The PQ_ZERO page queue and supporting structures have been removed
    * vm_page_zero_idle() revamped (see below)

    PG_ZERO setting and clearing has been migrated from vm_page_alloc()
    to vm_page_free[_zero]() and will eventually be guaranteed to remain
    tracked throughout a page's life ( if it isn't already ).

    When a page is freed, PG_ZERO pages are appended to the appropriate
    tailq in the PQ_FREE queue while non-PG_ZERO pages are prepended.
    When locating a new free page, PG_ZERO selection operates from within
    vm_page_list_find() ( get page from end of queue instead of beginning
    of queue ) and then only occurs in the nominal critical path case.  If
    the nominal case misses, both normal and zero-page allocation devolves
    into the same _vm_page_list_find() select code without any specific
    zero-page optimizations.

    Additionally, vm_page_zero_idle() has been revamped.  Hysteresis has been
    added and zero-page tracking adjusted to conform with the other changes.
    Currently hysteresis is set at 1/3 (lo) and 1/2 (hi) the number of free
    pages.  We may wish to increase both parameters as time permits.  The
    hysteresis is designed to avoid silly zeroing in borderline allocation/free
    situations.
1999-02-08 00:37:36 +00:00
dillon
3e732af5ea Backed out vm_map coalesce optimization - it resulted in 22% more page
faults for reasons unknown ( under investigation ).
    /usr/bin/time -l make in /usr/src/bin went from 67000 faults to 90000
    faults.
1999-02-08 00:27:56 +00:00
dillon
98732ec693 Remove MAP_ENTRY_IS_A_MAP 'share' maps. These maps were once used to
attempt to optimize forks but were essentially given-up on due to
    problems and replaced with an explicit dup of the vm_map_entry structure.
    Prior to the removal, they were entirely unused.
1999-02-07 21:48:23 +00:00
dillon
08bf7e9a93 Remove L1 cache coloring optimization ( leave L2 cache coloring opt ).
Rewrite vm_page_list_find() and vm_page_select_free() - make inline out
    of nominal case.
1999-02-07 20:45:15 +00:00
dillon
7a0029ee87 When shadowing objects, adjust the page coloring of the shadowing object
such that pages in the combined/shadowed object are consistently
    colored.

Submitted by:	"John S. Dyson" <dyson@iquest.net>
1999-02-07 08:44:53 +00:00
dillon
638895c103 Add hysteresis to the 'swap_pager_getswapspace; failed' console message.
Also widen the hysteresis levels a little ( these really should be
    dynamically configured ).
1999-02-06 07:22:21 +00:00
dillon
eb4dbd2f37 The elf loader sets the permissions on bss to VM_PROT_READ|VM_PROT_WRITE
rather than VM_PROT_ALL.  obreak, on the other hand, uses VM_PROT_ALL.
    This prevents vm_map_insert() from being able to coalesce the heap and
    creates an extra map entry.  Since current architectures ignore
    VM_PROT_EXECUTE anyway, and since not having VM_PROT_EXECUTE on data/bss
    may provide protection in the future, obreak now uses read+write rather
    than all (r+w+x).

    This is an optimization, not a bug fix.

Submitted by:	Alan Cox <alc@cs.rice.edu>
1999-02-05 07:49:29 +00:00
dillon
1648ae3fae Fix bug in a KASSERT I introduced in vm_page_qcollapse() rev 1.139.
Since paging is in progress, page scan in vm_page_qcollapse() must be
    protected by at least splbio() to prevent pages from being ripped out from
    under the scan.
1999-02-04 17:47:52 +00:00
dillon
fdc78db606 Submitted by: Alan Cox
The vm_map_insert()/vm_object_coalesce() optimization has been extended
    to include OBJT_SWAP objects as well as OBJT_DEFAULT objects.  This is
    possible because it costs nothing to extend an OBJT_SWAP object with
    the new swapper.  We can't do this with the old swapper.  The old swapper
    used a linear array that would have had to have been reallocated, costing
    time as well as a potential low-memory deadlock.
1999-02-03 01:57:17 +00:00
dillon
56683bbe5a This patch eliminates a pointless test from appearing twice
in vm_map_simplify_entry.  Basically, once you've verified that
    the objects in the adjacent vm_map_entry's are the same, either
    NULL or the same vm_object, there's no point in checking that the
    objects have the same behavior.

Obtained from:  Alan Cox <alc@cs.rice.edu>
1999-02-01 08:49:30 +00:00
julian
df7c58af81 Submitted by: Alan Cox <alc@cs.rice.edu>
Checked by: "Richard Seaman, Jr." <dick@tar.com>
Fix the following problem:
As the code stands now, growing any stack, and not just the process's
main stack, modifies vm->vm_ssize.  This is inconsistent with the code
earlier in the same procedure.
1999-01-31 14:09:25 +00:00
dillon
975fba8a24 Fix warnings in preparation for adding -Wall -Wcast-qual to the
kernel compile
1999-01-28 00:57:57 +00:00
dillon
ca8ef4ff13 Remove unintended trigraph sequences in comments for -Wall 1999-01-27 18:19:53 +00:00
julian
4b7738dba1 Mostly remove the VM_STACK OPTION.
This changes the definitions of a few items so that structures are the
same whether or not the option itself is enabled. This allows
people to enable and disable the option without recompilng the world.

As the author says:

|I ran into a problem pulling out the VM_STACK option.  I was aware of this
|when I first did the work, but then forgot about it.  The VM_STACK stuff
|has some code changes in the i386 branch.  There need to be corresponding
|changes in the alpha branch before it can come out completely.

what is done:
|
|1) Pull the VM_STACK option out of the header files it appears in.  This
|really shouldn't affect anything that executes with or without the rest
|of the VM_STACK patches.  The vm_map_entry will then always have one
|extra element (avail_ssize).  It just won't be used if the VM_STACK
|option is not turned on.
|
|I've also pulled the option out of vm_map.c.  This shouldn't harm anything,
|since the routines that are enabled as a result are not called unless
|the VM_STACK option is enabled elsewhere.
|
|2) Add what appears to be appropriate code to the alpha branch, still
|protected behind the VM_STACK switch.  I don't have an alpha machine,
|so we would need to get some testers with alpha machines to try it out.
|
|Once there is some testing, we can consider making the change permanent
|for both i386 and alpha.
|
[..]
|
|Once the alpha code is adequately tested, we can pull VM_STACK out
|everywhere.
|

Submitted by:	"Richard Seaman, Jr." <dick@tar.com>
1999-01-26 02:49:52 +00:00
julian
05a2232887 Enable Linux threads support by default.
This takes the conditionals out of the code that has been tested by
various people for a while.
ps and friends (libkvm) will need a recompile as some proc structure
changes are made.

Submitted by:	"Richard Seaman, Jr." <dick@tar.com>
1999-01-26 02:38:12 +00:00
dillon
e06ca031d7 Undo last commit - not a bug, just duplicate code. PG_MAPPED and
PG_WRITEABLE are already cleared by vm_page_protect().
1999-01-24 07:06:52 +00:00
dillon
a4c067a459 Change all manual settings of vm_page_t->dirty = VM_PAGE_BITS_ALL
to use the vm_page_dirty() inline.

    The inline can thus do sanity checks ( or not ) over all cases.
1999-01-24 06:04:52 +00:00
dillon
facf2fdb5a vm_map_split() used to dirty the page manually after calling
vm_page_rename(), but never pulled the page off PQ_CACHE if it was on
    PQ_CACHE.  Dirty pages in PQ_CACHE are not allowed and a KASSERT was
    added in -4.x to test for this... and got hit.

    In -4.x, vm_page_rename() automatically dirties the page.  This commit
    also has it deal with the PQ_CACHE case, deactivating the page in that
    case.
1999-01-24 06:00:31 +00:00
dillon
f4215e2355 Add vm_page_dirty() inline with PQ_CACHE sanity check 1999-01-24 05:57:50 +00:00
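
A plausible shape for the inline (hedged; the committed text in vm_page.h may differ slightly):

    static __inline void
    vm_page_dirty(vm_page_t m)
    {
            KASSERT((m->queue - m->pc) != PQ_CACHE,
                ("vm_page_dirty: page in cache!"));     /* PQ_CACHE sanity check */
            m->dirty = VM_PAGE_BITS_ALL;
    }
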
dillon
9ed8ff237b vm_pager_put_pages() is passed an rcval array to hold per-page return
values.  The 'int' return value for the procedure was never used and
    not well defined in any case when there are mixed errors on pages, so
    it has been removed.  vm_pager_put_pages() and associated vm_pager
    functions now return void.
1999-01-24 02:32:15 +00:00
dillon
90b1bd7723 Clear PG_MAPPED as well as PG_WRITEABLE when a page is moved to the
cache.
1999-01-24 02:29:26 +00:00
dillon
555a2c90ad Added warning printf ( needs INVARIANTS ) when a busy cache page is found
while trying to free memory.
1999-01-24 01:33:22 +00:00
dillon
5ba2ffcf3a It is possible for a page in the cache to be busy. vm_pageout.c was not
checking for this condition while it tried to free cache pages.  Fixed.
1999-01-24 01:06:31 +00:00
dillon
84015ba9cb Add invariants to vm_page_busy() and vm_page_wakeup() to check for
PG_BUSY stupidity.
1999-01-24 01:05:15 +00:00
dillon
e854781bc8 Clear PG_WRITEABLE in vm_page_cache(). This may or may not be a bug,
but the bit should definitely be cleared.
1999-01-24 01:04:04 +00:00
dillon
44c2f47425 Deprecate vm_object_pmap_copy() - nobody uses it. Everyone uses
vm_object_pmap_copy_1() now, apparently.
1999-01-24 01:01:38 +00:00
dillon
e554adda08 Get rid of unused old_m in vm_fault. Add INVARIANTS to test whether
page is still busy after all the hell vm_fault goes through.. it is
    supposed to be, and printf() if it isn't.  don't panic, though.
1999-01-24 00:55:04 +00:00
dillon
3a81644d73 Reenable John Dyson's low-memory VM_WAIT code for page reactivations out
of PQ_CACHE.  Add comments explaining what it accomplishes and its
    limitations.
1999-01-23 06:00:27 +00:00
dillon
be14f0e426 Mainly changes to support the new swapper. The big adjustment is that
swap blocks are now in PAGE_SIZE'd increments instead of DEV_BSIZE'd
    increments.  We still convert to DEV_BSIZE'd increments for the
    backing store I/O, but everything else is in PAGE_SIZE increments.
1999-01-21 10:17:12 +00:00
dillon
67eada315f Move many of the vm_pager_*() functions from vm_pager.c to inlines in
vm_pager.h
1999-01-21 10:15:47 +00:00
dillon
c11718f005 Move many of the vm_pager_*() functions from vm_pager.c to inlines in
vm_pager.h

     Added argument to getpbuf() and relpbuf() to allow each subsystem to
     specify a different hard limit on the number of simultaneous physical
     buffers that said subsystem may allocate.  Without this feature, one
     subsystem ( e.g. the vfs clustering code ) could hog *ALL* the pbufs,
     causing a deadlock in the pager in a low memory situation.

     Same for trypbuf().
1999-01-21 10:15:24 +00:00
dillon
58521d75ed Reorganized some of the low memory testing code to make it more useful.
Removed call to vm_object_collapse(), which can block.  This was being
    called without the pageout code holding any sort of reference on the
    vm_object or vm_page_t structures being manipulated.  Since this code
    can block, it was possible for other kernel code to shred the state
    the pageout code was assuming remained intact.

    Fixed potential blocking condition in vm_pageout_page_free() ( which
    could cause a deadlock in a low-memory situation ).

    Currently there is a hack in-place to deal with clean filesystem meta-data
    polluting the inactive page queue.  John doesn't like the hack, and neither
    do I.

    Revamped and commented a portion of the pageout loop.

    Added protection against potential memory deadlocks with OBJT_VNODE
    when using VOP_ISLOCKED().  The problem is that vp->v_data can be NULL
    which causes VOP_ISLOCKED() to return a less informed answer.

    Removed vm_pager_sync() -- none of the pagers use it any more ( the old
    swapper did; the new one does not ).
1999-01-21 10:12:54 +00:00
dillon
fb433b53df The TAILQ hashq has been turned into a singly-linked-list link,
reducing the size of vm_page_t.

    SWAPBLK_NONE and SWAPBLK_MASK are defined here.  These actually are
    more generalized than their names imply, but their placement is somewhat
    of a legacy issue from a prior test version of this code that put
    the swapblk in the vm_page_t structure.  That test code was eventually
    thrown away.  The legacy remains.

    Added vm_page_flash() inline.  Similar to vm_page_wakeup() except that
    it does not clear PG_BUSY ( one assumes that PG_BUSY is already clear ).
    Used by a number of routines to wakeup waiters.

    Collapsed some of the code in inline calls to make other inline calls.
    GCC will optimize this well and it reduces duplication.

    vm_page_free() and vm_page_free_zero() inlines added to convert to
    the proper vm_page_free_toq() call.

    vm_page_sleep_busy() inline added, replacing vm_page_sleep() ( which has
    been removed ).  This implements a much more optimizable page-waiting
    function.
1999-01-21 10:06:24 +00:00
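A hedged sketch of the lookup/wait loop the new vm_page_sleep_busy() inline is meant for; the argument list and the convention that it returns nonzero if it slept are assumptions based on the description:

    /*
     * Hedged sketch: wait for a busy page without holding stale state.
     * If vm_page_sleep_busy() slept, the page may have changed or been
     * freed, so the lookup is retried from scratch.
     */
    for (;;) {
            m = vm_page_lookup(object, pindex);
            if (m == NULL)
                    break;                          /* page went away */
            if (!vm_page_sleep_busy(m, TRUE, "pgwait")) {
                    vm_page_busy(m);                /* ours now */
                    break;
            }
    }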
dillon
1ceb9c9b9e The hash table used to be a table of doubly-link list headers ( two
pointers per entry ).  The table has been changed to a singly linked
    list of vm_page_t pointers.  The table has been doubled in size, but
    the entries take only half the space, for a net-zero change in memory use.

    The hash function has been changed, hopefully for the better.  The
    combination of the larger hash table size and changed hash function should
    keep the chain length down to a reasonable number (0-3, average 1).

    vm_object->page_hint has been removed.  This 'optimization' was not
    only never needed, but costs as much as a hash chain link to implement.
    While having page_hint in vm_object might result in better locality
    of reference, the cost is not worth the space in vm_object or the
    extra instructions in my view.

    vm_page_alloc*() functions have been inlined and call a generalized
    non-inlined vm_page_alloc_toq() which combines the standard alloc
    and zero-page alloc functions together, reducing code size and the L1
    cache footprint.  Some reordering has been done... not much.  The
    delinking code should be faster ( because unlinking a doubly-linked list
    requires four memory ops and unlinking a singly linked list only requires
    two ), and we get a hash consistency check for free.

    vm_page_rename() now automatically sets the page's dirty bits.

    vm_page_alloc() does not try to manually inline freeing a cache page.
    Instead, it now properly calls vm_page_free(m) ... vm_page_free() is
    really too complex to manually inline.

    vm_await(), supporting asleep(), has been added.
1999-01-21 10:01:49 +00:00
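For illustration only, a hedged sketch of a singly-linked object/pindex hash lookup of the shape described above; the mixing function, mask name and link field name are assumptions, not the committed code:

    /* Hedged sketch: power-of-two bucket array of singly-linked chains. */
    #define MY_PAGE_HASH(object, pindex) \
            ((((uintptr_t)(object) >> 5) + (pindex)) & vm_page_hash_mask)

    vm_page_t
    my_page_lookup(vm_object_t object, vm_pindex_t pindex)
    {
            vm_page_t m;

            for (m = vm_page_buckets[MY_PAGE_HASH(object, pindex)];
                m != NULL; m = m->hnext) {
                    if (m->object == object && m->pindex == pindex)
                            return (m);     /* average chain length ~1 */
            }
            return (NULL);
    }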
dillon
04052d89f8 The vm_object structure is now somewhat smaller due to the removal
of most of the swap-pager-specific fields, the removal of the id,
    and the removal of paging_offset.

    A new inline, vm_object_pip_wakeupn() has been added to subtract an
    arbitrary number n from the paging_in_progress count and then wakeup
    waiters as necessary.  n may be 0, resulting in a 'flash'.
1999-01-21 09:51:21 +00:00
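A hedged sketch of what the new inline plausibly looks like, purely to make the n == 0 'flash' case concrete; the flag name and exact form are assumptions:

    /* Hedged sketch: n may be zero, giving a pure wakeup "flash". */
    static __inline void
    my_object_pip_wakeupn(vm_object_t object, int n)
    {
            if (n)
                    object->paging_in_progress -= n;
            if ((object->flags & OBJ_PIPWNT) &&
                object->paging_in_progress == 0) {
                    object->flags &= ~OBJ_PIPWNT;
                    wakeup(object);
            }
    }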
dillon
3f5f4e54ca object->id was badly implemented. It has simply been removed.
object->paging_offset has been removed - it was used to optimize a
    single OBJT_SWAP collapse case yet introduced massive confusion throughout
    vm_object.c.  The optimization was inconsequential except for the
    claim that it didn't have to allocate any memory.  The optimization
    has been removed.

    madvise() has been fixed.  The old madvise() could be made to operate
    on shared objects which is a big no-no.  The new one is much more careful
    in what it modifies.  MADV_FREE was totally broken and has now been fixed.

    vm_page_rename() now automatically dirties a page, so explicit dirtying
    of the page prior to calling vm_page_rename() has been removed.
1999-01-21 09:46:55 +00:00
dillon
5e7ceee4a0 Objects associated with raw devices are no longer counted in the VM stats
total because they may contain absurd numbers ( like the size of all
    of physical memory if you mmap() /dev/mem ).
1999-01-21 09:41:52 +00:00
dillon
bb85a38eff General cleanup related to the new pager. We no longer have to worry
about conversions of objects to OBJT_SWAP, it is done automatically
    now.

    Replaced manually inserted code with inline calls for busy waiting on
    pages, which also incidentally fixes a potential PG_BUSY race due to
    the code not running at splvm().

    vm_objects no longer have a paging_offset field ( see vm/vm_object.c )
1999-01-21 09:40:48 +00:00
dillon
8716ba4543 Potential bug fix, do not just clear PG_BUSY... call vm_page_wakeup()
instead to properly handle any waiters.

    Added comments, added support for M_ASLEEP.  Generally treat M_ flags
    as flags instead of constants to compare against.
1999-01-21 09:38:20 +00:00
dillon
f89474ca7b Removed low-memory blockages at fork. This is the wrong place to put
this sort of test.  We need to fix the low-memory handling in general.
1999-01-21 09:36:23 +00:00
dillon
fae5210519 Mainly cleanup. Removed some inappropriate low-memory handling code
and added lots of comments.  Add tie-in to vm_pager ( and thus the
    new swapper ) to deallocate backing swap for dirtied pages on the fly.
1999-01-21 09:35:38 +00:00
dillon
e3c11331d3 The default_pager's interaction with the swap_pager has been reorganized,
and the swap_pager has been completely replaced.

    The new swap pager uses the new blist radix-tree based bitmap allocator
    for low level swap allocation and deallocation.   The new allocator
    is effectively O(5) while the old one was O(N), and the new allocator
    allocates all required memory at init time rather than allocating
    memory on the fly at run time.

    Swap metadata is allocated in clusters and stored in a hash table,
    eliminating linearly allocated structures.

    Many, many features have been rewritten or added.  Swap space is now
    reallocated on the fly, providing a poor man's auto-defragmentation of
    swap space.  Swap space that is no longer needed is freed on a timely
    basis so no garbage collection is necessary.

    Swap I/O is marked B_ASYNC and NFS has been fixed to do the right
    thing with it, so NFS-based paging now has around 10x the performance
    it had before ( previously NFS enforced synchronous I/O for paging ).
1999-01-21 09:33:07 +00:00
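A hedged usage sketch of the blist interface referred to above; the prototypes and the SWAPBLK_NONE failure value are assumptions drawn from the surrounding commits rather than quoted from this one:

    #include <sys/types.h>
    #include <sys/blist.h>

    /* Hedged sketch: allocate and free swap blocks through a blist. */
    static void
    my_blist_demo(daddr_t nblocks)
    {
            blist_t bl;
            daddr_t blk;

            bl = blist_create(nblocks);     /* all metadata allocated here */
            blk = blist_alloc(bl, 4);       /* try for 4 contiguous blocks */
            if (blk != SWAPBLK_NONE)
                    blist_free(bl, blk, 4); /* give them back */
            blist_destroy(bl);
    }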
dillon
df24433bbe This is a rather large commit that encompasses the new swapper,
changes to the VM system to support the new swapper, VM bug
    fixes, several VM optimizations, and some additional revamping of the
    VM code.  The specific bug fixes will be documented with additional
    forced commits.  This commit is somewhat rough in regards to code
    cleanup issues.

Reviewed by:	"John S. Dyson" <root@dyson.iquest.net>, "David Greenman" <dg@root.com>
1999-01-21 08:29:12 +00:00
eivind
89e1199534 KNFize, by bde. 1999-01-10 01:58:29 +00:00
eivind
a8dc66f457 Split DIAGNOSTIC -> DIAGNOSTIC, INVARIANTS, and INVARIANT_SUPPORT as
discussed on -hackers.

Introduce 'KASSERT(assertion, ("panic message", args))' for simple
check + panic.

Reviewed by:	msmith
1999-01-08 17:31:30 +00:00
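A representative use of the new macro (the particular assertion is illustrative); note the extra parentheses around the panic message and its arguments:

    #include <sys/param.h>
    #include <sys/systm.h>

    /* Compiles away entirely unless the kernel has "options INVARIANTS". */
    KASSERT(m->wire_count > 0,
        ("my_unwire: page %p has zero wire_count", (void *)m));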
julian
a7b385889e Changes to the LINUX_THREADS support to only allocate extra memory for
shared signal handling when there is shared signal handling being
used.

This removes the main objection to making the shared signal handling
a standard ability in rfork() and friends and 'unconditionalising'
this code. (i.e. the allocation of an extra 328 bytes per process).

Signal handling information remains in the U area until such a time as
its reference count would be incremented to > 1. At that point a new
struct is malloc'd and maintained in KVM so that it can be shared between
the processes (threads) using it.

A function to check the reference count and move the struct back to the U
area when it drops back to 1 is also supplied. Signal information is
therefore now swapable for all processes that are not sharing that
information with other processes. This should address the concerns raised
by Garrett and others.

Submitted by:	"Richard Seaman, Jr." <dick@tar.com>
1999-01-07 21:23:50 +00:00
julian
4666ac5027 Add (but don't activate) code for a special VM option to make
downward growing stacks more general.
Add (but don't activate) code to use the new stack facility
when running threads, (specifically the linux threads support).
This allows people to use both linux compiled linuxthreads, and also the
native FreeBSD linux-threads port.

The code is conditional on VM_STACK. Not using this will
produce the old heavily tested system.

Submitted by: Richard Seaman <dick@tar.com>
1999-01-06 23:05:42 +00:00
bde
734d13314e Ifdefed conditionally used simplock variables. 1999-01-02 11:34:57 +00:00
dt
065be55870 Don't free swap in swap_pager_getpages(): this code probably caused the
"dying daemons" problem. (I thought this code was introduced in rev.1.80,
but it just relaxed the condition.)

Also, kill related "suggest more swap space" warning (also introduced in
1.80). It was confusing, to say the least...

Requested by:		msmith
Not objected by:	dg
1998-12-29 22:53:51 +00:00
dillon
52a9d9d6e6 Update comments to routines in vm_page.c, most especially whether a
routine can block or not as part of a general effort to carefully
    document blocking/non-blocking calls in the kernel.
1998-12-23 01:52:47 +00:00
julian
d718e5c06d Fix two bogons created by 'patch(1)' in my last commit. 1998-12-19 08:23:31 +00:00
julian
61490236bc Reviewed by: Luoqi Chen, Jordan Hubbard
Submitted by:	 "Richard Seaman, Jr." <lists@tar.com>
Obtained from:	linux :-)

Code to allow Linux Threads to run under FreeBSD.

By default not enabled
This code is dependent on the conditional
COMPAT_LINUX_THREADS (suggested by Garret)
This is not yet a 'real' option but will be within some number of hours.
1998-12-19 02:55:34 +00:00
dt
b35cc94e30 Don't disable mmap with large file offset. 1998-12-09 20:22:21 +00:00
archie
60d13c7a9d The "easy" fixes for compiling the kernel -Wunused: remove unreferenced static
and local variables, goto labels, and functions declared but not defined.
1998-12-07 21:58:50 +00:00
archie
982e80577d Examine all occurrences of sprintf(), strcat(), and str[n]cpy()
for possible buffer overflow problems. Replaced most sprintf()'s
with snprintf(); for others cases, added terminating NUL bytes where
appropriate, replaced constants like "16" with sizeof(), etc.

These changes include several bug fixes, but most changes are for
maintainability's sake. Any instance where it wasn't "immediately
obvious" that a buffer overflow could not occur was made safer.

Reviewed by:	Bruce Evans <bde@zeta.org.au>
Reviewed by:	Matthew Dillon <dillon@apollo.backplane.com>
Reviewed by:	Mike Spengler <mks@networkcs.com>
1998-12-04 22:54:57 +00:00
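A representative before/after for the class of change described; the buffer and format are illustrative, not taken from the commit:

    char buf[16];

    /* before: overflows buf silently if the strings are long enough */
    sprintf(buf, "%s unit %d", name, unit);

    /* after: output is clamped to sizeof(buf) - 1 and NUL-terminated */
    snprintf(buf, sizeof(buf), "%s unit %d", name, unit);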
rvb
8d21664d46 In vnode_pager_input_old, set auio.uio_procp = curproc
vs auio.uio_procp = (struct proc *) 0
1998-12-04 18:39:44 +00:00
dg
3d709a2bc7 Add missing splvm protection around unqueue call. Without this, the page
queues would eventually get corrupted.
1998-11-25 07:40:49 +00:00
bde
9094f7c05e Fixed a null pointer panic in spc_free(). swap_pager_putpages()
almost always causes this panic for the curproc != pageproc case.
This case apparently doesn't happen in normal operation, but it
happens when vm_page_alloc_contig() is called when there is a memory
hogging application that hasn't already been paged out.

PR:		8632
Reviewed by:	info@opensound.com (Dev Mazumdar), dg
Broken in:	rev.1.89 (1998/02/23)
1998-11-19 06:20:42 +00:00
dg
ba58877007 Closed a small race condition between wiring/unwiring pages that involved
the page's wire_count.
1998-11-11 15:07:57 +00:00
peter
73192d8050 add #include <sys/kernel.h> where it's needed by MALLOC_DEFINE() 1998-11-10 09:16:29 +00:00
dfr
b6d9e06815 * Fix a couple of places in the device pager where an address was
truncated to 32 bits.
* Change the calling convention of the device mmap entry point to
  pass a vm_offset_t instead of an int for the offset allowing
  devices with a larger memory map than (1<<32) to be supported
  on the alpha (/dev/mem is one such).

These changes are required to allow the X server to mmap the various
I/O regions used for device port and memory access on the alpha.
1998-11-08 12:39:07 +00:00
dg
b178f74f12 Implemented zero-copy TCP/IP extensions via sendfile(2) - send a
file to a stream socket. sendfile(2) is similar to implementations in
HP-UX, Linux, and other systems, but the API is more extensive and
addresses many of the complaints that the Apache Group and others have
had with those other implementations. Thanks to Marc Slemko of the
Apache Group for helping me work out the best API for this.
Anyway, this has the "net" result of speeding up sends of files over
TCP/IP sockets by about 10X (that is to say, uses 1/10th of the CPU
cycles) when compared to a traditional read/write loop.
1998-11-05 14:28:26 +00:00
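A hedged userland sketch of the new syscall; the argument order follows the FreeBSD sendfile(2) interface, but treat the details as assumptions since the prototype is not spelled out in the message:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <err.h>

    /* Hedged sketch: push a whole open file down a connected TCP socket. */
    static void
    send_whole_file(int filefd, int sockfd)
    {
            off_t sent = 0;

            /* nbytes == 0 is assumed to mean "until end of file". */
            if (sendfile(filefd, sockfd, 0, 0, NULL, &sent, 0) == -1)
                    warn("sendfile");
    }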
peter
e5c6a4fa5e Add John Dyson's SYSCTL descriptions, and an export of more stats to
a sysctl hierarchy (vm.stats.*).  SYSCTL descriptions are only present
in source, they do not get compiled into the binaries taking up memory.
1998-10-31 17:21:31 +00:00
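For illustration, a hedged example of exporting a read-only counter with a description string; the parent node, variable name and description are hypothetical:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int my_stat_counter;     /* hypothetical statistic */

    /*
     * The final string is the description; per the commit above it stays
     * in the source and is not compiled into the kernel binary.
     */
    SYSCTL_INT(_vm_stats, OID_AUTO, my_counter, CTLFLAG_RD,
        &my_stat_counter, 0, "Hypothetical VM statistic");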
peter
8ef35acf90 Use TAILQ macros for clean/dirty block list processing. Set b_xflags
rather than abusing the list next pointer with a magic number.
1998-10-31 15:31:29 +00:00
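A hedged sketch of the traversal style the TAILQ macros enable; the list head and field names here are illustrative of a vnode's buffer lists, not quoted from the commit:

    #include <sys/queue.h>

    struct buf *bp, *nbp;

    /* Walk the dirty-buffer list; safe even if bp is removed mid-loop. */
    for (bp = TAILQ_FIRST(&vp->v_dirtyblkhd); bp != NULL; bp = nbp) {
            nbp = TAILQ_NEXT(bp, b_vnbufs);
            /* ... flush or requeue bp ... */
    }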
dg
304c46fa2c Fixed wrong comments in and about vm_page_deactivate(). 1998-10-28 13:41:43 +00:00
dg
20b2c33d9a Added a second argument, "activate" to the vm_page_unwire() call so that
the caller can select either inactive or active queue to put the page on.
1998-10-28 13:37:02 +00:00
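Illustrative calls with the new argument; the convention that nonzero selects the active queue is assumed from the message:

    vm_page_unwire(m, 1);   /* recently used: put on the active queue */
    vm_page_unwire(m, 0);   /* unlikely to be reused soon: inactive   */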
dg
7850189506 Added needed splvm() protection around object page traversal in
vm_object_terminate().
1998-10-27 13:22:51 +00:00
bde
9fafc47653 Don't follow null bdevsw pointers. The `major(dev) < nblkdev' test rotted
when bdevsw[] became sparse.  We still depend on magic to avoid having to
check that (v_rdev) device numbers in vnodes are not NODEV.

Removed a redundant `major(dev) < nblkdev' test instead of updating it.

Don't follow a garbage bdevsw pointer for attempts to swap on empty
regular files.  This case currently can't happen.  Swapping on regular
files is ifdefed out in swapon() and isn't attempted for empty files
in nfs_mountroot().
1998-10-25 19:24:04 +00:00
phk
13c66194f4 Nitpicking and dusting performed on a train. Removes trivial warnings
about unused variables, labels and other lint.
1998-10-25 17:44:59 +00:00
dg
b898ae170b Oops, revert part of last fix. vm_pager_dealloc() can't be called until
after the pages are removed from the object...so fix the problem by
not printing the diagnostic for wired fictitious pages (which is normal).
1998-10-23 05:43:13 +00:00
dg
599836ef43 Fixed two bugs in recent commit: in vm_object_terminate, vm_pager_dealloc
needs to be called prior to freeing remaining pages in the object so that
the device pager has an opportunity to grab its "fake" pages. Also, in
the case of wired pages, the page must be made busy prior to calling
vm_page_remove. This is a difference from 2.2.x that I overlooked when
I brought these changes forward.
1998-10-23 05:25:49 +00:00
dg
b8a68d9fd9 Make the VM system handle the case where a terminating object contains
legitimately wired pages. Currently we print a diagnostic when this
happens, but this will be removed soon when it will be common for this
to occur with zero-copy TCP/IP buffers.
1998-10-22 02:16:53 +00:00
dg
e51a9e30ea Convert fake page allocs to use the zone allocator, thus eliminating the
private pool management code in here.
1998-10-22 01:45:29 +00:00
dg
268ea3fc13 Set m->object to NULL in dev_pager_getfake(). 1998-10-21 23:06:50 +00:00
dg
92891f8e3d Nuked PG_TABLED flag. Replaced with m->object != NULL. 1998-10-21 14:46:42 +00:00
dg
bbfdc21592 Add a diagnostic printf for freeing a wired page. This will eventually
be turned into a panic, but I want to make sure that all cases of freeing
pages with wire_count==1 (which is/was allowed) have first been fixed.
1998-10-21 11:43:04 +00:00
dg
3defb6d13f Fixed two potentially serious classes of bugs:
1) The vnode pager wasn't properly tracking the file size due to
   "size" being page rounded in some cases and not in others.
   This sometimes resulted in corrupted files. First noticed by
   Terry Lambert.
   Fixed by changing the "size" pager_alloc parameter to be a 64bit
   byte value (as opposed to a 32bit page index) and changing the
   pagers and their callers to deal with this properly.
2) Fixed a bogus type cast in round_page() and trunc_page() that
   caused some 64bit offsets and sizes to be scrambled. Removing
   the cast required adding casts at a few dozen callers.
   There may be problems with other bogus casts in close-by
   macros. A quick check seemed to indicate that those were okay,
   however.
1998-10-13 08:24:45 +00:00
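To make the bogus-cast problem concrete, a hedged before/after of the macro shape; the exact original definitions are not quoted in the commit, so these are illustrative:

    /* before (illustrative): the 32-bit cast scrambles 64-bit values */
    #define round_page(x)   ((((u_int)(x)) + PAGE_MASK) & ~PAGE_MASK)

    /* after (illustrative): no narrowing cast; callers cast explicitly */
    #define round_page(x)   (((x) + PAGE_MASK) & ~PAGE_MASK)
    #define trunc_page(x)   ((x) & ~PAGE_MASK)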
jdp
2846983609 Fix a panic on SMP systems, caused by sleeping while holding a
simple-lock.

The reviewer raises the following caveat: "I believe these changes
open a non-critical race condition when adding memory to the pool
for the zone. I think what will happen is that you could have two
threads that are simultaneously adding additional memory when the
pool runs out. This appears to not be a problem, however, since
the re-acquisition of the lock will protect the list pointers."
The submitter agrees that the race is non-critical, and points out
that it already existed for the non-SMP case.  He suggests that
perhaps a sleep lock (using the lock manager) should be used to
close that race.  This might be worth revisiting after 3.0 is
released.

Reviewed by:	dg (David Greenman)
Submitted by:	tegge (Tor Egge)
1998-10-09 00:24:49 +00:00
jdp
317967a273 Fix a bug in which a page index was used where a byte offset was
expected.  This bug caused builds of Modula-3 to fail in mysterious
ways on SMP kernels.  More precisely, such builds failed on systems
with kern.fast_vfork equal to 0, the default and only supported
value for SMP kernels.

PR:		kern/7468
Submitted by:	tegge (Tor Egge)
1998-10-01 20:46:41 +00:00
abial
121218d024 Make #define NO_SWAPPING a normal kernel config option.
Reviewed by:	jkh
1998-09-29 17:33:59 +00:00
rvb
32f1573bbe John Dyson approved of this solution; make vnode_pager_input_old set m->valid 1998-09-28 23:58:10 +00:00
dg
dc15100c5d Be more selective about when we clear p->valid.
Submitted by:	John Dyson <toor@dyson.iquest.net>
1998-09-28 02:40:11 +00:00
bde
4d4fe42f59 Removed unused file. 1998-09-20 06:28:10 +00:00
bde
a84a2dedfc Instantiate `nfs_mount_type' in a standard file so that it is present
when nfs is an LKM.  Declare it in a header file.  Don't forget to use
it in non-Lite2 code.  Initialize it to -1 instead of to 0, since 0
will soon be the mount type number for the first vfs loaded.

NetBSD uses strcmp() to avoid this ugly global.
1998-09-05 15:17:34 +00:00
dfr
e2df972eb1 Cosmetic changes to the PAGE_XXX macros to make them consistent with
the other objects in vm.
1998-09-04 08:06:57 +00:00
wollman
c97cc8ee06 Separate wakeup conditions for page I/O count (pg_busy) and lock (PG_BUSY).
This is not a complete solution to the deadlock, but the additional wakeups
have helped in my observation.

Suggested by: John Dyson
1998-09-01 17:12:19 +00:00
luoqi
920e5f64ff Fix a rounding problem that causes vnode pager to fail to remove the last
partially filled page during a truncation.

PR:		kern/7422
1998-08-25 13:47:37 +00:00
dfr
5fdaeb281d Change various syscalls to use size_t arguments instead of u_int.
Add some overflow checks to read/write (from bde).

Change all modifications to vm_page::flags, vm_page::busy, vm_object::flags
and vm_object::paging_in_progress to use operations which are not
interruptable.

Reviewed by: Bruce Evans <bde@zeta.org.au>
1998-08-24 08:39:39 +00:00
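A hedged illustration of the kind of non-interruptable update this refers to, using the machine/atomic.h primitives; the field width of m->flags is assumed to be a short here:

    #include <machine/atomic.h>

    /*
     * Plain m->flags |= bit compiles to a read-modify-write that an
     * interrupt can tear on some architectures; the atomic forms cannot.
     */
    atomic_set_short(&m->flags, PG_REFERENCED);
    atomic_clear_short(&m->flags, PG_REFERENCED);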
mckay
acd489515b Correct/clarify some comments. 1998-08-22 15:24:09 +00:00
dfr
a1b2079000 Protect all modifications to paging_in_progress with splvm(). 1998-08-13 08:05:13 +00:00
dfr
0864bef679 Protect all modifications to paging_in_progress with splvm(). The i386
managed to avoid corruption of this variable by luck (the compiler used a
memory read-modify-write instruction which wasn't interruptable) but other
architectures cannot.

With this change, I am now able to 'make buildworld' on the alpha (sfx: the
crowd goes wild...)
1998-08-06 08:33:19 +00:00
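A hedged sketch of the protection pattern described: splvm()/splx() bracket the non-atomic read-modify-write so an interrupt-driven pager completion cannot interleave with it:

    int s;

    s = splvm();                    /* block VM-related interrupts       */
    object->paging_in_progress++;   /* non-atomic RMW is now safe        */
    splx(s);                        /* drop back to the previous level   */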
bde
d7aa77e789 Fixed two spl nesting bugs. They caused (at least) the entire pageout
daemon to run at splvm() forever after swap_pager_putpages() is called
from vm_pageout_scan().

Broken in: rev.1.189 (1998/02/23)
1998-07-28 15:30:01 +00:00
dfr
9c96ae361d Notify pmap when a page is freed on the alpha to allow it to clean up
its emulated modified/referenced bits.
1998-07-26 18:15:20 +00:00
dg
76fd38da9c Improved pager input failure message. 1998-07-22 09:38:04 +00:00
phk
101e6d7c92 There is a comment in vm_param.h which doesn't belong to the
code still left in there.  The macros it describes disappeared sometime
since 4.4BSD Lite.

PR:		7246
Reviewed by:	phk
Submitted by:	Stefan Eggers <seggers@semyam.dinoco.de>
1998-07-22 06:21:55 +00:00