Commit Graph

1042 Commits

Author SHA1 Message Date
alfred
a3f0842419 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

Faults cannot be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
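As a rough illustration of the locking discipline this commit describes, a minimal sketch of a filesystem helper twiddling page properties under the new lock; the helper name is hypothetical and the vm_mtx declaration is assumed to come from the VM headers:

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <vm/vm.h>
	#include <vm/vm_page.h>

	/*
	 * Hypothetical filesystem helper: dirty a page while honouring the
	 * new global VM lock.  vm_mtx does not recurse, so the caller must
	 * not already hold it here.
	 */
	static void
	fs_dirty_page(vm_page_t m)
	{
		mtx_lock(&vm_mtx);	/* serialize low-level VM operations */
		vm_page_dirty(m);	/* twiddle page properties under the lock */
		mtx_unlock(&vm_mtx);
	}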
jhb
70b98f2d04 - Use a timeout for the tsleep in scheduler() instead of having vmmeter()
wakeup proc0 by hand to enforce the timeout.
- When swapping out a process, keep the process locked via the proc lock
  from the first checks up until we clear PS_INMEM and set PS_SWAPPING in
  swapout().  The swapout() function now must be called with the proc lock
  held and releases it before returning.
- Comment out the code to attempt to lock a process' VM structures before
  swapping out.  It is broken in that it releases the lock after obtaining
  it.  If it does grab the lock, it needs to hand it off to swapout()
  instead of releasing it.  This can be revisited when the VM is locked
  as this is a valid test to perform.  It also causes a lock order reversal
  for the time being, which is the immediate cause for temporarily
  disabling it.
2001-05-18 00:08:38 +00:00
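A minimal sketch of the hand-off protocol described above; the caller and its checks are hypothetical, and swapout() stands for the static function in vm_glue.c:

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/proc.h>

	/*
	 * Hypothetical caller: the candidate process is locked for the
	 * eligibility checks, and swapout() consumes (releases) the proc
	 * lock before returning.
	 */
	static void
	try_swapout(struct proc *p)
	{
		PROC_LOCK(p);
		/* ... eligibility checks made while the proc lock is held ... */
		swapout(p);	/* must be entered locked; drops the lock itself */
		/* intentionally no PROC_UNLOCK(): swapout() released it */
	}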
jhb
c2a1096080 In the code that picks a process to kill when memory is exhausted, keep
the process in question locked from the moment we find it and determine it
to be eligible until we actually kill it.  To avoid deadlock, we don't block
on the process lock but skip any process that is already locked during our
search.
2001-05-17 22:49:03 +00:00
jhb
8865204035 - Use PROC_LOCK_ASSERT instead of a direct mtx_assert.
- Don't hold Giant in the swapper daemon while we walk the list of
  processes looking for a process to swap back in.
- Don't bother grabbing the sched_lock while checking a process' sleep
  time in swapout_procs() to ensure that a process has been idle for at
  least swap_idle_threshold2 before swapping it out.  If we lose the race
  we just let a process stay in memory until the next call of
  swapout_procs().
- Remove some unneeded spl's, sched_lock does all the locking needed in
  this case.
2001-05-15 22:20:44 +00:00
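For the first item above, a small hypothetical example of the assertion wrapper replacing a direct mtx_assert():

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/proc.h>

	/* Hypothetical function that requires its caller to hold the proc lock. */
	static void
	faultin_locked(struct proc *p)
	{
		/* Old: mtx_assert(&p->p_mtx, MA_OWNED);  New: */
		PROC_LOCK_ASSERT(p, MA_OWNED);
		/* ... work that relies on the proc lock ... */
	}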
phk
16caeec9b0 biofinish(struct bio *, struct devstat *, int error) is actually more
general than bioerror().

Most of this patch is generated by scripts.
2001-05-06 20:00:03 +00:00
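A hedged usage sketch of the more general interface; the driver and completion path are hypothetical:

	#include <sys/param.h>
	#include <sys/bio.h>
	#include <sys/devicestat.h>

	/*
	 * Hypothetical completion path: biofinish() records the error (if
	 * any), updates the optional devstat, and completes the bio, so the
	 * old set-error-then-biodone() sequence collapses into one call.
	 */
	static void
	mydisk_complete(struct bio *bp, struct devstat *stat, int error)
	{
		biofinish(bp, stat, error);	/* stat may be NULL */
	}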
markm
b831760191 Putting sys/lockmgr.h in here allows us to depollute userland includes
a bit.
OK'ed by:	bde
2001-05-03 11:33:51 +00:00
markm
bcca5847d5 Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
grog
4b9d9cbaac Revert consequences of changes to mount.h, part 2.
Requested by:	bde
2001-04-29 02:45:39 +00:00
alfred
3abca35872 Address a number of problems with sysctl_vm_zone().
The zone allocator's locks should be leaf locks, meaning that they
should never be held when entering another subsystem.  However, the
sysctl handler grabs the zone global mutex and individual zone mutexes,
and while holding them it calls SYSCTL_OUT(), which recurses into the
VM subsystem in order to wire user memory to do a safe copy.  This
can block and cause lock order reversals.

To fix this:
  lock zone global.
  get a count of the number of zones.
  unlock global.
  allocate temporary storage.
  format and SYSCTL_OUT the banner.
  lock global.
  traverse list.
    make sure we haven't looped more than the initial count taken
      to avoid overflowing the allocated buffer.
    lock each node.
    read values and format into buffer.
    unlock individual node.
  unlock global.
  format and SYSCTL_OUT the rest of the data.
  free storage.
  return.

Other problems included not checking for errors when doing sysctl out
of the column header.  Fixed.

Inconsistent termination of the copied string.  Fixed.

Objected to by: des (for not using sbuf)

Since the output is not variable length, I'm actually over-allocating
significantly, and I'd like to get this fixed now, so I'll work on the
sbuf conversion at a later date.  I would not object to someone else
taking it upon themselves to convert it to sbuf.
I hold no MAINTAINER rights to this code (for now).
2001-04-27 22:24:45 +00:00
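A condensed sketch of the copy-then-output pattern listed above; the zone bookkeeping here (struct zinfo, zone_mtx, zlist, zone_count and their fields) is a placeholder for the real vm_zone structures, and only the ordering of locking versus SYSCTL_OUT() is the point:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/kernel.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/malloc.h>
	#include <sys/queue.h>
	#include <sys/sysctl.h>

	/* Placeholder zone bookkeeping, not the real struct vm_zone. */
	struct zinfo {
		SLIST_ENTRY(zinfo) zlink;
		struct mtx	zmtx;
		char		zname[32];
		int		ztotal, zfree;
	};
	static SLIST_HEAD(, zinfo) zlist = SLIST_HEAD_INITIALIZER(zlist);
	static struct mtx zone_mtx;
	static int zone_count;

	static int
	sysctl_zones_sketch(SYSCTL_HANDLER_ARGS)
	{
		struct zinfo *z;
		char *buf, *p;
		int count, error, len;

		mtx_lock(&zone_mtx);
		count = zone_count;		/* snapshot under the global lock */
		mtx_unlock(&zone_mtx);

		len = (count + 1) * 80;		/* over-allocate one line per zone */
		p = buf = malloc(len, M_TEMP, M_WAITOK | M_ZERO);

		mtx_lock(&zone_mtx);
		SLIST_FOREACH(z, &zlist, zlink) {
			if (count-- <= 0)	/* never overrun the snapshot */
				break;
			mtx_lock(&z->zmtx);	/* per-zone lock for a stable read */
			p += snprintf(p, len - (p - buf), "%-16s %6d %6d\n",
			    z->zname, z->ztotal, z->zfree);
			mtx_unlock(&z->zmtx);
		}
		mtx_unlock(&zone_mtx);

		/* No zone lock is held here: SYSCTL_OUT may wire memory and sleep. */
		error = SYSCTL_OUT(req, buf, strlen(buf) + 1);
		free(buf, M_TEMP);
		return (error);
	}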
grog
1f5de30718 Correct #includes to work with fixed sys/mount.h. 2001-04-23 09:05:15 +00:00
alfred
1e50a5f33e vnode_pager_freepage() is really vm_page_free() in disguise;
nuke vnode_pager_freepage() and replace all calls to it with vm_page_free().
2001-04-19 06:18:23 +00:00
alfred
e6ee97803e Protect pager object creation with sx locks.
Protect pager object list manipulation with a mutex.

It doesn't look possible to combine them under a single sx lock because
creation may block and we can't have the object list manipulation block
on anything other than a mutex because of interrupt requests.
2001-04-18 20:24:16 +00:00
alfred
cd79f17cad Fix the botched rev 1.59 where I made it such that without INVARIANTS
the map is never locked.

Submitted by: tegge
2001-04-18 05:30:24 +00:00
phk
378e561228 This patch removes the VOP_BWRITE() vector.
VOP_BWRITE() was a hack which made it possible for NFS client
side to use struct buf with non-bio backing.

This patch takes a more general approach and adds a bp->b_op
vector where more methods can be added.

The success of this patch depends on bp->b_op being initialized in
all relevant places for some value of "relevant" which is not
easy to determine.  For now the buffers have grown a b_magic
element which will make such issues a tiny bit easier to debug.
2001-04-17 08:56:39 +00:00
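A hedged sketch of dispatching through the new per-buffer op vector; the member and constant names used here (b_op, bop_write, b_magic, B_MAGIC_BIO) are assumptions about <sys/buf.h>, not quoted from the commit:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/bio.h>
	#include <sys/buf.h>

	/*
	 * Write a bio-backed buffer through its own op vector instead of
	 * VOP_BWRITE().  The b_magic check is the debugging aid mentioned
	 * above, catching buffers whose b_op was never initialized.
	 */
	static int
	write_buffer(struct buf *bp)
	{
		KASSERT(bp->b_magic == B_MAGIC_BIO,
		    ("write_buffer: uninitialized b_op/b_magic"));
		return (bp->b_op->bop_write(bp));
	}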
alfred
ec0e3ba359 use TAILQ_FOREACH, fix a comment's location 2001-04-15 10:22:04 +00:00
alfred
7e6ce027ec if/panic -> KASSERT 2001-04-13 11:15:40 +00:00
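A small hypothetical example of the conversion:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <vm/vm.h>
	#include <vm/vm_object.h>

	/* Hypothetical check; only the if/panic -> KASSERT shape matters. */
	static void
	check_object(vm_object_t object)
	{
		/*
		 * Old style:
		 *	if (object == NULL)
		 *		panic("check_object: NULL object");
		 * New style: compiled away unless the kernel has INVARIANTS.
		 */
		KASSERT(object != NULL, ("check_object: NULL object"));
	}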
alfred
bbee48d66d protect pbufs and associated counts with a mutex 2001-04-13 10:23:32 +00:00
alfred
bcfbf5a27d use %p for pointer printf, include sys/systm.h for printf proto 2001-04-13 10:22:14 +00:00
alfred
7dcb59378d Use a macro wrapper over printf along with KASSERT to reduce the amount
of code here.
2001-04-13 08:07:37 +00:00
alfred
229635845b remove truncated part from comment 2001-04-12 21:50:03 +00:00
jhb
79cf991a6b Convert the allproc and proctree locks from lockmgr locks to sx locks. 2001-03-28 11:52:56 +00:00
jhb
b47bfbe544 Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
2001-03-28 09:17:56 +00:00
tmm
76ba4861cd Export intrnames and intrcnt as sysctls (hw.nintr, hw.intrnames and
hw.intrcnt).

Approved by:	rwatson
2001-03-23 03:45:17 +00:00
dillon
be9d069cf0 Fix a lock reversal problem in the VM subsystem related to threaded
programs.   There is a case during a fork() which can cause a deadlock.

From Tor -
The workaround consists of setting a flag in the vm map that
indicates that a fork is in progress and using that mark in the page
fault handling to force a revalidation failure.  That change will only
affect (pessimize) page fault handling during fork for threaded
(linuxthreads style) applications and applications using aio_*().

Submitted by: tegge
2001-03-14 06:48:53 +00:00
dillon
3b687e6575 Temporarily remove the vm_map_simplify() call from vm_map_insert(). The
call is correct, but it interferes with the massive hack called
vm_map_growstack().  The call will be restored after our stack handling
code is fixed.

Reported by: tegge
2001-03-14 06:09:42 +00:00
iedowse
80d8e16746 When creating a shadow vm_object in vmspace_fork(), only one
reference count was transferred to the new object, but both the
new and the old map entries had pointers to the new object.
Correct this by transferring the second reference.

This fixes a panic that can occur when mmap(2) is used with the
MAP_INHERIT flag.

PR:		i386/25603
Reviewed by:	dillon, alc
2001-03-09 18:25:54 +00:00
jhb
1de39ad1a0 Unrevert the pmap_map() changes. They weren't broken on x86.
Sense beaten into me by:	peter
2001-03-07 05:29:21 +00:00
jhb
74a74a3282 Back out the pmap_map() change for now, it isn't completely stable on the
i386.
2001-03-07 01:04:17 +00:00
jhb
a710dd7194 - Rework pmap_map() to take advantage of direct-mapped segments on
supported architectures such as the alpha.  This allows us to save
  on kernel virtual address space, TLB entries, and (on the ia64) VHPT
  entries.  pmap_map() now modifies the passed in virtual address on
  architectures that do not support direct-mapped segments to point to
  the next available virtual address.  It also returns the actual
  address that the request was mapped to.
- On the IA64 don't use a special zone of PV entries needed for early
  calls to pmap_kenter() during pmap_init().  This gets us in trouble
  because we end up trying to use the zone allocator before it is
  initialized.  Instead, with the pmap_map() change, the number of needed
  PV entries is small enough that we can get by with a static pool that is
  used until pmap_init() is complete.

Submitted by:		dfr
Debugging help:		peter
Tested by:		me
2001-03-06 06:06:42 +00:00
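A hypothetical early-boot caller showing the reworked calling convention; the prototype with a vm_offset_t cursor passed by reference is an assumption based on the description above:

	#include <sys/param.h>
	#include <vm/vm.h>
	#include <vm/pmap.h>

	/*
	 * Hypothetical early-boot caller.  The virtual-address cursor is
	 * passed by reference (advanced only on machines without
	 * direct-mapped segments) and the return value is the address the
	 * physical range actually ended up at, which is what callers use.
	 */
	static vm_offset_t
	map_boot_range(vm_offset_t *vap, vm_offset_t pa_start, vm_offset_t pa_end)
	{
		return (pmap_map(vap, pa_start, pa_end,
		    VM_PROT_READ | VM_PROT_WRITE));
	}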
alfred
1c4e3d51b7 Simplify vm_object_deallocate(), by decrementing the refcount first.
This allows some of the conditionals to be combined.
2001-03-04 20:25:23 +00:00
gallatin
845eeac4fc Allocate vm_page_array and vm_page_buckets from the end of the biggest chunk
of memory, rather than from the start.

This fixes problems allocating bouncebuffers on alphas where there is only
1 chunk of memory (unlike PCs where there is generally at least one small
chunk and a large chunk).  Having 1 chunk had been fatal, because these
structures take over 13MB on a machine with 1GB of ram. This doesn't leave
much room for other structures and bounce buffers if they're at the front.

Reviewed by: dfr, anderson@cs.duke.edu, silence on -arch
Tested by: Yoriaki FUJIMORI <fujimori@grafin.fujimori.cache.waseda.ac.jp>
2001-03-01 19:21:24 +00:00
dillon
86a16a841c If we intend to make the page writable without requiring another fault,
make sure that PG_NOSYNC is properly set.  Previously we only set it
for a write-fault, but this can occur on a read-fault too.
(will be MFCd prior to 4.3 freeze)
2001-02-28 04:26:43 +00:00
rwatson
9a2d215a5f Introduce per-swap area accounting in the VM system, and export
this information via the vm.nswapdev sysctl (number of swap areas)
and vm.swapdevX nodes (where X is the device), which contain the MIBs
dev, blocks, used, and flags.  These changes are required to allow
top and other userland swap-monitoring utilities to run without
setgid kmem.

Submitted by:	Thomas Moestl <tmoestl@gmx.net>
Reviewed by:	freebsd-audit
2001-02-23 18:46:21 +00:00
des
8862541dfd Fix formatting bugs introduced in sysctl_vm_zone() by the previous commit.
Also, if SYSCTL_OUT() returns a non-zero value, stop at once.
2001-02-22 14:44:39 +00:00
jake
55d5108ac5 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
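A short before/after sketch of the renamed interface on a hypothetical MTX_DEF lock and on sched_lock; foo_mtx and foo_refs exist only for illustration:

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>

	/* Hypothetical MTX_DEF-protected counter, used only to show the renames. */
	static struct mtx foo_mtx;
	static int foo_refs;

	static void
	bump_foo(void)
	{
		/* Old: mtx_enter(&foo_mtx, MTX_DEF); ... mtx_exit(&foo_mtx, MTX_DEF); */
		mtx_lock(&foo_mtx);
		foo_refs++;
		mtx_unlock(&foo_mtx);

		/* Spin locks use the _spin variants; extra flags go through wrappers. */
		mtx_lock_spin_flags(&sched_lock, MTX_QUIET);
		/* ... sched_lock-protected work ... */
		mtx_unlock_spin_flags(&sched_lock, MTX_QUIET);
	}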
phk
e87f7a15ad Mechanical change to use <sys/queue.h> macro API instead of
fondling implementation details.

Created with: sed(1)
Reviewed by: md5(1)
2001-02-04 13:13:25 +00:00
dillon
c8a95a285d This commit represents work mainly submitted by Tor and slightly modified
by myself.  It solves a serious vm_map corruption problem that can occur
with the buffer cache when block sizes > 64K are used.  This code has been
heavily tested in -stable but only tested somewhat on -current.  An MFC
will occur in a few days.  My additions include the vm_map_simplify_entry()
and a minor buffer cache boundary case fix.

Make the buffer cache use a system map for buffer cache KVM rather than a
normal map.

Ensure that VM objects are not allocated for system maps.  There were cases
where a buffer map could wind up with a backing VM object -- normally
harmless, but this could also result in the buffer cache blocking in places
where it assumes no blocking will occur, possibly resulting in corrupted
maps.

Fix a minor boundary case when the buffer cache size limit is reached that
could result in non-optimal code.

Add vm_map_simplify_entry() calls to prevent 'creeping proliferation'
of vm_map_entry's in the buffer cache's vm_map.  Previously only a simple
linear optimization was made.  (The buffer vm_map typically has only a
handful of vm_map_entry's.  This stabilizes it at that level permanently).

PR: 20609
Submitted by: (Tor Egge) tegge
2001-02-04 06:19:28 +00:00
jhb
ef07536d23 - Doh, lock faultin() with proc lock in scheduler().
- Lock p_swtime with sched_lock in scheduler() as well.
2001-01-25 01:38:09 +00:00
jasone
8d2ec1ebc4 Convert all simplelocks to mutexes and remove the simplelock implementations. 2001-01-24 12:35:55 +00:00
jhb
c5cc2f8e26 Argh, I didn't get this test right when I converted it. Break this up
into two separate if's instead of nested if's.  Also, reorder things
slightly to avoid unnecessary mutex operations.
2001-01-24 12:23:17 +00:00
jhb
9d4ef4f19c - Catch up to proc flag changes.
- Minimal proc locking.
- Use queue macros.
2001-01-24 11:28:36 +00:00
jhb
f2190e5fef Add mtx_assert()'s to verify that kmem_alloc() and kmem_free() are called
with Giant held.
2001-01-24 11:27:29 +00:00
jhb
9e1a5e2c5b - Catch up to proc flag changes.
- Proc locking in a few places.
- faultin() now must be called with the proc lock held.
- Split up swappable() into a couple of tests so that it can be locked in
  swapout_procs().
- Use queue macros.
2001-01-24 11:25:56 +00:00
jhb
963052ead7 - Catch up to proc flag changes. 2001-01-24 11:20:05 +00:00
jhb
85f26795ce Add missing include. 2001-01-24 06:54:24 +00:00
ume
1ae749987d Add mibs to hold the number of forks since boot. New mibs are:
vm.stats.vm.v_forks
	vm.stats.vm.v_vforks
	vm.stats.vm.v_rforks
	vm.stats.vm.v_kthreads
	vm.stats.vm.v_forkpages
	vm.stats.vm.v_vforkpages
	vm.stats.vm.v_rforkpages
	vm.stats.vm.v_kthreadpages

Submitted by:	Paul Herman <pherman@frenchfries.net>
Reviewed by:	alfred
2001-01-23 14:32:01 +00:00
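A hypothetical userland program reading one of the new MIBs, assuming the counter is exported as an unsigned int:

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		unsigned int forks;
		size_t len = sizeof(forks);

		if (sysctlbyname("vm.stats.vm.v_forks", &forks, &len, NULL, 0) == -1) {
			perror("sysctlbyname");
			return (1);
		}
		printf("forks since boot: %u\n", forks);
		return (0);
	}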
jake
13afa1fa68 Sigh. atomic_add_int takes a pointer, not an integer.
Pointy-hat-to:	des
2001-01-23 03:40:27 +00:00
des
7e93e90119 Use atomic operations to update the stat counters. 2001-01-23 01:11:11 +00:00
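A tiny illustration of both entries above; the counter is a placeholder, and note the address-of operator per the pointy-hat fix:

	#include <sys/types.h>
	#include <machine/atomic.h>

	static u_int v_sketch_events;		/* placeholder stat counter */

	static void
	count_event(void)
	{
		/* atomic_add_int() takes a pointer, so pass the address. */
		atomic_add_int(&v_sketch_events, 1);
	}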
des
30bc358ebb Call vm_zone_init() at the appropriate time.
Reviewed by:	jasone, jhb
2001-01-22 07:02:42 +00:00
des
6929628df0 Give this code a major facelift:
- replace the simplelock in struct vm_zone with a mutex.

 - use a proper SLIST rather than a hand-rolled job for the zone list.

 - add a subsystem lock that protects the zone list and the statistics
   counters.

 - merge _zalloc() into zalloc() and _zfree() into zfree(), and
   move them below _zget() so there's no need for a prototype.

 - add two initialization functions: one which initializes the
   subsystem mutex and the zone list, and one that currently doesn't
   do anything.

 - zap zerror(); use KASSERTs instead.

 - dike out half of sysctl_vm_zone(), which was mostly trying to do
   manually what the snprintf() call could do better.

Reviewed by:	jhb, jasone
2001-01-22 07:01:50 +00:00
des
b3c27aaaf7 First step towards an MP-safe zone allocator:
- have zalloc() and zfree() always lock the vm_zone.
 - remove zalloci() and zfreei(), which are now redundant.

Reviewed by:	bmilekic, jasone
2001-01-21 22:23:11 +00:00
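A hedged sketch of the new behaviour; the zone structure here is a placeholder, not the real struct vm_zone:

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>

	/* Placeholder zone; the real struct vm_zone's field names differ. */
	struct zone_sketch {
		struct mtx	zmtx;
		void		*zitems;	/* free list head */
	};

	/*
	 * The zone mutex is now taken unconditionally inside the allocator,
	 * so the separate interrupt-safe zalloci()/zfreei() entry points are
	 * no longer needed.
	 */
	static void *
	zalloc_sketch(struct zone_sketch *z)
	{
		void *item;

		mtx_lock(&z->zmtx);
		item = z->zitems;		/* pop the first free item ... */
		if (item != NULL)
			z->zitems = *(void **)item; /* ... linked through its first word */
		mtx_unlock(&z->zmtx);
		return (item);
	}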
alfred
f6aeda2758 fix comment which was outdated 3 years ago
remove useless assignment
purge entire file of 'register' keyword
2000-12-29 13:49:05 +00:00
alfred
7ab79347db clean up kmem_suballoc():
remove useless assignment
remove 'register' variables
2000-12-29 13:05:22 +00:00
assar
16ec886a24 Make zalloc and zfree non-inline functions.  This avoids requiring
the code that calls them to be compiled with the same settings for
INVARIANTS and SMP.

Reviewed by:	dillon
2000-12-27 02:54:37 +00:00
dillon
fd223545d4 This implements a better launder limiting solution. There was a solution
in 4.2-REL which I ripped out in -stable and -current when implementing the
low-memory handling solution.  However, maxlaunder turns out to be the saving
grace in certain very heavily loaded systems (e.g. newsreader box).  The new
algorithm limits the number of pages laundered in the first pageout daemon
pass.  If that is not sufficient then successive passes will be run without
any limit.

Write I/O is now pipelined using two sysctls, vfs.lorunningspace and
vfs.hirunningspace.  This prevents excessive buffered writes in the
disk queues which cause long (multi-second) delays for reads.  It leads
to more stable (less jerky) and generally faster I/O streaming to disk
by allowing required read ops (e.g. for indirect blocks and such) to occur
without interrupting the write stream, among other things.

NOTE: eventually, filesystem write I/O pipelining needs to be done on a
per-device basis.  At the moment it is globalized.
2000-12-26 19:41:38 +00:00
phk
96e683d754 Fix floppy drives on machines with lots of RAM.
The fix works by reverting the ordering of free memory so that the
chances of contig_malloc() succeeding increases.

PR:		23291
Submitted by:	Andrew Atrens <atrens@nortel.ca>
2000-12-18 20:12:13 +00:00
tanimura
a8dbeb3f7b - If swap metadata does not fit into the KVM, reduce the number of
struct swblock entries by dividing the number of the entries by 2
until the swap metadata fits.

- Reject swapon(2) upon failure of swap_zone allocation.

This is just a temporary fix. Better solutions include:
(suggested by:	dillon)

o reserving swap in SWAP_META_PAGES chunks, and
o swapping the swblock structures themselves.

Reviewed by:	alfred, dillon
2000-12-13 10:01:00 +00:00
jake
a4ad237eaa - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provides macros for the flags
  passed to specify shared, exclusive or release, which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
dillon
7317f38702 Be less conservative with a recently added KASSERT. Certain edge
cases with file fragments and read-write mmap's can lead to a situation
    where a VM page has odd dirty bits, e.g. 0xFC - due to being dirtied by
    an mmap and only the fragment (representing a non-page-aligned end of
    file) synced via a filesystem buffer.  A correct solution that
    guarantees consistent m->dirty for the file EOF case is being
    worked on.  In the meantime we can't be so conservative in the
    KASSERT.
2000-12-11 07:52:47 +00:00
dwmalone
dd75d1d73b Convert more malloc+bzero to malloc+M_ZERO.
Submitted by:	josh@zipperup.org
Submitted by:	Robert Drehmel <robd@gmx.net>
2000-12-08 21:51:06 +00:00
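A hypothetical allocation site showing the conversion:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/kernel.h>
	#include <sys/malloc.h>

	static void *
	alloc_zeroed(size_t size)
	{
		/* Old:
		 *	p = malloc(size, M_TEMP, M_WAITOK);
		 *	bzero(p, size);
		 * New: one call, zeroed by the allocator.
		 */
		return (malloc(size, M_TEMP, M_WAITOK | M_ZERO));
	}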
alfred
012cf93dca Really fix phys_pager:
Back out the previous delta (rev 1.4); it didn't make any difference.

If the requested handle is NULL, don't add the object to the list of
objects that are looked up by handle.

The problem is that when asking for a NULL handle you are implying
you want a new object.  Because objects with NULL handles were
being added to the list, any further requests for phys backed
objects with NULL handles would return a reference to the initial
NULL handle object after finding it on the list.

Basically one couldn't have more than one phys backed object without
a handle in the entire system without this fix.  If you did more
than one shared memory allocation using the phys pager it would
give you your initial allocation again.
2000-12-06 21:52:23 +00:00
alfred
f9b418f187 Adjust the allocation size to properly deal with non-PAGE_SIZE
allocations; specifically, the code didn't work properly for
allocations < PAGE_SIZE.
2000-12-05 22:22:24 +00:00
bde
2f9fb1a43d Backed out previous commit. Don't depend on namespace pollution in
<sys/buf.h>.
2000-12-02 12:03:58 +00:00
jhb
967c93e448 Protect p_stat with sched_lock. 2000-12-02 06:09:44 +00:00
jhb
b8b5877ce7 Protect p_stat with sched_lock. 2000-12-02 03:29:33 +00:00
alfred
663be26bfa remove unneded sys/ucred.h includes 2000-11-30 18:52:32 +00:00
jake
0c0be4e826 Protect the following with a lockmgr lock:
allproc
	zombproc
	pidhashtbl
	proc.p_list
	proc.p_hash
	nextpid

Reviewed by:	jhb
Obtained from:	BSD/OS and netbsd
2000-11-22 07:42:04 +00:00
rwatson
4a8be80602 o Export dmmax ("Maximum size of a swap block") using SYSCTL_INT.
This removes a reason that systat requires setgid kmem.  More to
  come.
2000-11-20 00:39:04 +00:00
dillon
2ace352085 Implement a low-memory deadlock solution.
Removed most of the hacks that were trying to deal with low-memory
    situations prior to now.

    The new code is based on the concept that I/O must be able to function in
    a low memory situation.  All major modules related to I/O (except
    networking) have been adjusted to allow allocation out of the system
    reserve memory pool.  These modules now detect a low memory situation but
    rather than block they continue to operate, then return resources
    to the memory pool instead of caching them or leaving them wired.

    Code has been added to stall in a low-memory situation prior to a vnode
    being locked.

    Thus situations where a process blocks in a low-memory condition while
    holding a locked vnode have been reduced to near nothing.  Not only will
    I/O continue to operate, but many prior deadlock conditions simply no
    longer exist.

Implement a number of VFS/BIO fixes

	(found by Ian): in biodone(), bogus-page replacement code, the loop
        was not properly incrementing loop variables prior to a continue
        statement.  We do not believe this code can be hit anyway but we
        aren't taking any chances.  We'll turn the whole section into a
        panic (as it already is in brelse()) after the release is rolled.

	In biodone(), the foff calculation was incorrectly
        clamped to the iosize, causing the wrong foff to be calculated
        for pages in the case of an I/O error or biodone() called without
        initiating I/O.  The problem always caused a panic before.  Now it
        doesn't.  The problem is mainly an issue with NFS.

	Fixed casts for ~PAGE_MASK.  This code worked properly before only
        because the calculations use signed arithmetic.  Better to properly
        extend PAGE_MASK first before inverting it for the 64 bit masking
        op.

	In brelse(), the bogus_page fixup code was improperly throwing
        away the original contents of 'm' when it did the j-loop to
        fix the bogus pages.  The result was that it would potentially
        invalidate parts of the *WRONG* page(!), leading to corruption.

	There may still be cases where a background bitmap write is
        being duplicated, causing potential corruption.  We have identified
        a potentially serious bug related to this but the fix is still TBD.
        So instead this patch contains a KASSERT to detect the problem
  	and panic the machine rather than continue to corrupt the filesystem.
	The problem does not occur very often; it is very hard to
	reproduce, and it may or may not be the cause of the corruption
	people have reported.

Review by: (VFS/BIO: mckusick, Ian Dowse <iedowse@maths.tcd.ie>)
Testing by: (VM/Deadlock) Paul Saab <ps@yahoo-inc.com>
2000-11-18 23:06:26 +00:00
dillon
59e131028f Add the splvm()'s suggested in PR 20609 to protect vm_pager_page_unswapped().
The remainder of the PR is still open.

PR: kern/20609 (partial fix)
2000-11-18 21:11:23 +00:00
dillon
15a44d16ca This patchset fixes a large number of file descriptor race conditions.
Pre-rfork code assumed inherent locking of a process's file descriptor
    array.  However, with the advent of rfork() the file descriptor table
    could be shared between processes.  This patch closes over a dozen
    serious race conditions related to one thread manipulating the table
    (e.g. closing or dup()ing a descriptor) while another is blocked in
    an open(), close(), fcntl(), read(), write(), etc...

PR: kern/11629
Discussed with: Alexander Viro <viro@math.psu.edu>
2000-11-18 21:01:04 +00:00
tegge
5262a5cdb7 Clear the MAP_ENTRY_USER_WIRED flag from cloned vm_map entries.
PR:		2840
2000-11-02 21:38:18 +00:00
phk
f82e4ca62c Weaken a bogus dependency on <sys/proc.h> in <sys/buf.h> by #ifdef'ing
the offending inline function (BUF_KERNPROC) on it being #included
already.

I'm not sure BUF_KERNPROC() is even the right thing to do or in the
right place or implemented the right way (inline vs normal function).

Remove consequently unneeded #includes of <sys/proc.h>
2000-10-29 14:54:55 +00:00
jhb
9ae17765f4 - Catch a machine/mutex.h -> sys/mutex.h I somehow missed.
- Close a small race condition.  The sched_lock mutex protects
  p->p_stat as well as the run queues.  Another CPU could change p_stat
  of the process while we are waiting for the lock, and we would end up
  scheduling a process that isn't runnable.
2000-10-25 00:04:16 +00:00
ps
c71ac689e0 Implement write combining for crashdumps. This is useful when
write caching is disabled on both SCSI and IDE disks where large
memory dumps could take up to an hour to complete.

Taking an i386 scsi based system with 512MB of ram and timing (in
seconds) how long it took to complete a dump, the following results
were obtained:

Before:				After:
	WCE           TIME		WCE           TIME
	------------------		------------------
	1	141.820972		1	 15.600111
	0	797.265072		0	 65.480465

Obtained from:	Yahoo!
Reviewed by:	peter
2000-10-17 10:05:49 +00:00
dillon
b9b696f1ab The swap bitmap allocator was not calculating the bitmap size properly
in the face of non-stripe-aligned swap areas.  The bug could cause a
      panic during boot.

      Refuse to configure a swap area that is too large (67 GB or so)

      Properly document the power-of-2 requirement for SWB_NPAGES.

      The patch is slightly different from the one Tor enclosed in the P.R.,
      but accomplishes the same thing.

PR: kern/20273
Submitted by: Tor.Egge@fast.no
2000-10-13 16:44:34 +00:00
jasone
db944ecf00 For lockmgr mutex protection, use an array of mutexes that are allocated
and initialized during boot.  This avoids bloating sizeof(struct lock).
As a side effect, it is no longer necessary to enforce the assumption that
lockinit()/lockdestroy() calls are paired, so the LK_VALID flag has been
removed.

Idea taken from:	BSD/OS.
2000-10-12 22:37:28 +00:00
dwmalone
16be903415 If a process is over its resource limit for datasize, still allow
it to lower its memory usage. This was mentioned on the mailing
lists ages ago, and I've lost the name of the person who brought
it up.

Reviewed by:	alc
2000-10-06 13:03:50 +00:00
jasone
4e290e67b7 Convert lockmgr locks from using simple locks to using mutexes.
Add lockdestroy() and appropriate invocations, which corresponds to
lockinit() and must be called to clean up after a lockmgr lock is no
longer needed.
2000-10-04 01:29:17 +00:00
jhb
6aa22e7189 - Add a new process flag P_NOLOAD that marks a process that should be
ignored during load average calculations.
- Set this flag for the idle processes and the softinterrupt process.
2000-09-15 22:00:23 +00:00
bp
a7bc78c86d Add three new VOPs: VOP_CREATEVOBJECT, VOP_DESTROYVOBJECT and VOP_GETVOBJECT.
They will be used by nullfs and other stacked filesystems to support full
cache coherency.

Reviewed in general by:	mckusick, dillon
2000-09-12 09:49:08 +00:00
jasone
769e0f974d Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9).  (Note: The
  alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00
obrien
dba23c4faf Make the arguments match the functionality of the functions. 2000-08-26 04:51:39 +00:00
peter
441c016268 Minor cleanups:
- remove unused variables (fix warnings)
 - use a more consistent ANSI style rather than a mixture
 - remove dead #if 0 code and declarations
2000-07-28 22:03:08 +00:00
mckusick
b86877bef0 Clean up the snapshot code so that it no longer depends on the use of
the SF_IMMUTABLE flag to prevent writing. Instead put in explicit
checking for the SF_SNAPSHOT flag in the appropriate places. With
this change, it is now possible to rename and link to snapshot files.
It is also possible to set or clear any of the owner, group, or
other read bits on the file, though none of the write or execute
bits can be set. There is also an explicit test to prevent the
setting or clearing of the SF_SNAPSHOT flag via chflags() or
fchflags(). Note also that the modify time cannot be changed as
it needs to accurately reflect the time that the snapshot was taken.

Submitted by:	Robert Watson <rwatson@FreeBSD.org>
2000-07-26 23:07:01 +00:00
mckusick
a3d0c189ea Add snapshots to the fast filesystem. Most of the changes support
the gating of system calls that cause modifications to the underlying
filesystem. The gating can be enabled by any filesystem that needs
to consistently suspend operations by adding the vop_stdgetwritemount
to their set of vnops. Once gating is enabled, the function
vfs_write_suspend stops all new write operations to a filesystem,
allows any filesystem modifying system calls already in progress
to complete, then sync's the filesystem to disk and returns. The
function vfs_write_resume allows the suspended write operations to
begin again. Gating is not added by default for all filesystems as
for SMP systems it adds two extra locks to such critical kernel
paths as the write system call. Thus, gating should only be added
as needed.

Details on the use and current status of snapshots in FFS can be
found in /sys/ufs/ffs/README.snapshot, so for brevity and timeliness
they are not included here. Unless and until you create a snapshot file,
these changes should have no effect on your system (famous last words).
2000-07-11 22:07:57 +00:00
alfred
4036af0a9c #elsif -> #elif
Noticed by: green
2000-07-11 09:41:29 +00:00
jhb
b6e74b58eb Support for unsigned integer and long sysctl variables. Update the
SYSCTL_LONG macro to be consistent with other integer sysctl variables
and require an initial value instead of assuming 0.  Update several
sysctl variables to use the unsigned types.

PR:		15251
Submitted by:	Kelly Yancey <kbyanc@posi.net>
2000-07-05 07:46:41 +00:00
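Hypothetical declarations showing the updated macros; the variable names and the _vm parent node are chosen only for illustration:

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/sysctl.h>

	/* An explicit initial value is now required, and unsigned variants exist. */
	static u_int sketch_pages = 0;
	static long  sketch_bytes = 0;

	SYSCTL_UINT(_vm, OID_AUTO, sketch_pages, CTLFLAG_RD, &sketch_pages, 0,
	    "Pages in a hypothetical pool");
	SYSCTL_LONG(_vm, OID_AUTO, sketch_bytes, CTLFLAG_RD, &sketch_bytes, 0,
	    "Bytes in a hypothetical pool");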
phk
e5de271d47 Previous commit changing SYSCTL_HANDLER_ARGS violated KNF.
Pointed out by:	bde
2000-07-04 11:25:35 +00:00
jhb
183fce400a Replace the PQ_*CACHE options with a single PQ_CACHESIZE option that you
set equal to the number of kilobytes in your cache.  The old options are
still supported for backwards compatibility.

Submitted by:	Kelly Yancey <kbyanc@posi.net>
2000-07-04 08:55:18 +00:00
mckusick
2f0e9591fa Simplify and rationalise the management of the vnode free list
(preparing the code to add snapshots).
2000-07-04 04:32:40 +00:00
phk
61ff05be25 Style police catches up with rev 1.26 of src/sys/sys/sysctl.h:
Sanitize SYSCTL_HANDLER_ARGS so that simplistic tools can grok our
sources:

        -sysctl_vm_zone SYSCTL_HANDLER_ARGS
        +sysctl_vm_zone (SYSCTL_HANDLER_ARGS)
2000-07-03 09:35:31 +00:00
markm
53a44ce9d3 Nifty idea from Jeroen van Gelderen; don't call a routine to check if
we are using the /dev/zero device, just check a flag (supplied by
/dev/zero).
Reviewed by:	dfr
2000-06-25 09:44:32 +00:00
hsu
ce8639fd44 Add missing increment of allocation counter. 2000-06-05 06:34:41 +00:00
dillon
82627e96a0 This is a cleanup patch to Peter's new OBJT_PHYS VM object type
and sysv shared memory support for it.  It implements a new
    PG_UNMANAGED flag that has slightly different characteristics
    from PG_FICTITIOUS.

    A new sysctl, kern.ipc.shm_use_phys has been added to enable the
    use of physically-backed sysv shared memory rather than swap-backed.
    Physically backed shm segments are not tracked with PV entries,
    allowing programs which use a large shm segment as a rendezvous
    point to operate without eating an insane amount of KVM in the
    PV entry management.  Read: Oracle.

    Peter's OBJT_PHYS object will also allow us to eventually implement
    page-table sharing and/or 4MB physical page support for such segments.
    We're half way there.
2000-05-29 22:40:54 +00:00
dfr
14048face6 Brucify the pmap_enter_temporary() changes. 2000-05-29 19:21:01 +00:00
dillon
7832cdab03 Fix bug in vm_pageout_page_stats() that always resulted in a full
scan of the active queue.  This fix is not expected to have any
    noticeable impact on performance.

Noticed by: Rik van Riel <riel@conectiva.com.br>
2000-05-29 02:31:55 +00:00
dfr
3d3263476e Add a new pmap entry point, pmap_enter_temporary() to be used during
dumps to create temporary page mappings. This replaces the use of CADDR1
which is fairly x86 specific.

Reviewed by: dillon
2000-05-28 15:49:55 +00:00
jake
961b97d434 Back out the previous change to the queue(3) interface.
It was not discussed and should probably not happen.

Requested by:		msmith and others
2000-05-26 02:09:24 +00:00