Commit Graph

1001 Commits

Author SHA1 Message Date
John Baldwin
576f0c5fa4 Take a more conservative approach and still lock Giant around VM faults
for now.
2001-05-23 22:09:18 +00:00
John Baldwin
c52f090cfb Set the phys_pager_alloc_lock to 1 when it is acquired so that it is
actually locked.
2001-05-23 19:52:23 +00:00
Alfred Perlstein
c5e62505ad Acquire Giant when playing with the buffer cache and doing I/O.
Use msleep() against the vm mutex while waiting for a page I/O to complete.
2001-05-23 10:28:11 +00:00
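A minimal sketch of the msleep() pattern this commit describes; the helper, wait channel, and flag tests are illustrative rather than the committed swap_pager code. The point is that msleep() drops the given mutex while asleep and reacquires it before returning, so the check-and-sleep is race-free with respect to the vm mutex:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

/* Sleep until a page's I/O completes (illustrative flag handling). */
static void
wait_for_page_io(vm_page_t m)
{
	mtx_assert(&vm_mtx, MA_OWNED);
	while (m->flags & PG_BUSY) {
		vm_page_flag_set(m, PG_WANTED);
		msleep(m, &vm_mtx, PVM, "pgiowt", 0);
	}
}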
Alfred Perlstein
240e0fdd93 Acquire the vm mutex in swp_pager_async_iodone(). Don't call swp_pager_async_iodone()
with the mutex held.
2001-05-22 19:01:26 +00:00
John Baldwin
86e92ee7e1 Remove duplicate include and sort includes. 2001-05-22 07:21:46 +00:00
John Baldwin
7d4ad42de5 Sort includes. 2001-05-22 07:01:11 +00:00
John Baldwin
12635f9c89 Unlock the VM lock at the end of munlock() instead of locking it again. 2001-05-22 06:07:36 +00:00
John Baldwin
874468957d Sort includes from previous commit. 2001-05-22 05:35:45 +00:00
John Baldwin
4edf4a58e6 Sort includes. 2001-05-22 00:56:25 +00:00
Alfred Perlstein
2395531439 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

Faults cannot be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
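As a rough sketch of the new discipline (the caller is hypothetical): low-level vm operations are bracketed by the global, non-recursive vm_mtx, so a memory subsystem can call the base page allocator directly:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

/* Hypothetical caller: allocate one page under the vm mutex. */
static vm_page_t
alloc_one_page(vm_object_t obj, vm_pindex_t idx)
{
	vm_page_t m;

	mtx_lock(&vm_mtx);		/* vm_mtx does not recurse */
	m = vm_page_alloc(obj, idx, VM_ALLOC_NORMAL);
	mtx_unlock(&vm_mtx);
	return (m);
}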
John Baldwin
ea7549540f - Use a timeout for the tsleep in scheduler() instead of having vmmeter()
wakeup proc0 by hand to enforce the timeout.
- When swapping out a process, keep the process locked via the proc lock
  from the first checks up until we clear PS_INMEM and set PS_SWAPPING in
  swapout().  The swapout() function now must be called with the proc lock
  held and releases it before returning.
- Comment out the code to attempt to lock a process' VM structures before
  swapping out.  It is broken in that it releases the lock after obtaining
  it.  If it does grab the lock, it needs to hand it off to swapout()
  instead of releasing it.  This can be revisited when the VM is locked
  as this is a valid test to perform.  It also causes a lock order reversal
  for the time being, which is the immediate cause for temporarily
  disabling it.
2001-05-18 00:08:38 +00:00
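A sketch of the hand-off convention in the second item above; the flag and field names here are simplified, but the shape is: the caller finds and locks the process, and swapout() owns the lock from entry until it drops it itself:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>

/* Must be entered with the proc lock held; releases it before return. */
static void
swapout(struct proc *p)
{
	PROC_LOCK_ASSERT(p, MA_OWNED);
	p->p_sflag &= ~PS_INMEM;	/* illustrative flag handling */
	p->p_sflag |= PS_SWAPPING;
	PROC_UNLOCK(p);
	/* ... actually unwire and push the process out ... */
}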
John Baldwin
1c58e4e550 In the code that picks a process to kill when memory is exhausted, keep
the process in question locked from the moment we find it and determine it to
be eligible until we actually kill it.  To avoid deadlock, we don't block
on the process lock but skip any process that is already locked during our
search.
2001-05-17 22:49:03 +00:00
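The search pattern amounts to a non-blocking lock attempt per candidate; a hedged sketch (the eligibility predicate is hypothetical):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sx.h>
#include <sys/proc.h>

static int
is_eligible(struct proc *p)
{
	return (p->p_vmspace != NULL);	/* stand-in check */
}

/*
 * Return an eligible process, locked, or NULL.  Never blocks on a
 * proc lock, so the scan cannot deadlock against a lock holder.
 */
static struct proc *
pick_victim(void)
{
	struct proc *p;

	sx_slock(&allproc_lock);
	LIST_FOREACH(p, &allproc, p_list) {
		if (mtx_trylock(&p->p_mtx) == 0)
			continue;		/* already locked: skip */
		if (is_eligible(p)) {
			sx_sunlock(&allproc_lock);
			return (p);		/* still locked */
		}
		PROC_UNLOCK(p);
	}
	sx_sunlock(&allproc_lock);
	return (NULL);
}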
John Baldwin
c96d52a913 - Use PROC_LOCK_ASSERT instead of a direct mtx_assert.
- Don't hold Giant in the swapper daemon while we walk the list of
  processes looking for a process to swap back in.
- Don't bother grabbing the sched_lock while checking a process' sleep
  time in swapout_procs() to ensure that a process has been idle for at
  least swap_idle_threshold2 before swapping it out.  If we lose the race
  we just let a process stay in memory until the next call of
  swapout_procs().
- Remove some unneeded spl's, sched_lock does all the locking needed in
  this case.
2001-05-15 22:20:44 +00:00
Poul-Henning Kamp
a468031ce8 Actually, biofinish(struct bio *, struct devstat *, int error) is more general
than bioerror().

Most of this patch is generated by scripts.
2001-05-06 20:00:03 +00:00
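The mechanical rewrite the scripts performed looks roughly like this; the driver error path is hypothetical:

#include <sys/param.h>
#include <sys/bio.h>
#include <sys/devicestat.h>

/* Before: each driver error path set the error and completed by hand. */
static void
old_error_path(struct bio *bp, int error)
{
	bp->bio_error = error;
	bp->bio_flags |= BIO_ERROR;
	biodone(bp);
}

/* After: one call, which also records devstat statistics when given. */
static void
new_error_path(struct bio *bp, struct devstat *stat, int error)
{
	biofinish(bp, stat, error);
}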
Mark Murray
559034b748 Putting sys/lockmgr.h in here allows us to depollute userland includes
a bit.
OK'ed by:	bde
2001-05-03 11:33:51 +00:00
Mark Murray
fb919e4d5a Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
Greg Lehey
60fb0ce365 Revert consequences of changes to mount.h, part 2.
Requested by:	bde
2001-04-29 02:45:39 +00:00
Alfred Perlstein
93c7ba9f09 Address a number of problems with sysctl_vm_zone().
The zone allocator's locks should be leaf locks, meaning that they
should never be held when entering another subsystem.  However,
the sysctl grabs the zone global mutex and individual zone mutexes,
and while holding them it calls SYSCTL_OUT(), which recurses into
the VM subsystem in order to wire user memory to do a safe copy.
This can block and cause lock order reversals.

To fix this:
  lock zone global.
  get a count of the number of zones.
  unlock global.
  allocate temporary storage.
  format and SYSCTL_OUT the banner.
  lock global.
  traverse list.
    make sure we haven't looped more than the initial count taken
      to avoid overflowing the allocated buffer.
    lock each node.
    read values and format into buffer.
    unlock individual node.
  unlock global.
  format and SYSCTL_OUT the rest of the data.
  free storage.
  return.

Other problems included not checking for errors when doing sysctl out
of the column header.  Fixed.

Inconsistent termination of the copied string. Fixed.

Objected to by: des (for not using sbuf)

Since the output is not variable length, I'm actually over-allocating
significantly, and I'd like to get this fixed now, so I'll work on the
sbuf conversion at a later date.  I would not object to someone else
taking it upon themselves to convert it to sbuf.  I hold no MAINTAINER
rights to this code (for now).
2001-04-27 22:24:45 +00:00
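The locking outline in the commit above maps to roughly this shape; a sketch under assumed bookkeeping names (zone_mtx, zone_count, the list and its fields), not the committed code:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/queue.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/malloc.h>

/* Assumed zone bookkeeping, for the sketch only. */
struct vm_zone {
	SLIST_ENTRY(vm_zone) z_link;
	struct mtx	z_mtx;
	char		*z_name;
	int		z_used;
};
static SLIST_HEAD(, vm_zone) zone_list;
static struct mtx zone_mtx;		/* the zone global lock */
static int zone_count;

#define	ZLINE	80			/* assumed max output line width */

static int
sysctl_vm_zone_sketch(SYSCTL_HANDLER_ARGS)
{
	static const char banner[] = "ITEM            USED\n";
	struct vm_zone *z;
	char *buf, *p;
	int count, error, i;

	error = SYSCTL_OUT(req, banner, sizeof(banner) - 1);
	if (error)
		return (error);

	mtx_lock(&zone_mtx);		/* lock zone global */
	count = zone_count;		/* snapshot the zone count */
	mtx_unlock(&zone_mtx);		/* drop it before allocating */

	p = buf = malloc((count + 1) * ZLINE, M_TEMP, M_WAITOK);

	mtx_lock(&zone_mtx);
	i = 0;
	SLIST_FOREACH(z, &zone_list, z_link) {
		if (i++ >= count)	/* never outgrow the buffer */
			break;
		mtx_lock(&z->z_mtx);	/* lock each node */
		p += snprintf(p, ZLINE, "%-14s %8d\n", z->z_name, z->z_used);
		mtx_unlock(&z->z_mtx);
	}
	mtx_unlock(&zone_mtx);

	error = SYSCTL_OUT(req, buf, p - buf);
	free(buf, M_TEMP);
	return (error);
}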
Greg Lehey
d98dc34f52 Correct #includes to work with fixed sys/mount.h. 2001-04-23 09:05:15 +00:00
Alfred Perlstein
d8d5fa8805 vnode_pager_freepage() is really vm_page_free() in disguise;
nuke vnode_pager_freepage() and replace all calls to it with vm_page_free().
2001-04-19 06:18:23 +00:00
Alfred Perlstein
a9fa2c05fc Protect pager object creation with sx locks.
Protect pager object list manipulation with a mutex.

It doesn't look possible to combine them under a single sx lock because
creation may block, and the object list manipulation can't block on
anything other than a mutex because of interrupt requests.
2001-04-18 20:24:16 +00:00
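Schematically (lock, list, and helper names assumed): the sleepable sx lock covers the blocking creation step, while the mutex covers only the quick list splice that interrupt-time code must also be able to take:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sx.h>
#include <vm/vm.h>

static struct sx pager_create_lock;	/* creation can sleep: sx */
static struct mtx pager_list_mtx;	/* list touched at interrupt time: mutex */

static vm_object_t
pager_alloc_sketch(void)
{
	vm_object_t obj;

	sx_xlock(&pager_create_lock);
	obj = create_pager_object();		/* hypothetical; may block */
	mtx_lock(&pager_list_mtx);
	insert_into_pager_list(obj);		/* hypothetical; non-blocking */
	mtx_unlock(&pager_list_mtx);
	sx_xunlock(&pager_create_lock);
	return (obj);
}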
Alfred Perlstein
305dd591ee Fix the botched rev 1.59 where I made it such that without INVARIANTS
the map is never locked.

Submitted by: tegge
2001-04-18 05:30:24 +00:00
Poul-Henning Kamp
f84e29a06c This patch removes the VOP_BWRITE() vector.
VOP_BWRITE() was a hack which made it possible for NFS client
side to use struct buf with non-bio backing.

This patch takes a more general approach and adds a bp->b_op
vector where more methods can be added.

The success of this patch depends on bp->b_op being initialized in
all relevant places, for some value of "relevant" that is not
easy to determine.  For now the buffers have grown a b_magic
element which will make such issues a tiny bit easier to debug.
2001-04-17 08:56:39 +00:00
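A sketch of what such a method vector can look like; the member names and magic value are illustrative, not quoted from the commit:

#include <sys/param.h>
#include <sys/systm.h>

struct buf;

/* Per-buffer method vector; more methods can be added over time. */
struct buf_ops {
	char	*bop_name;
	int	(*bop_write)(struct buf *);
};

#define	B_OPS_MAGIC	0x10b10b10	/* illustrative magic value */

struct buf {
	u_int		b_magic;	/* catches uninitialized b_op */
	struct buf_ops	*b_op;
	/* ... the rest of struct buf ... */
};

/* Dispatch through the buffer instead of through VOP_BWRITE(). */
static __inline int
buf_write_sketch(struct buf *bp)
{
	KASSERT(bp->b_magic == B_OPS_MAGIC, ("buf b_op not initialized"));
	return ((*bp->b_op->bop_write)(bp));
}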
Alfred Perlstein
cc64b484dd use TAILQ_FOREACH, fix a comment's location 2001-04-15 10:22:04 +00:00
Alfred Perlstein
971dd34298 if/panic -> KASSERT 2001-04-13 11:15:40 +00:00
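The conversion is mechanical; a hypothetical example of the pattern:

#include <sys/param.h>
#include <sys/systm.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

static void
check_page(vm_page_t m)
{
	/* Before: if (m == NULL) panic("swap_pager: page vanished"); */
	/* After: compiled only into kernels built with INVARIANTS. */
	KASSERT(m != NULL, ("swap_pager: page vanished"));
}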
Alfred Perlstein
2a758ebe58 protect pbufs and associated counts with a mutex 2001-04-13 10:23:32 +00:00
Alfred Perlstein
493607117e use %p for pointer printf, include sys/systm.h for printf proto 2001-04-13 10:22:14 +00:00
Alfred Perlstein
7d26b6a450 Use a macro wrapper over printf along with KASSERT to reduce the amount
of code here.
2001-04-13 08:07:37 +00:00
Alfred Perlstein
b28cb1ca07 remove truncated part from comment 2001-04-12 21:50:03 +00:00
John Baldwin
1005a129e5 Convert the allproc and proctree locks from lockmgr locks to sx locks. 2001-03-28 11:52:56 +00:00
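With sx locks, read-mostly traversals can take the lock shared and still sleep if needed; a minimal sketch:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/sx.h>
#include <sys/proc.h>

/* Count processes while allowing other readers to run concurrently. */
static int
count_procs(void)
{
	struct proc *p;
	int n;

	n = 0;
	sx_slock(&allproc_lock);	/* shared (read) acquisition */
	LIST_FOREACH(p, &allproc, p_list)
		n++;
	sx_sunlock(&allproc_lock);
	return (n);
}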
John Baldwin
f34fa851e0 Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
2001-03-28 09:17:56 +00:00
Thomas Moestl
368d2edce4 Export intrnames and intrcnt as sysctls (hw.nintr, hw.intrnames and
hw.intrcnt).

Approved by:	rwatson
2001-03-23 03:45:17 +00:00
Matthew Dillon
b823bbd6be Fix a lock reversal problem in the VM subsystem related to threaded
programs.   There is a case during a fork() which can cause a deadlock.

From Tor -
The workaround consists of setting a flag in the vm map that
indicates that a fork is in progress and using that mark in the page
fault handling to force a revalidation failure.  That change will only
affect (pessimize) page fault handling during fork for threaded
(linuxthreads style) applications and applications using aio_*().

Submitted by: tegge
2001-03-14 06:48:53 +00:00
Matthew Dillon
1a484d28dd Temporarily remove the vm_map_simplify() call from vm_map_insert(). The
call is correct, but it interferes with the massive hack called
vm_map_growstack().  The call will be returned after our stack handling
code is fixed.

Reported by: tegge
2001-03-14 06:09:42 +00:00
Ian Dowse
d30344bdfa When creating a shadow vm_object in vmspace_fork(), only one
reference count was transferred to the new object, but both the
new and the old map entries had pointers to the new object.
Correct this by transferring the second reference.

This fixes a panic that can occur when mmap(2) is used with the
MAP_INHERIT flag.

PR:		i386/25603
Reviewed by:	dillon, alc
2001-03-09 18:25:54 +00:00
John Baldwin
136d8f42b9 Unrevert the pmap_map() changes. They weren't broken on x86.
Sense beaten into me by:	peter
2001-03-07 05:29:21 +00:00
John Baldwin
4a01ebd482 Back out the pmap_map() change for now, it isn't completely stable on the
i386.
2001-03-07 01:04:17 +00:00
John Baldwin
968950e5d1 - Rework pmap_map() to take advantage of direct-mapped segments on
supported architectures such as the alpha.  This allows us to save
  on kernel virtual address space, TLB entries, and (on the ia64) VHPT
  entries.  pmap_map() now modifies the passed in virtual address on
  architectures that do not support direct-mapped segments to point to
  the next available virtual address.  It also returns the actual
  address that the request was mapped to.
- On the IA64 don't use a special zone of PV entries needed for early
  calls to pmap_kenter() during pmap_init().  This gets us in trouble
  because we end up trying to use the zone allocator before it is
  initialized.  Instead, with the pmap_map() change, the number of needed
  PV entries is small enough that we can get by with a static pool that is
  used until pmap_init() is complete.

Submitted by:		dfr
Debugging help:		peter
Tested by:		me
2001-03-06 06:06:42 +00:00
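The revised contract, as a sketch of a hypothetical early-boot call site: the address cursor goes in by reference, and the caller must use the returned address, which on direct-mapped architectures is not derived from the cursor at all:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

static caddr_t
early_map_sketch(vm_offset_t *vap, vm_offset_t start, vm_offset_t end)
{
	vm_offset_t addr;

	/*
	 * On a direct-mapped architecture (e.g. alpha) *vap is left
	 * untouched and a direct-map address is returned; elsewhere
	 * *vap is advanced past the new mapping.
	 */
	addr = pmap_map(vap, start, end, VM_PROT_READ | VM_PROT_WRITE);
	return ((caddr_t)addr);
}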
Alfred Perlstein
8125b1e66e Simplify vm_object_deallocate() by decrementing the refcount first.
This allows some of the conditionals to be combined.
2001-03-04 20:25:23 +00:00
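Schematically, decrementing first means every branch tests the already-updated count, which is what lets the conditionals merge (a sketch, not the real function body):

#include <vm/vm.h>
#include <vm/vm_object.h>

static void
deallocate_sketch(vm_object_t object)
{
	object->ref_count--;			/* drop our reference first */
	if (object->ref_count > 1)		/* others remain: done */
		return;
	if (object->ref_count == 1) {
		/* one reference left: e.g. try a shadow collapse */
		return;
	}
	vm_object_terminate(object);		/* last reference is gone */
}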
Andrew Gallatin
c909b97167 Allocate vm_page_array and vm_page_buckets from the end of the biggest chunk
of memory, rather than from the start.

This fixes problems allocating bouncebuffers on alphas where there is only
1 chunk of memory (unlike PCs where there is generally at least one small
chunk and a large chunk).  Having 1 chunk had been fatal, because these
structures take over 13MB on a machine with 1GB of RAM. This doesn't leave
much room for other structures and bounce buffers if they're at the front.

Reviewed by: dfr, anderson@cs.duke.edu, silence on -arch
Tested by: Yoriaki FUJIMORI <fujimori@grafin.fujimori.cache.waseda.ac.jp>
2001-03-01 19:21:24 +00:00
Matthew Dillon
5bf53acb74 If we intend to make the page writable without requiring another fault,
make sure that PG_NOSYNC is properly set.  Previously we only set it
for a write-fault, but this can occur on a read-fault too.
(will be MFCd prior to 4.3 freeze)
2001-02-28 04:26:43 +00:00
Robert Watson
edfa785a8e Introduce per-swap area accounting in the VM system, and export
this information via the vm.nswapdev sysctl (number of swap areas)
and vm.swapdevX nodes (where X is the device), which contain the MIBs
dev, blocks, used, and flags.  These changes are required to allow
top and other userland swap-monitoring utilities to run without
setgid kmem.

Submitted by:	Thomas Moestl <tmoestl@gmx.net>
Reviewed by:	freebsd-audit
2001-02-23 18:46:21 +00:00
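A userland sketch of how a monitor such as top(1) could consume the new MIBs without setgid kmem; vm.nswapdev comes from the commit, while the exact per-device node layout is assumed:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int nswapdev;
	size_t len = sizeof(nswapdev);

	if (sysctlbyname("vm.nswapdev", &nswapdev, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("%d swap device(s)\n", nswapdev);
	/* The vm.swapdevX nodes (dev, blocks, used, flags) would be
	   read the same way, one node per configured device. */
	return (0);
}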
Dag-Erling Smørgrav
2f9564de0f Fix formatting bugs introduced in sysctl_vm_zone() by the previous commit.
Also, if SYSCTL_OUT() returns a non-zero value, stop at once.
2001-02-22 14:44:39 +00:00
Jake Burkholder
d5a08a6065 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
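The effect on callers, with hypothetical lock names:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx foo_mtx;	/* a MTX_DEF (sleep) mutex */
static struct mtx intr_mtx;	/* a MTX_SPIN mutex */

static void
interface_sketch(void)
{
	/* Old: mtx_enter(&foo_mtx, MTX_DEF) / mtx_exit(&foo_mtx, MTX_DEF) */
	mtx_lock(&foo_mtx);
	mtx_unlock(&foo_mtx);

	/* Old: mtx_enter(&intr_mtx, MTX_SPIN) / mtx_exit(&intr_mtx, MTX_SPIN) */
	mtx_lock_spin(&intr_mtx);
	mtx_unlock_spin(&intr_mtx);

	/* The two surviving flags go through the explicit wrappers: */
	mtx_lock_flags(&foo_mtx, MTX_QUIET);
	mtx_unlock_flags(&foo_mtx, MTX_QUIET);
}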
Poul-Henning Kamp
fc2ffbe604 Mechanical change to use <sys/queue.h> macro API instead of
fondling implementation details.

Created with: sed(1)
Reviewed by: md5(1)
2001-02-04 13:13:25 +00:00
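For instance, with a hypothetical list type:

#include <sys/queue.h>

struct item {
	TAILQ_ENTRY(item)	link;	/* tqe_next/tqe_prev live in here */
	int			value;
};
TAILQ_HEAD(itemq, item);

static void
requeue_sketch(struct itemq *from, struct itemq *to, struct item *it)
{
	/* Formerly done by assigning tqe_next/tqe_prev by hand. */
	TAILQ_REMOVE(from, it, link);
	TAILQ_INSERT_TAIL(to, it, link);
}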
Matthew Dillon
4e71e795a1 This commit represents work mainly submitted by Tor and slightly modified
by myself.  It solves a serious vm_map corruption problem that can occur
with the buffer cache when block sizes > 64K are used.  This code has been
heavily tested in -stable but only tested somewhat on -current.  An MFC
will occur in a few days.  My additions include the vm_map_simplify_entry()
and a minor buffer cache boundary case fix.

Make the buffer cache use a system map for buffer cache KVM rather than a
normal map.

Ensure that VM objects are not allocated for system maps.  There were cases
where a buffer map could wind up with a backing VM object -- normally
harmless, but this could also result in the buffer cache blocking in places
where it assumes no blocking will occur, possibly resulting in corrupted
maps.

Fix a minor boundary case when the buffer cache size limit is reached that
could result in non-optimal code.

Add vm_map_simplify_entry() calls to prevent 'creeping proliferation'
of vm_map_entry's in the buffer cache's vm_map.  Previously only a simple
linear optimization was made.  (The buffer vm_map typically has only a
handful of vm_map_entry's.  This stabilizes it at that level permanently).

PR: 20609
Submitted by: (Tor Egge) tegge
2001-02-04 06:19:28 +00:00
John Baldwin
45ece682fd - Doh, lock faultin() with proc lock in scheduler().
- Lock p_swtime with sched_lock in scheduler() as well.
2001-01-25 01:38:09 +00:00
Jason Evans
1b367556b5 Convert all simplelocks to mutexes and remove the simplelock implementations. 2001-01-24 12:35:55 +00:00
John Baldwin
69b4045657 Argh, I didn't get this test right when I converted it. Break this up
into two separate if's instead of nested if's.  Also, reorder things
slightly to avoid unnecessary mutex operations.
2001-01-24 12:23:17 +00:00