Commit Graph

3323 Commits

jhb
7cc6cf3ca7 Split the WITNESS and MUTEX_DEBUG options apart so that WITNESS does not
depend on MUTEX_DEBUG.  The MUTEX_DEBUG option turns on extra assertions
and checks to verify that mutexes themselves are implemented properly.
The WITNESS option uses extra checks and diagnostics to verify that other
code is using mutexes properly.
2000-12-01 00:10:59 +00:00
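
With the options split, each can be enabled on its own. A minimal sketch of the resulting independence (the function, field, and witness-hook names here are hypothetical, not the actual kern_mutex.c code):

    void
    mtx_debug_checks(struct mtx *m)
    {
    #ifdef MUTEX_DEBUG
            /* Verify the mutex itself is intact. */
            KASSERT(m->mtx_magic == MTX_MAGIC, ("corrupt mutex"));
    #endif
    #ifdef WITNESS
            /* Verify the caller is using the mutex correctly
             * (lock order, recursion, sleep rules). */
            witness_check(m);
    #endif
    }
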
rwatson
0c241b4642 o Add a comment to exec_check_permissions() to indicate that the
passed vnode must be locked; this is the case because of calls
  to VOP_GETATTR(), VOP_ACCESS(), and VOP_OPEN().  This becomes
  more of an issue when VOP_ACCESS() gets a bit more complicated,
  which it does when you introduce ACL, Capability, and MAC
  support.

Obtained from:	TrustedBSD Project
2000-11-30 21:06:05 +00:00
alfred
6b43ec0394 only call bwillwrite() to stall on I/O when dealing with vnodes; otherwise
we will stall on non-disk I/O for things like fifos and sockets
2000-11-30 20:23:14 +00:00
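
The guard boils down to a check on the descriptor type before throttling (a sketch of the idea, not the literal diff; assumes the write path has the struct file pointer fp in hand):

    /* Only throttle against the dirty-buffer backlog for vnode-backed
     * I/O; fifos and sockets never queue disk writes. */
    if (fp->f_type == DTYPE_VNODE)
            bwillwrite();
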
alfred
69ecf492a6 This is a fix for a problem described in PR kern/19572. It was
recently discussed on -hackers. The problem is a null-pointer
    dereference that happens in kern/vfs_lookup.c when accessing ".."
    while the v_mount entry of the current directory vnode is NULL. This
    happens when a volume is forcibly unmounted and the vnode for a
    working directory in the mounted volume is cleared.

PR: 23191
Submitted by: Thomas Moestl <tmoestl@gmx.net>
2000-11-30 20:04:44 +00:00
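
The shape of the fix as described (a sketch; the errno choice is an assumption):

    /* A forced unmount can leave the working-directory vnode with
     * no mount point; fail the ".." lookup cleanly instead of
     * dereferencing a NULL v_mount. */
    if (dp->v_mount == NULL) {
            error = ENOENT;
            goto bad;
    }
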
alfred
259d62f4d8 use an opportunistic locking strategy with the uidinfo structures to avoid
locking the global hash on each uifree()

make struct uidinfo only visible to the kernel

make uihold() a function rather than a macro to reduce bloat

swap the order of a spl/mutex to maintain consistency
2000-11-30 19:15:22 +00:00
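
A sketch of the opportunistic strategy (lock and field names are assumptions; the real code must also re-check the count under the hash lock to close the race window, which is omitted here):

    mtx_enter(&uip->ui_mtx, MTX_DEF);
    if (--uip->ui_ref > 0) {
            /* Common case: the global hash lock is never touched. */
            mtx_exit(&uip->ui_mtx, MTX_DEF);
            return;
    }
    mtx_exit(&uip->ui_mtx, MTX_DEF);
    /* Last reference: take the global lock only to unlink and free. */
    mtx_enter(&uihashtbl_mtx, MTX_DEF);
    LIST_REMOVE(uip, ui_hash);
    mtx_exit(&uihashtbl_mtx, MTX_DEF);
    FREE(uip, M_UIDINFO);
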
alfred
40b5fc7a7c make crfree into a function rather than a macro to avoid bloat because of
the mutex acquire/release

reorder struct ucred
2000-11-30 19:09:48 +00:00
mckusick
4a644ba189 Get rid of a bogus mtx_exit (it was attempting to release an
already released mutex).

Submitted by:	"Chris Knight" <chris@aims.com.au>
2000-11-30 19:09:29 +00:00
marcel
75c76bdc6b Don't use p->p_sigstk.ss_flags to keep track of whether the
process is on the alternate stack or not. For compatibility
with sigstack(2), the state is still updated when needed.

We now determine whether the process is on the alternate
stack by looking at its stack pointer. This allows a process
to siglongjmp from a signal handler on the alternate stack
to the place of the sigsetjmp on the normal stack. Had we
kept maintaining state, this would have invalidated the state
information, causing a subsequent signal to be delivered
on the normal stack instead of the alternate stack.

PR: 22286
2000-11-30 05:23:49 +00:00
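
The stack-pointer test amounts to a range check (a sketch; the macro name is hypothetical):

    /* The process is on the alternate stack iff its SP lies
     * within [ss_sp, ss_sp + ss_size). */
    #define SIG_ONSTACK(p, sp)                                      \
            ((sp) >= (u_long)(p)->p_sigstk.ss_sp &&                 \
             (sp) <  (u_long)(p)->p_sigstk.ss_sp + (p)->p_sigstk.ss_size)
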
jhb
ab556afac4 Fix up priority propagation:
- Use a better test for determining when a process is running.
- Convert some checks to assertions.
- Remove unnecessary tests.
- Save the priority before acquiring a mutex rather than in msleep(9).
2000-11-30 00:51:16 +00:00
jhb
e96d454c32 Set p_mtxname when blocking on a mutex and clear it when waking up.
2000-11-29 20:17:15 +00:00
jhb
e02d1f01f5 Save a copy of p_mtxname in e_mtxname when creating an eproc.
2000-11-29 20:14:50 +00:00
jhb
0640108ef5 Use an atomic operation with an appropriate memory barrier when releasing
a contested sleep mutex in the case that at least two processes are blocked
on the contested mutex.
2000-11-29 18:41:19 +00:00
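
Conceptually, the unlock must publish with release semantics so the woken waiter sees every store made while the lock was held (a sketch, not the committed code; the exact atomic primitive is an assumption):

    /* Two or more waiters: leave the lock word in the contested
     * state with a release barrier before waking one of them. */
    atomic_store_rel_ptr(&m->mtx_lock, (void *)MTX_CONTESTED);
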
jhb
42af0b6bb4 The sched_lock mutex goes after the sio mutex in the locking order since
a software interrupt can be scheduled in the sio interrupt handler while
the sio mutex is held.
2000-11-29 18:38:14 +00:00
jhb
35b8911585 Save the line number and filename of the last mtx_enter operation for
spin locks.  We already do this for sleep locks.
2000-11-29 18:37:01 +00:00
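
The usual way to capture the call site is in the mtx_enter() macro itself (a sketch; the recorded field names are assumptions):

    #define mtx_enter(m, type)                                      \
            _mtx_enter((m), (type), __FILE__, __LINE__)

    /* ... and inside _mtx_enter(), once the lock is held: */
    m->mtx_file = file;     /* call site of the last acquirer */
    m->mtx_line = line;
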
jhb
896b5da519 Don't drop Giant and the passed-in mutex incorrectly in the
cold || panicstr case.  Do drop the passed-in mutex in that case if
PDROP is specified.
2000-11-29 18:32:50 +00:00
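
The corrected early return reads roughly like this (a sketch):

    if (cold || panicstr) {
            /* Never sleep during boot or panic.  Keep Giant and the
             * caller's mutex held, unless PDROP asks us to drop it. */
            if (mtx != NULL && (priority & PDROP))
                    mtx_exit(mtx, MTX_DEF);
            return (0);
    }
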
jhb
39cf580768 Only print out APIC info on an SMP system during a panic if APIC_IO is
defined.
2000-11-29 01:33:15 +00:00
jhb
1b12cb1ecb Don't wait forever for CPUs to stop or restart. Instead, give up after a
timeout.  If DIAGNOSTIC is turned on, then display a message to the console
with a map of which CPUs failed to stop or restart.  This gives an SMP box
at least a fighting chance of getting into DDB if one of the other CPUs has
interrupts disabled.
2000-11-28 23:52:36 +00:00
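
A sketch of the bounded wait (the iteration bound and variable names are assumptions):

    /* Spin a bounded number of times rather than forever. */
    for (i = 0; i < 100000 && (stopped_cpus & map) != map; i++)
            ;       /* spin */
    #ifdef DIAGNOSTIC
    if ((stopped_cpus & map) != map)
            printf("CPUs failed to stop: 0x%x\n", map & ~stopped_cpus);
    #endif
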
jkh
8c1ad59399 Kernel support for erase2 character.
Submitted by:	Rui Pedro Mendes Salgueiro <rps@mat.uc.pt>
2000-11-28 20:03:23 +00:00
mdodd
c18bb8265f Alter the return value and arguments of the GET_RESOURCE_LIST bus method.
Alter consumers of this method to conform to the new convention.
Minor cosmetic adjustments to bus.h.

This isn't of concern as this interface isn't in use yet.
2000-11-28 06:49:15 +00:00
jake
9326f655fc Use callout_reset instead of timeout(9). Most callouts are statically
allocated; two have been added to struct proc for setitimer and sleep.

Reviewed by:	jhb, jlemon
2000-11-27 22:52:31 +00:00
jhb
79bed97f3e Drop Giant around the mi_switch() call in yield().
Submitted by:	tegge
2000-11-27 18:48:13 +00:00
alfred
614730a418 ucred system overhaul:
1) mpsafe (protect the refcount with a mutex).
2) reduce duplicated code by removing the inlined crdup() from crcopy()
   and make crcopy() call crdup().
3) use M_ZERO flag when allocating initial structs instead of calling bzero
   after allocation.
4) expand the size of the refcount from a u_short to a u_int; with
   shorts we might have had an overflow.

Glanced at by: jake
2000-11-27 00:09:16 +00:00
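
Item 1 amounts to guarding the reference count with a per-ucred mutex (a sketch following the commit's description; the mutex field name is an assumption):

    void
    crfree(struct ucred *cr)
    {
            mtx_enter(&cr->cr_mtx, MTX_DEF);
            if (--cr->cr_ref == 0) {
                    mtx_exit(&cr->cr_mtx, MTX_DEF);
                    mtx_destroy(&cr->cr_mtx);
                    FREE(cr, M_CRED);
            } else
                    mtx_exit(&cr->cr_mtx, MTX_DEF);
    }
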
alfred
011c33c2f9 Move the #define of _KERN_MUTEX_C_ so that it's before any system headers
are included.  System headers can include sys/mutex.h and then certain
macros do not get defined.

Reviewed by: jake
2000-11-26 21:14:17 +00:00
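
The ordering issue in a nutshell (a sketch):

    /* _KERN_MUTEX_C_ must be defined before any header that might
     * itself pull in <sys/mutex.h>, or the macros it gates are
     * never defined. */
    #define _KERN_MUTEX_C_
    #include <sys/param.h>
    #include <sys/mutex.h>
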
phk
7101ba5caa Simplify the tprintf() API.
Lose the special <sys/tprintf.h> #include file.
2000-11-26 20:35:21 +00:00
phk
7c4763bbdd Make log(-1, ...) do what addlog(...) did.
Replace all uses of addlog(...) with log(-1, ...)

Remove bogus "register" keywords in subr_prf.c

Make log() return void.
2000-11-26 19:34:06 +00:00
phk
2ad308fd87 Make diskerr() always log with printf.
2000-11-26 19:29:15 +00:00
jake
089f02872c Add uidinfo hash and uidinfo struct to the witness order list.
2000-11-26 15:05:46 +00:00
alfred
fe53724a0f Make uidinfo subsystem mpsafe
use a mutex lock when looking up/deleting entries on the hashlist
use a mutex lock on each uidinfo when updating fields

make uifree() a void function rather than 'int' since no one cares

allocate uidinfo structs with the M_ZERO flag and don't explicitly initialize
them

Assisted by: eivind, jhb, jakeb
2000-11-26 12:08:17 +00:00
jlemon
7f57729d27 Revert the last commit to the callout interface, and add a flag to
callout_init() indicating whether the callout is safe or not.  Update
the callers of callout_init() to reflect the new interface.

Okayed by: Jake
2000-11-25 06:22:16 +00:00
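
After this change a caller declares MP safety up front (a sketch; the flag spelling follows the CALLOUT_MPSAFE name used in the surrounding commits, and the handler is a placeholder):

    struct callout c;

    callout_init(&c, CALLOUT_MPSAFE);       /* handler runs without Giant */
    callout_reset(&c, hz, my_handler, sc);  /* fire in one second */
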
jake
a177d57cb0 - Rename callout_reset to _callout_reset and add a flags argument.
- Add macros callout_reset, which does the obvious, and
  mp_callout_reset, which passes the CALLOUT_MPSAFE flag.
2000-11-25 03:34:49 +00:00
jake
0c0be4e826 Protect the following with a lockmgr lock:
	allproc
	zombproc
	pidhashtbl
	proc.p_list
	proc.p_hash
	nextpid

Reviewed by:	jhb
Obtained from:	BSD/OS and netbsd
2000-11-22 07:42:04 +00:00
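
Traversals of the process lists now take the lock around the whole walk (a sketch; assumes the lock is named allproc_lock, as in the BSD/OS code this came from):

    struct proc *p;

    lockmgr(&allproc_lock, LK_SHARED, NULL, curproc);
    LIST_FOREACH(p, &allproc, p_list) {
            /* p cannot be unlinked while the lock is held */
    }
    lockmgr(&allproc_lock, LK_RELEASE, NULL, curproc);
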
jhb
f541f76a94 Ahem, fix the disclaimer portion of the copyright so it disclaims the
voices in my head.  You can sue the voices in Bill Paul's head all you
want.

Noticed by:	jhb
2000-11-21 21:10:15 +00:00
jlemon
71439be003 Protect p_wchan with sched_lock in selwakeup().
2000-11-21 20:22:34 +00:00
alc
dfa19cb0ce Provide a new interface for the user of aio_read() and aio_write() to request
a kevent upon completion of the I/O.  Specifically, introduce a new type
of sigevent notification, SIGEV_EVENT.  If sigev_notify is SIGEV_EVENT,
then sigev_notify_kqueue names the kqueue that should receive the event
and sigev_value contains the "void *" that is copied into the kevent's udata
field.

In contrast to the existing interface, this one: 1) works on
the Alpha 2) avoids the extra copyin() call for the kevent because all
of the information needed is in the sigevent and 3) could be
applied to request a single kevent upon completion of an entire lio_listio().

Reviewed by:	jlemon
2000-11-21 19:36:36 +00:00
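
Requesting a kevent on completion then looks something like this (a sketch of the described interface; fd, buf, kq, and udata are placeholders, and error handling is omitted):

    struct aiocb acb;

    bzero(&acb, sizeof(acb));
    acb.aio_fildes = fd;
    acb.aio_buf = buf;
    acb.aio_nbytes = sizeof(buf);
    /* Deliver completion as a kevent on kq, not as a signal. */
    acb.aio_sigevent.sigev_notify = SIGEV_EVENT;
    acb.aio_sigevent.sigev_notify_kqueue = kq;
    acb.aio_sigevent.sigev_value.sigval_ptr = udata;
    aio_read(&acb);
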
alfred
5870a97ec2 Accept filters broke kernels compiled without options INET.
Make accept filters conditional on INET support to fix.

Pointed out by: bde
Tested and assisted by: Stephen J. Kiernan <sab@vegamuse.org>
2000-11-20 01:35:25 +00:00
rwatson
ee2707f1c2 o Export cp_time ("CPU time statistics") using SYSCTL_OPAQUE.
This removes a reason that systat requires setgid kmem.  More to
  come.
2000-11-20 00:44:58 +00:00
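
The export is essentially a one-liner (a sketch; the format string is an assumption, and the nchstats export in the next entry is analogous):

    SYSCTL_OPAQUE(_kern, OID_AUTO, cp_time, CTLFLAG_RD,
        &cp_time, sizeof(cp_time), "LU", "CPU time statistics");
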
rwatson
7302881265 o Export nchstats ("VFS cache effectiveness statistics") using
SYSCTL_OPAQUE.  This removes a reason that systat requires
  setgid kmem.  More to come.
2000-11-20 00:41:11 +00:00
dwmalone
0d441a8a27 Make sbcompress use the new M_WRITABLE macro. Previously sbcompress
could not compress into clusters. This could result in lots of
wasted clusters while receiving small packets from an interface
that uses clusters for all its packets.

Patch is partially from BSDi (limiting the size of the copy) and
based on a patch for 4.1 by Ian Dowse <iedowse@maths.tcd.ie> and
myself.

Reviewed by:	bmilekic
Obtained from:	BSDi
Submitted by:	iedowse
2000-11-19 22:22:47 +00:00
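
The gist of the change (a sketch; the surrounding sbcompress() loop is elided and the copy-size cap is an assumption based on the BSDi note):

    /* Append into the previous mbuf when it is writable, i.e.
     * not read-only and not a shared cluster. */
    if (n != NULL && M_WRITABLE(n) &&
        m->m_len <= M_TRAILINGSPACE(n) && m->m_len <= MCLBYTES / 4) {
            bcopy(mtod(m, caddr_t), mtod(n, caddr_t) + n->m_len,
                (u_int)m->m_len);
            n->m_len += m->m_len;
            sb->sb_cc += m->m_len;
            m = m_free(m);
            continue;
    }
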
jake
f265931038 - Protect the callout wheel with a separate spin mutex, callout_lock.
- Use the mutex in hardclock to ensure no races between it and
  softclock.
- Make softclock be INTR_MPSAFE and provide a flag,
  CALLOUT_MPSAFE, which specifies that a callout handler does not
  need Giant.  There is still no way to set this flag when
  registering a callout.

Reviewed by:	-smp@, jlemon
2000-11-19 06:02:32 +00:00
dillon
2ace352085 Implement a low-memory deadlock solution.
Removed most of the hacks that were trying to deal with low-memory
    situations prior to now.

    The new code is based on the concept that I/O must be able to function in
    a low memory situation.  All major modules related to I/O (except
    networking) have been adjusted to allow allocation out of the system
    reserve memory pool.  These modules now detect a low memory situation but
    rather than block, they continue to operate and then return resources
    to the memory pool instead of caching them or leaving them wired.

    Code has been added to stall in a low-memory situation prior to a vnode
    being locked.

    Thus situations where a process blocks in a low-memory condition while
    holding a locked vnode have been reduced to near nothing.  Not only will
    I/O continue to operate, but many prior deadlock conditions simply no
    longer exist.

Implement a number of VFS/BIO fixes

	(found by Ian): in biodone(), bogus-page replacement code, the loop
        was not properly incrementing loop variables prior to a continue
        statement.  We do not believe this code can be hit anyway but we
        aren't taking any chances.  We'll turn the whole section into a
        panic (as it already is in brelse()) after the release is rolled.

	In biodone(), the foff calculation was incorrectly
        clamped to the iosize, causing the wrong foff to be calculated
        for pages in the case of an I/O error or biodone() called without
        initiating I/O.  The problem always caused a panic before.  Now it
        doesn't.  The problem is mainly an issue with NFS.

	Fixed casts for ~PAGE_MASK.  This code worked properly before only
        because the calculations use signed arithmetic.  Better to properly
        extend PAGE_MASK first before inverting it for the 64 bit masking
        op.

	In brelse(), the bogus_page fixup code was improperly throwing
        away the original contents of 'm' when it did the j-loop to
        fix the bogus pages.  The result was that it would potentially
        invalidate parts of the *WRONG* page(!), leading to corruption.

	There may still be cases where a background bitmap write is
        being duplicated, causing potential corruption.  We have identified
        a potentially serious bug related to this but the fix is still TBD.
        So instead this patch contains a KASSERT to detect the problem
  	and panic the machine rather than continue to corrupt the filesystem.
	The problem does not occur very often; it is very hard to
	reproduce, and it may or may not be the cause of the corruption
	people have reported.

Reviewed by: (VFS/BIO: mckusick, Ian Dowse <iedowse@maths.tcd.ie>)
Testing by: (VM/Deadlock) Paul Saab <ps@yahoo-inc.com>
2000-11-18 23:06:26 +00:00
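
The ~PAGE_MASK cast fix in isolation (a sketch): widen before inverting, so the upper bits of the mask are ones by construction rather than by sign extension.

    /* PAGE_MASK is an int (e.g. 0xfff); plain ~PAGE_MASK only worked
     * because signed sign-extension happened to fill the upper bits. */
    foff = (foff + PAGE_SIZE) & ~(vm_ooffset_t)PAGE_MASK;
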
dillon
15a44d16ca This patchset fixes a large number of file descriptor race conditions.
Pre-rfork code assumed inherent locking of a process's file descriptor
    array.  However, with the advent of rfork() the file descriptor table
    could be shared between processes.  This patch closes over a dozen
    serious race conditions related to one thread manipulating the table
    (e.g. closing or dup()ing a descriptor) while another is blocked in
    an open(), close(), fcntl(), read(), write(), etc...

PR: kern/11629
Discussed with: Alexander Viro <viro@math.psu.edu>
2000-11-18 21:01:04 +00:00
jhb
abf0152035 Release sched_lock very briefly to give interrupts a chance to fire if we
are in softclock() for a long time.  The old code already did an
splx()/splhigh() pair here; I just missed adding in the equivalent mutex
operations on sched_lock earlier.
2000-11-18 00:21:00 +00:00
tegge
b45236a982 Don't attempt to cluster write buffers where the VMIO flag isn't set.
2000-11-17 23:40:08 +00:00
jake
3a97b3e213 - Split the run queue and sleep queue linkage, so that a process
may block on a mutex while on the sleep queue without corrupting
it.
- Move dropping of Giant to after the acquire of sched_lock.

Tested by:	John Hay <jhay@icomtek.csir.co.za>
		jhb
2000-11-17 18:09:18 +00:00
jhb
c87c13134f The recent changes to msleep() and mawait() resulted in timeout() and
untimeout() not being called with Giant in those functions.  For now,
use the sched_lock to protect the callout wheel in softclock() and in
the various timeout and callout functions.

Noticed by:	tegge
2000-11-16 21:20:52 +00:00
jhb
c0bba69cbe Don't release and acquire Giant in mi_switch(). Instead, release and
acquire Giant as needed in functions that call mi_switch().  The releases
need to be done outside of the sched_lock to avoid potential deadlocks
from trying to acquire Giant while interrupts are disabled.

Submitted by:	witness
2000-11-16 02:16:44 +00:00
jhb
5fa45f43dd Argh, add in a missing release of the sched_lock.
2000-11-16 01:16:54 +00:00
jhb
8b193931b5 CURSIG() calls functions that acquire sleep mutexes, so it is not a good
idea to be holding the sched_lock while we are calling it.  As such,
release sched_lock before calling CURSIG() in msleep() and mawait() and
reacquire it after CURSIG() returns.

Submitted by:	witness
2000-11-16 01:07:19 +00:00
jhb
de636b04e8 - Rename await() to mawait(). mawait() is to await() as msleep() is to
tsleep().  Namely, mawait() takes an extra argument which is a mutex
  to drop when going to sleep.  Just as with msleep(), if the priority
  argument includes the PDROP flag, then the mutex will be dropped and will
  not be reacquired when the process wakes up.
- Add in a backwards compatible macro await() that passes in NULL as the
  mutex argument to mawait().
2000-11-15 22:39:35 +00:00
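
For reference, the msleep() pattern that mawait() now mirrors (a sketch; sc_mtx and sc_ready are placeholders):

    /* msleep() atomically releases sc_mtx while sleeping and,
     * since PDROP is not passed, reacquires it before returning. */
    mtx_enter(&sc_mtx, MTX_DEF);
    while (!sc_ready)
            msleep(&sc_ready, &sc_mtx, PZERO, "scwait", 0);
    mtx_exit(&sc_mtx, MTX_DEF);
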
jhb
0efbfa0260 - Replace a KASSERT() that knew too much about mutex internals with a
mtx_assert() that ensures the mutex we release during msleep() is both
  not recursed and owned by the current process.
2000-11-15 22:30:48 +00:00
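
The assertion now states the requirement directly (a sketch):

    /* The mutex msleep() is about to release must be owned by the
     * current process and must not be recursed. */
    mtx_assert(mtx, MA_OWNED | MA_NOTRECURSED);
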