Commit Graph

John Baldwin
15ec816acc - Axe RETIP() as it was very i386 specific and unwieldy. Instead, use the
passed in filename and line number in the KTR tracepoint message.
- Even though it is #if 0'd code, change the code that detects whether a
  process is an interrupt thread to check p->p_ithd against NULL rather
  than checking non-existent process flags from BSD/OS.
- Use '%p' to print pointers in KTR log messages instead of assuming
  sizeof(int) == sizeof(void *).
- Don't set p_mtxname to NULL when releasing a mutex.  It doesn't hurt
  to leave it set (we don't clear w_mesg for example) and at least at
  one time in the past, there used to be race conditions in the kernel
  that would result in setting this to NULL causing the kernel to
  dereference NULL.
- Make the _mtx_assert() function be compiled in if INVARIANT_SUPPORT is
  defined rather than if INVARIANTS is defined so that a KLD compiled
  with INVARIANTS that uses mtx_assert() can be used with a kernel that
  just has INVARIANT_SUPPORT compiled in.
2001-02-24 19:36:13 +00:00
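
A minimal sketch of the INVARIANT_SUPPORT arrangement described above, assuming a hypothetical KLD with its own lock: mtx_assert() expands to a real check because the module is built with INVARIANTS, and it resolves against the kernel's _mtx_assert(), which a kernel built with only INVARIANT_SUPPORT still provides.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx;      /* hypothetical module lock */
    static int foo_refs;

    static void
    foo_ref(void)
    {
            /* Compiled to a real check only if this KLD was built with
             * INVARIANTS; backed by _mtx_assert() from INVARIANT_SUPPORT. */
            mtx_assert(&foo_mtx, MA_OWNED);
            foo_refs++;
    }
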
Julian Elischer
33338e7370 Add knowledge of the netgraph spinlocks to the Witness code.
Well, at least I think that's how it's done.
2001-02-24 14:29:47 +00:00
John Baldwin
25d209f260 - Use the NOCPU constant.
- Move the ithread spin locks before sched lock and clk in preparation for
  future commits to the ithread code.
2001-02-22 02:12:54 +00:00
Bosko Milekic
2786342687 Change all instances of `CURPROC' and `CURTHD' to `curproc,' in
order to stay consistent.

Requested by: bde
2001-02-12 03:15:43 +00:00
Jake Burkholder
d5a08a6065 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
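
In outline, the unified queue looks like the sketch below (a condensed paraphrase of sys/runq.h as this commit describes it; the bitmap word type and the accessor macros are simplified):

    #include <sys/types.h>
    #include <sys/queue.h>

    #define RQ_NQS  64              /* one unified array of 64 queues */
    #define RQ_PPQ  4               /* priority levels per queue */

    TAILQ_HEAD(rqhead, proc);

    struct runq {
            u_int32_t     rq_status[RQ_NQS / 32]; /* non-empty-queue bitmap */
            struct rqhead rq_queues[RQ_NQS];      /* one FIFO per queue */
    };

    extern struct runq runq;        /* global queue that the compatibility
                                     * wrappers pass to runq_add() etc. */

Since the primitives take the queue as a parameter, per-cpu run queues only require instantiating more struct runq objects.
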
Bosko Milekic
5746a1d866 - Place back STR string declarations for lock/unlock strings used for KTR_LOCK
tracing in order to avoid duplication.
- Insert some tracepoints back into the mutex acq/rel code, thus ensuring
  that we can trace all lock acq/rel's again.
- All CURPROC != NULL checks are MPASS()es (under MUTEX_DEBUG) because they
  signify a serious mutex corruption.
- Change up some KASSERT()s to MPASS()es, and vice-versa, depending on the
  type of problem we're debugging (INVARIANTS is used here to check that
  the API is being used properly whereas MUTEX_DEBUG is used to ensure that
  something general isn't happening that will have bad impact on mutex
  locks).

Reminded by: jhb, jake, asmodai
2001-02-11 02:54:16 +00:00
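
The division of labor reads roughly like this, with illustrative conditions rather than the literal checks from the commit:

    /* MUTEX_DEBUG: something fundamental is broken in the mutex
     * implementation itself; CURPROC == NULL signifies corruption. */
    MPASS(CURPROC != NULL);

    /* INVARIANTS: the *caller* is using the mutex API improperly. */
    KASSERT(m->mtx_recurse == 0,
        ("mtx_destroy: mutex is still recursed"));
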
John Baldwin
c75e5182ce Unify the two sleep lock order lists to enforce the process lock ->
uidinfo lock locking order.
2001-02-09 20:52:02 +00:00
John Baldwin
e910ba59fc - Change the 'witness_list' ddb command to 'show mutexes'. Note that this
will only display sleep mutexes held by the current process.
- Clean up some nits in the witness_display() function and add a ddb
  command 'show witness' that dumps the hierarchy and order lists to the
  console.
- Use queue(3) macros where appropriate.
- Resort the spin lock order list so that "com" is before "sched_lock".
  Also, add appropriate #ifdef's around SMP and i386-specific mutexes.
- Add two new mutexes used to protect the ithread lists and tables to the
  order list.

Requested by:	bde (1)
2001-02-09 15:19:41 +00:00
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

Similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
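
Call-site flavor of the change, using a hypothetical driver lock sc_mtx alongside the real sched_lock:

    mtx_enter(&sc->sc_mtx, MTX_DEF);        /* old: type argument   */
    mtx_exit(&sc->sc_mtx, MTX_DEF);

    mtx_lock(&sc->sc_mtx);                  /* new: sleep locks     */
    mtx_unlock(&sc->sc_mtx);

    mtx_lock_spin(&sched_lock);             /* new: spin locks      */
    mtx_unlock_spin(&sched_lock);

    mtx_lock_flags(&sc->sc_mtx, MTX_QUIET); /* flags via wrappers   */
    mtx_unlock_flags(&sc->sc_mtx, MTX_QUIET);
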
John Baldwin
d38b8dbfc8 Add a new ddb command 'witness_list' that lists the mutexes held by
curproc.

Requested by:	peter
2001-01-27 07:51:34 +00:00
Jason Evans
1b367556b5 Convert all simplelocks to mutexes and remove the simplelock implementations. 2001-01-24 12:35:55 +00:00
John Baldwin
8484de7555 - Don't use a union and fun tricks to shave one extra pointer off of struct
mtx right now as it makes debugging harder.  When we are in optimizing
  mode, we can revisit this.
- Fix the KTR trace messages to use %p rather than 0x%p to avoid duplicate
  0x's in KTR output.
- During witness_fixup, release Giant so that witness doesn't get confused.
  Also, grab all_mtx while walking the list of mutexes.
- Remove w_sleep and w_recurse.  Instead, perform checks on mutexes using
  the mutex's mtx_flags field.
- Allow debug.witness_ddb and debug.witness_skipspin to be set from the
  loader.
- Add Giant to the front of existing order_list entries to help ensure
  Giant is always first.
- Add an order entry for the various proc locks.  Note that this only
  helps keep proc in order mostly as the allproc and proctree mutexes are
  only obtained during a lockmgr operation on the specified mutex.
2001-01-24 10:57:01 +00:00
Jason Evans
56771ca74b Print correct file name and line number in mtx_assert().
Noticed by:	jake
2001-01-22 05:56:55 +00:00
Jason Evans
0cde2e34af Move most of sys/mutex.h into kern/kern_mutex.c, thereby making the mutex
inline functions non-inlined.  Hide parts of the mutex implementation that
should not be exposed.

Make sure that WITNESS code is not executed during boot until the mutexes
are fully initialized by SI_SUB_MUTEX (the original motivation for this
commit).

Submitted by:	peter
2001-01-21 22:34:43 +00:00
Jason Evans
527c2fd277 Make the order of the static initializer for all_mtx match the order of
fields in struct mtx.

Found by:	jake
2001-01-21 11:05:02 +00:00
Jason Evans
d1c1b8413e Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
Jake Burkholder
c1ef8aac9e - Make npx_intr INTR_MPSAFE and move acquiring Giant into the
function itself.
- Remove a hack to allow acquiring Giant from the npx asm trap
  vector.
2001-01-20 02:30:58 +00:00
Bosko Milekic
08812b3925 Implement MTX_RECURSE flag for mtx_init().
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flag argument variable. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.

The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.

The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks in dev/ stuff that I've found to
be recursive.

Reviewed by: jhb
2001-01-19 01:59:14 +00:00
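
A call-site sketch; note that on this date the enter/exit names were still current, and mtx_init() took a description string and a flag word:

    static struct mtx foo_mtx;      /* hypothetical recursive lock */

    mtx_init(&foo_mtx, "foo", MTX_DEF | MTX_RECURSE);

    mtx_enter(&foo_mtx, MTX_DEF);
    mtx_enter(&foo_mtx, MTX_DEF);   /* ok: MTX_RECURSE was declared, so the
                                     * lock just gains the MTX_RECURSED
                                     * state bit; without the declaration,
                                     * witness panics here */
    mtx_exit(&foo_mtx, MTX_DEF);
    mtx_exit(&foo_mtx, MTX_DEF);
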
Jake Burkholder
ef73ae4b0c Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
2001-01-10 04:43:51 +00:00
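
Usage shape; witness_spin_check is a real per-cpu variable of the period (see the 2000-11-15 commit below), while the other field names are illustrative:

    int cpu = PCPU_GET(cpuid);              /* read a per-cpu field       */
    PCPU_SET(witness_spin_check, 0);        /* write a per-cpu field      */
    struct proc **pp = PCPU_PTR(idleproc);  /* address of a per-cpu field */

    /* curproc is the exception and keeps its dedicated accessor. */
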
John Baldwin
562e4ffe86 - Add a new flag MTX_QUIET that can be passed to the various mtx_*
functions.  If this flag is set, then no KTR log messages are issued.
  This is useful for blocking excessive logging, such as with the internal
  mutex used by the witness code.
- Use MTX_QUIET on all of the mtx_enter/exit operations on the internal
  mutex used by the witness code.
- If we are in a panic, don't do witness checks in witness_enter(),
  witness_exit(), and witness_try_enter(), just return.
2000-12-13 21:53:42 +00:00
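
A sketch in the pre-rename interface of this date, where the flag rides along in the type argument (w_mtx stands for the witness code's internal spin mutex; treat the name as illustrative):

    mtx_enter(&w_mtx, MTX_SPIN | MTX_QUIET);    /* no KTR log entries */
    /* ... manipulate witness lists ... */
    mtx_exit(&w_mtx, MTX_SPIN | MTX_QUIET);
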
Jake Burkholder
92cf772d8d - Add code to detect if a system call returns with locks other than Giant
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
  panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
  a system call, which were missing for alpha and ia64.
2000-12-12 01:14:32 +00:00
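
Schematically, the return-path check looks like this (simplified; the real code sits in each architecture's syscall return path, and kern/syscalls.c supplies the name table):

    #ifdef WITNESS
            if (witness_list(p) != 0)
                    panic("system call %s returning with mutex(es) held",
                        syscallnames[code]);
    #endif
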
John Baldwin
428b4b5562 Oops, the witness mutex is a spin lock, so use MTX_SPIN in the call to
mtx_init().  Since the witness code ignores its internal mutex, this
doesn't result in any functional change.
2000-12-12 00:37:18 +00:00
David Malone
7cc0979fd6 Convert more malloc+bzero to malloc+M_ZERO.
Submitted by:	josh@zipperup.org
Submitted by:	Robert Drehmel <robd@gmx.net>
2000-12-08 21:51:06 +00:00
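
The conversion pattern, with a hypothetical softc allocation:

    /* Before: allocate, then zero by hand. */
    sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK);
    bzero(sc, sizeof(*sc));

    /* After: malloc(9) zeroes the allocation itself. */
    sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK | M_ZERO);
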
John Baldwin
6936206ebd Split the WITNESS and MUTEX_DEBUG options apart so that WITNESS does not
depend on MUTEX_DEBUG.  The MUTEX_DEBUG option turns on extra assertions
and checks to verify that mutexes themselves are implemented properly.
The WITNESS option uses extra checks and diagnostics to verify that other
code is using mutexes properly.
2000-12-01 00:10:59 +00:00
John Baldwin
1bd0eefb4c Fix up priority propagation:
- Use a better test for determining when a process is running.
- Convert some checks to assertions.
- Remove unnecessary tests.
- Save the priority before acquiring a mutex rather than in msleep(9).
2000-11-30 00:51:16 +00:00
John Baldwin
86327ad8a4 Set p_mtxname when blocking on a mutex and clear it when waking up. 2000-11-29 20:17:15 +00:00
John Baldwin
f404050e44 Use an atomic operation with an appropriate memory barrier when releasing
a contested sleep mutex in the case that at least two processes are blocked
on the contested mutex.
2000-11-29 18:41:19 +00:00
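
The shape of the fix per atomic(9), heavily abridged (the waiter hand-off around it is omitted): the release barrier orders the critical section's stores before the store that passes the lock on.

    /* Two or more blocked processes: keep the lock marked contested
     * and publish the new value with release semantics. */
    atomic_store_rel_ptr(&m->mtx_lock, (void *)MTX_CONTESTED);
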
John Baldwin
8f838cb563 The sched_lock mutex goes after the sio mutex in the locking order since
a software interrupt can be scheduled in the sio interrupt handler while
the sio mutex is held.
2000-11-29 18:38:14 +00:00
John Baldwin
bbc7a98a31 Save the line number and filename of the last mtx_enter operation for
spin locks.  We already do this for sleep locks.
2000-11-29 18:37:01 +00:00
Alfred Perlstein
0931dcefb3 Move the #define of _KERN_MUTEX_C_ so that it's before any system headers
are included.  System headers can include sys/mutex.h and then certain
macros do not get defined.

Reviewed by: jake
2000-11-26 21:14:17 +00:00
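
The fix in outline: the define must precede every #include, since a system header may itself pull in <sys/mutex.h> and the macros there key off the define.

    /* kern/kern_mutex.c */
    #define _KERN_MUTEX_C_          /* before ANY system header */

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mutex.h>
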
Jake Burkholder
a5d5c61c12 Add uidinfo hash and uidinfo struct to the witness order list. 2000-11-26 15:05:46 +00:00
Jake Burkholder
fa2fbc3dac - Protect the callout wheel with a separate spin mutex, callout_lock.
- Use the mutex in hardclock to ensure no races between it and
  softclock.
- Make softclock be INTR_MPSAFE and provide a flag,
  CALLOUT_MPSAFE, which specifies that a callout handler does not
  need giant.  There is still no way to set this flag when
  registering a callout.

Reviewed by:	-smp@, jlemon
2000-11-19 06:02:32 +00:00
Jake Burkholder
7da6f97772 - Split the run queue and sleep queue linkage, so that a process
may block on a mutex while on the sleep queue without corrupting
it.
- Move dropping of Giant to after the acquire of sched_lock.

Tested by:	John Hay <jhay@icomtek.csir.co.za>
		jhb
2000-11-17 18:09:18 +00:00
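
In struct proc terms the split looks roughly like this (a sketch; the field names are assumptions read out of the commit, not quoted from the diff):

    struct proc {
            /* ... */
            TAILQ_ENTRY(proc) p_procq;      /* run queue or mutex queue */
            TAILQ_ENTRY(proc) p_slpq;       /* sleep queue */
            /* ... */
    };

With independent linkage, a process sitting on a sleep queue can also be linked onto a mutex's queue of blocked processes without corrupting either list.
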
John Baldwin
20cdcc5b73 Don't release and acquire Giant in mi_switch(). Instead, release and
acquire Giant as needed in functions that call mi_switch().  The releases
need to be done outside of the sched_lock to avoid potential deadlocks
from trying to acquire Giant while interrupts are disabled.

Submitted by:	witness
2000-11-16 02:16:44 +00:00
John Baldwin
9c36c934a1 Include the right headers to get the DDB #define and the db_active variable. 2000-11-15 22:08:16 +00:00
John Baldwin
59f857e4ea Declare the 'witness_spin_check' properly as a per-CPU variable in the
non-SMP case.
2000-11-15 22:02:05 +00:00
John Baldwin
ecbd8e3710 Don't perform witness checks in witness_enter() during a panic. 2000-11-15 22:00:31 +00:00
John Baldwin
0fe4e534b1 Minor whitespace nit in a comment. 2000-11-10 21:21:20 +00:00
John Baldwin
a5a96a1978 - Use MUTEX_DECLARE() and MTX_COLD for the WITNESS code's internal mutex so
it can function before malloc(9) is up and running.
- Add two new options WITNESS_DDB and WITNESS_SKIPSPIN.  If WITNESS_SKIPSPIN
  is enabled, then spin mutexes are ignored by the WITNESS code.  If
  WITNESS_DDB is turned on and DDB is compiled into the kernel, then the
  kernel will drop into DDB when either a lock hierarchy violation occurs
  or mutexes are held when going to sleep.
- Add some new sysctls:
  debug.witness_ddb is a read-write sysctl that corresponds to WITNESS_DDB.
     The kernel option merely changes the default value to on at boot.
  debug.witness_skipspin is a read-only sysctl that one can use to determine
     if the kernel was compiled with WITNESS_SKIPSPIN.
- Wipe out the BSD/OS-specific lock order lists.  We get to build our own
  lists now as we add mutexes to the kernel.
2000-10-27 02:59:30 +00:00
John Baldwin
3127162743 Quiet some warnings. 2000-10-25 04:37:54 +00:00
John Baldwin
b67a3e6e85 Propagate the 'const'ness of mutex descriptions to the witness code to
quiet warnings.
2000-10-20 22:45:01 +00:00
John Baldwin
78f0da0373 Actually enable the witness code if the WITNESS kernel option is enabled. 2000-10-20 21:58:11 +00:00
John Baldwin
f5271ebc2f Doh. Fix a 64-bit-ism by using uintptr_t for a temporary lock variable
instead of int.
2000-10-20 20:24:40 +00:00
John Baldwin
36412d79b4 - Make the mutex code almost completely machine independent. This greatly
reduces the maintenance load for the mutex code.  The only MD portions
  of the mutex code are in machine/mutex.h now, which include the assembly
  macros for handling mutexes as well as optionally overriding the mutex
  micro-operations.  For example, we use optimized micro-ops on the x86
  platform #ifndef I386_CPU.
- Change the behavior of the SMP_DEBUG kernel option.  In the new code,
  mtx_assert() only depends on INVARIANTS, allowing other kernel developers
  to have working mutex assertions without having to include all of the
  mutex debugging code.  The SMP_DEBUG kernel option has been renamed to
  MUTEX_DEBUG and now just controls extra mutex debugging code.
- Abolish the ugly mtx_f hack.  Instead, we dynamically allocate
  separate mtx_debug structures on the fly in mtx_init, except for mutexes
  that are initialized very early in the boot process.  These mutexes
  are declared using a special MUTEX_DECLARE() macro, and use a new
  flag MTX_COLD when calling mtx_init.  This is still somewhat hackish,
  but it is less evil than the mtx_f filler struct, and the mtx struct is
  now the same size with and without mutex debugging code.
- Add some micro-micro-operation macros for doing the actual atomic
  operations on the mutex mtx_lock field to make it easier for other archs
  to override/optimize mutex ops if needed.  These new tiny ops also clean
  up the code in some places by replacing long atomic operation function
  calls that spanned 2-3 lines with a short 1-line macro call.
- Don't call mi_switch() from mtx_enter_hard() when we block while trying
  to obtain a sleep mutex.  Calling mi_switch() would bogusly release
  Giant before switching to the next process.  Instead, inline most of the
  code from mi_switch() in the mtx_enter_hard() function.  Note that when
  we finally kill Giant we can back this out and go back to calling
  mi_switch().
2000-10-20 07:26:37 +00:00
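
Flavor of the micro-op layer, as a sketch built on atomic(9) (_obtain_lock is also named in the 2001-02-09 interface-change commit above): an architecture may override these in machine/mutex.h, otherwise the machine-independent defaults apply.

    #ifndef _obtain_lock
    #define _obtain_lock(mp, tid)                                       \
            atomic_cmpset_acq_ptr(&(mp)->mtx_lock, (void *)MTX_UNOWNED, (tid))
    #endif

    #ifndef _release_lock
    #define _release_lock(mp, tid)                                      \
            atomic_cmpset_rel_ptr(&(mp)->mtx_lock, (tid), (void *)MTX_UNOWNED)
    #endif
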
John Baldwin
606f8eb27a Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just
use struct mtx, struct witness, and struct witness_blessed.

Requested by:	bde
2000-09-14 20:15:16 +00:00
Jason Evans
5340642a2e Style cleanups. No functional changes. 2000-09-09 23:18:48 +00:00
Jason Evans
46bf3fe5a6 Add file and line arguments to WITNESS_ENTER() and WITNESS_EXIT(), since
__FILE__ and __LINE__ don't get expanded usefully in inline functions.

Add const to all witness*() arguments that are filenames.
2000-09-09 22:43:22 +00:00
Jason Evans
12473b76dc Rename mtx_enter(), mtx_try_enter(), and mtx_exit() and wrap them with cpp
macros that expand to pass filename and line number information.  This is
necessary since we're using inline functions instead of macros now.

Add const to the filename pointers passed throughout the mtx and witness
code.
2000-09-08 21:48:06 +00:00
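
The reason and the pattern: __FILE__ and __LINE__ inside an inline function body expand to the header that defines the inline, not to the caller, so the caller's location has to be captured by a macro at the call site. A sketch:

    /* The renamed function takes explicit location arguments... */
    void    _mtx_enter(struct mtx *mtxp, int type, const char *file,
                int line);

    /* ...and a cpp wrapper captures the caller's location. */
    #define mtx_enter(mtxp, type)                                       \
            _mtx_enter((mtxp), (type), __FILE__, __LINE__)
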
Jason Evans
0384fff8c5 Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9).  (Note: The
  alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00