Commit Graph

107 Commits

Author SHA1 Message Date
Julian Elischer
71fad9fdee Completely redo thread states.
Reviewed by:	davidxu@freebsd.org
2002-09-11 08:13:56 +00:00
John Baldwin
0d975d6341 Add some KASSERT()'s to ensure that we don't perform spin mutex ops on
sleep mutexes and vice versa.  WITNESS normally should catch this but
not everyone uses WITNESS so this is a fallback to catch nasty but easy
to do bugs.
2002-09-03 18:25:16 +00:00
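A minimal sketch of the kind of fallback check added here, assuming the
lock_object layout of the time (mtx_object.lo_class, lock_class_mtx_sleep);
the field names and message text are approximations, not the committed code:

    KASSERT(m->mtx_object.lo_class == &lock_class_mtx_sleep,
        ("mtx_lock() of spin mutex %s @ %s:%d",
        m->mtx_object.lo_name, file, line));
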
Ian Dowse
02bd1bcd2a Add a new KTR type KTR_CONTENTION, and use it in the mutex code to
log the start and end of periods during which mtx_lock() is waiting
to acquire a sleep mutex. The log message includes the file and
line of both the waiter and the holder.

Reviewed by:	jhb, jake
2002-08-26 18:39:38 +00:00
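A rough sketch of the logging described above, using the generic CTR*
macros; the format strings and the owner_file/owner_line variables are
illustrative assumptions:

    CTR6(KTR_CONTENTION,
        "contention: %p at %s:%d wants %s, held at %s:%d",
        td, file, line, m->mtx_object.lo_name, owner_file, owner_line);
    /* ... wait for the sleep mutex ... */
    CTR4(KTR_CONTENTION, "contention end: %s acquired by %p at %s:%d",
        m->mtx_object.lo_name, td, file, line);
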
John Baldwin
ce39e722ec Disable optimization of spinlocks on UP kernels w/o debugging for now
since it breaks mtx_owned() on spin mutexes when used outside of
mtx_assert().  Unfortunately we currently use it in the i386 MD code
and in the sio(4) driver.

Reported by:	bde
2002-07-27 16:54:23 +00:00
Dag-Erling Smørgrav
b61860ad2d Add mtx_ prefixes to the fields used for mutex profiling, and fix a bug
where the profiling code would report the release point instead of the
acquisition point.

Requested by:	bde
2002-07-03 01:50:27 +00:00
Julian Elischer
e602ba25fd Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
To come: ia64 and PowerPC patches, patches for gdb, test program (in tools)

Reviewed by:	Almost everyone who counts
	(at various times, peter, jhb, matt, alfred, mini, bernd,
	and a cast of thousands)

	NOTE: this is still Beta code, and contains lots of debugging stuff.
	Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
John Baldwin
6a95e08f2f Replace thread_runnable() with thread_running() as the latter is more
accurate.

Suggested by:	julian
2002-06-04 22:36:24 +00:00
John Baldwin
7fcca6096f Optimize the adaptive mutex spin a bit. Use a simple while loop with
simple reads (and on IA32, a "pause" instruction for each iteration of the
loop) to spin until either the mutex owner field changes, or the lock owner
stops executing.

Suggested by:	tanimura
Tested on:	i386
2002-06-04 21:53:48 +00:00
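A sketch of the tightened spin: plain reads in a loop, with a "pause" on
each IA32 iteration, until the owner field changes or the owning thread
stops executing.  thread_running() is the private helper named in the
neighbouring commits; its use here is an approximation of the real test:

    while (mtx_owner(m) == owner && thread_running(owner)) {
    #ifdef __i386__
            ia32_pause();
    #endif
    }
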
John Baldwin
5853d37d3b Add a private thread_runnable() macro to make the code more readable and
make the KSE diff easier to maintain.
2002-06-04 21:50:02 +00:00
Dag-Erling Smørgrav
db586c8b7c Make the counters uintmax_ts, and use %ju rather than %llu. 2002-05-23 03:08:42 +00:00
John Baldwin
6b8c698908 Rename pause() to ia32_pause() so it doesn't conflict with the pause()
function defined in <unistd.h>.  I didn't #ifdef _KERNEL it because the
mutex implementation in libpthread will probably need this.
2002-05-22 20:32:39 +00:00
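The wrapper itself is tiny; a sketch consistent with the descriptions in
the two commits below (a single "pause" instruction, which pre-Pentium 4
IA32 CPUs execute as a NOP):

    static __inline void
    ia32_pause(void)
    {
            __asm __volatile("pause");
    }
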
John Baldwin
0228ea4e0b Rename cpu_pause() to pause(). Originally I was going to make this an
MI API with empty cpu_pause() functions on other arch's, but this
functionality is definitely unique to IA-32, so I decided to leave it
as i386-only and wrap it in #ifdef's.  I should have dropped the cpu_
prefix when I made that decision.

Requested by:	bde
2002-05-22 13:19:22 +00:00
John Baldwin
703fc290fb Add appropriate IA32 "pause" instructions to improve performance on
Pentium 4's and newer IA32 processors.  The "pause" instruction has been
verified by Intel to be a NOP on all currently existing IA32 processors
prior to the Pentium 4.
2002-05-21 22:26:35 +00:00
John Baldwin
0e54ddadd9 Fix an old cut 'n' paste bug inherited from BSD/OS: don't increment 'i'
twice once we are in the long wait stage of spinning on a spin mutex.
2002-05-21 21:27:05 +00:00
John Baldwin
e6302957fe Whitespace fixup, properly indent the body of an else clause. 2002-05-21 21:13:27 +00:00
John Baldwin
2498cf8c42 Add code to make default mutexes adaptive if the ADAPTIVE_MUTEXES kernel
option is used (not on by default).

- In the case of trying to lock a mutex, if the MTX_CONTESTED flag is set,
  then we can safely read the thread pointer from the mtx_lock member while
  holding sched_lock.  We then examine the thread to see if it is currently
  executing on another CPU.  If it is, then we keep looping instead of
  blocking.
- In the case of trying to unlock a mutex, it is now possible for a mutex
  to have MTX_CONTESTED set in mtx_lock but to not have any threads
  actually blocked on it, so we need to handle that case.  In that case,
  we just release the lock as if MTX_CONTESTED was not set and return.
- We do not adaptively spin on Giant as Giant is held for long times and
  it slows SMP systems down to a crawl (it was taking several minutes,
  like 5-10 or so for my test alpha and sparc64 SMP boxes to boot up when
  they adaptively spun on Giant).
- We only compile in the code to do this for SMP kernels, as it doesn't make
  sense for UP kernels.

Tested on:	i386, alpha, sparc64
2002-05-21 20:47:11 +00:00
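A compressed sketch of the adaptive decision described in the first bullet,
with sched_lock held; the exclusion of Giant follows the text above, but
the helper and masking constant names are approximations:

    #if defined(SMP) && defined(ADAPTIVE_MUTEXES)
        /* Owner is stable while sched_lock is held and MTX_CONTESTED is set. */
        owner = (struct thread *)(v & MTX_FLAGMASK);
        if (m != &Giant && thread_running(owner)) {
                mtx_unlock_spin(&sched_lock);
                continue;       /* retry the lock instead of blocking */
        }
    #endif
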
John Baldwin
e8fdcfb57a Optimize spin mutexes for UP kernels without debugging to just enter and
exit critical sections.  We only contest on a spin mutex on an SMP kernel
running on an SMP machine.
2002-05-21 20:34:28 +00:00
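Roughly, the UP non-debug fast path reduces a spin mutex to a critical
section; the macro split shown here is an assumption about the shape, not
the exact committed code:

    #if !defined(SMP) && !defined(MUTEX_DEBUG)
    #define _get_spin_lock(m, td, opts, file, line)     critical_enter()
    #define _rel_spin_lock(m)                           critical_exit()
    #endif
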
John Baldwin
0c88508a78 Change mtx_init() to now take an extra argument. The third argument is
the generic lock type for use with witness.  If this argument is NULL then
the lock name is used as the lock type.  Add a macro for a lock type name
for network driver locks.
2002-04-04 20:52:27 +00:00
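Typical callers after this change (the softc fields and "xx" driver are
made up; MTX_NETWORK_LOCK is assumed to be the network-driver type macro
mentioned above):

    /* Unique per-device name, shared witness type. */
    mtx_init(&sc->xx_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK,
        MTX_DEF);

    /* NULL type: witness falls back to using the lock name as the type. */
    mtx_init(&sc->xx_intr_mtx, "xx intr", NULL, MTX_SPIN);
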
Dag-Erling Smørgrav
e633070431 Revert to open hashing. It makes the code simpler, and works fairly well
even when the number of records approaches the size of the hash table.
Besides, the previous implementation (using linear probing) was broken :)

Also, use the newly introduced MTX_SYSINIT.
2002-04-02 23:26:32 +00:00
John Baldwin
c53c013bae - Move the MI mutexes sched_lock and Giant from being declared in the
various machdep.c's to being declared in kern_mutex.c.
- Add a new function mutex_init() used to perform early initialization
  needed for mutexes such as setting up thread0's contested lock list
  and initializing MI mutexes.  Change the various MD startup routines
  to call this function instead of duplicating all the code themselves.

Tested on:	alpha, i386
2002-04-02 22:19:16 +00:00
John Baldwin
7feefcd6ce Spelling police. 2002-04-02 20:44:30 +00:00
Andrew R. Reiter
c27b56999e - Add MTX_SYSINIT and SX_SYSINIT as macro glue for allowing sx and mtx
locks to be able to setup a SYSINIT call.  This helps in places where
  a lock is needed to protect some data, but the data is not truly
  associated with a subsystem that can properly initialize its lock.
  The macros use the mtx_sysinit() and sx_sysinit() functions,
  respectively, as the handler argument to SYSINIT().

Reviewed by: alfred, jhb, smp@
2002-04-02 16:05:43 +00:00
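Example of the glue in use, for a lock that has no natural subsystem init
hook to live in (the lock and description here are made up):

    static struct mtx foo_registry_mtx;
    MTX_SYSINIT(foo_registry, &foo_registry_mtx, "foo registry", MTX_DEF);
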
Dag-Erling Smørgrav
b784ffe91a Instead of get_cyclecount(9), use nanotime(9) to record acquisition and
release times.  Measurements are made and stored in nanoseconds but
presented in microseconds, which should be sufficient for the locks for
which we actually want this (those that are held long and / or often).
Also, rename some variables and structure members to unit-agnostic names.
2002-04-02 14:42:01 +00:00
Dag-Erling Smørgrav
6c35e80948 Mutex profiling code, conditional on the MUTEX_PROFILING option. Adds the
following sysctl variables:

  debug.mutex.prof.enable	    enable / disable profiling
  debug.mutex.prof.acquisitions	    number of mutex acquisitions recorded
  debug.mutex.prof.records	    number of acquisition points recorded
  debug.mutex.prof.maxrecords	    max number of acquisition points
  debug.mutex.prof.rejected	    number of rejections (due to full table)
  debug.mutex.prof.hashsize	    hash size
  debug.mutex.prof.collisions	    number of hash collisions
  debug.mutex.prof.stats	    profiling statistics

The code records four numbers for each acquisition point (identified by
source file name and line number): longest time held, total time held,
number of non-recursive acquisitions, average time held.  The measurements
are in clock cycles (as returned by get_cyclecount(9)); this may cause
measurements on some SMP systems to be unreliable.  This can probably be
worked around by replacing get_cyclecount(9) by some incarnation of
nanotime(9).

This work was derived from initial patches by eivind.
2002-04-02 00:01:49 +00:00
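The per-acquisition-point record implied by the description might look
roughly like this; the struct and field names are invented for illustration
(the average is derived as total / count):

    struct mutex_prof {
            const char      *file;          /* acquisition point */
            int              line;
            uint64_t         max;           /* longest single hold */
            uint64_t         total;         /* sum of all hold times */
            uint64_t         count;         /* non-recursive acquisitions */
            struct mutex_prof *next;        /* hash chain */
    };
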
Jeff Roberson
f22a4b62f5 Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks
with this flag.  Remove the dup_list and dup_ok code from subr_witness.  Now
we just check for the flag instead of doing string compares.

Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface.  The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.

Approved by:	jhb
2002-03-27 09:23:41 +00:00
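Usage sketch for one of the converted locks; with MTX_DUPOK, witness will
tolerate holding two locks of the same type at once (the field and name
strings are illustrative):

    mtx_init(&pg->pg_mtx, "process group", NULL, MTX_DEF | MTX_DUPOK);
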
Alfred Perlstein
4d77a549fe Remove __P. 2002-03-19 21:25:46 +00:00
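For context, __P() was the old K&R-compatibility prototype wrapper, so the
change is mechanical; a representative before/after (the declaration shown
is illustrative):

    /* Before */
    static void     propagate_priority __P((struct thread *));
    /* After */
    static void     propagate_priority(struct thread *);
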
Peter Wemm
114730b0a8 Tidy up some unused variables 2002-02-20 21:25:44 +00:00
Matthew Dillon
735da6de88 Add kern_giant_ucred to instrument Giant around ucred related operations
such as getgid(), setgid(), etc.
2002-02-18 17:51:47 +00:00
Julian Elischer
2c1007663f In a threaded world, different priorities become properties of
different entities.  Make it so.

Reviewed by:	jhb@freebsd.org (john baldwin)
2002-02-11 20:37:54 +00:00
John Baldwin
18fc2ba9ff Use the mtx_owner() macro in one spot in _mtx_lock_sleep() to make the
code easier to read.
2002-02-09 00:12:53 +00:00
John Baldwin
bf07c922ac Bump the limits for determining if we've held a spinlock too long as they
seem to be too short for the 500 MHz DS20 I'm testing on.  The rather
arbitrary numbers are bogus anyway.  We should probably have
variables for these limits that are calibrated in the MD startup code
somehow.
2002-01-15 14:20:33 +00:00
John Baldwin
c86b6ff551 Change the preemption code for software interrupt thread schedules and
mutex releases to not require flags for the cases when preemption is
not allowed:

The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi schedule,
respectively, when that switch is not safe.  Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer.  This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH, when in fact, only the ones called from
fast interrupt handlers and the swi_sched of softclock needed this flag.
Note that to ensure that swi_sched()'s in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs.  Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called.  (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)

I've tested the changes to the interrupt code on i386 and alpha.  I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine.  PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken.  Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.

Reviewed by:	peter
Tested on:	i386, alpha
2002-01-05 08:47:13 +00:00
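The wrapping described in the closing paragraph follows the usual pattern;
a sketch with a made-up fast interrupt handler (swi_sched() no longer needs
an SWI_NOSWITCH flag):

    static void
    xx_fast_intr(void *arg)
    {
            struct xx_softc *sc = arg;

            critical_enter();               /* defer preemption ... */
            /* acknowledge the hardware, capture state */
            swi_sched(sc->xx_swi_cookie, 0);
            critical_exit();                /* ... until we are done */
    }
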
John Baldwin
7e1f6dfe9d Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
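A sketch of the MI wrappers described above, using the td_savecrit field
named in the last bullet; td_critnest as the per-thread nesting counter is
an assumption:

    void
    critical_enter(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 0)
                    td->td_savecrit = cpu_critical_enter();
            td->td_critnest++;
    }

    void
    critical_exit(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 1) {
                    td->td_critnest = 0;
                    cpu_critical_exit(td->td_savecrit);
            } else
                    td->td_critnest--;
    }
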
John Baldwin
ba48b69a13 Remove definition of witness and comment stating that this file implements
witness.  Witness moved off to subr_witness.c a while ago.
2001-11-15 19:08:55 +00:00
Matthew Dillon
d23f5958bc Add mtx_lock_giant() and mtx_unlock_giant() wrappers for sysctl management
of Giant during the Giant unwinding phase, and start work on instrumenting
Giant for the file and proc mutexes.

These wrappers allow developers to turn on and off Giant around various
subsystems.  DEVELOPERS SHOULD NEVER TURN OFF GIANT AROUND A SUBSYSTEM JUST
BECAUSE THE SYSCTL EXISTS!  General developers should only consider
turning on Giant for a subsystem whose default is off (to help track down
bugs).  Only developers working on particular subsystems who know what
they are doing should consider turning off Giant.

These wrappers will greatly improve our ability to unwind Giant and test
the kernel on a (mostly) subsystem by subsystem basis.   They allow Giant
unwinding developers (GUDs) to emplace appropriate subsystem and structural
mutexes in the main tree and then request that the larger community test
the work by turning off Giant around the subsystem(s), without the larger
community having to mess around with patches.  These wrappers also allow
GUDs to boot into a (more likely to be) working system in the midst of
their unwinding work and to test that work under more controlled
circumstances.

There is a master sysctl, kern.giant.all, which defaults to 0 (off).  If
turned on it overrides *ALL* other kern.giant sysctls and forces Giant to
be turned on for all wrapped subsystems.  If turned off then Giant around
individual subsystems are controlled by various other kern.giant.XXX sysctls.

Code which overlaps multiple subsystems must have all related subsystem Giant
sysctls turned off in order to run without Giant.
2001-10-26 20:48:04 +00:00
John Baldwin
7ada587697 The mtx_init() and sx_init() functions bzero'd locks before handing them
off to witness_init(), making the check for double-initializing a lock by
testing the LO_INITIALIZED flag moot.  Work around this by checking the
LO_INITIALIZED flag ourself before we bzero the lock structure.
2001-10-20 01:22:42 +00:00
John Baldwin
21377ce065 Remove superfluous parens after de-macroizing. 2001-09-26 00:05:18 +00:00
John Baldwin
dde96c9933 Since we no longer inline any debugging code in the mutex operations, move
all the debugging code into the function versions of the mutex operations
in kern_mutex.c.  This reduced the __mtx_* macros to simple wrappers of
the _{get,rel}_lock_* macros, so the __mtx_* macros were also abolished in
favor of just calling the _{get,rel}_lock_* macros.  The tangled hairy mass
of macros calling macros is at least a bit more sane now.
2001-09-22 21:19:55 +00:00
John Baldwin
a44f918bf9 Fix a bug in propagate priority: the kse group pointer wasn't being
updated in the loop, so the new thread always seemed to have the same
priority as the original thread and no actual priorities were changed.
2001-09-19 22:52:59 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Bosko Milekic
76dcbd6f9f Force a commit on kern_mutex.c to explain the reason for the last commit,
and while I'm at it, also add a comment in mtx_validate() explaining the purpose
of the last change.

Basically, this fixes booting kernels compiled with MUTEX_DEBUG. What used
to happen is before we setidt from init386() [still using BTX idt], we
called mtx_init() on several mutex locks, notably Giant and some others.
This is a problem for MUTEX_DEBUG because it enables mtx_validate() which
calls kernacc(), some of which in turn requires Giant.
Fix by calling kernacc() from mtx_validate() only if (!cold).
2001-08-24 23:00:59 +00:00
Bosko Milekic
ab07087e16 *** empty log message *** 2001-08-24 22:53:45 +00:00
John Baldwin
5cb0fbe47e If we have already panic'd then don't bother enforcing mutex asserts as
things are pretty much shot already and all panic'ing does is hurt our
chances of getting a dump.

Inspired by:	sheldonh
2001-07-31 17:45:50 +00:00
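The guard is a one-liner at the top of the assertion path; a sketch using
the global panicstr:

    void
    _mtx_assert(struct mtx *m, int what, const char *file, int line)
    {
            if (panicstr != NULL)
                    return;         /* already panicking; don't make it worse */
            /* ... normal MA_OWNED / MA_NOTOWNED checks ... */
    }
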
John Baldwin
c4f7a18726 Count the context switch when blocking on a mutex as a voluntary context
switch.  Count the context switch when preempting the current thread to let
a higher priority thread blocked on a mutex we just released run as an
involuntary context switch.

Reported by:	bde
2001-06-25 18:29:32 +00:00
John Baldwin
2d96f0b145 - Move state about lock objects out of struct lock_object and into a new
struct lock_instance that is stored in the per-process and per-CPU lock
  lists.  Previously, the lock lists just kept a pointer to each lock held.
  That pointer is now replaced by a lock instance which contains a pointer
  to the lock object, the file and line of the last acquisition of a lock,
  and various flags about a lock including its recursion count.
- If we sleep while holding a sleepable lock, then mark that lock instance
  as having slept and ignore any lock order violations that occur while
  acquiring Giant when we wake up with slept locks.  This is ok because of
  Giant's special nature.
- Allow witness to differentiate between shared and exclusive locks and
  unlocks of a lock.  Witness will now detect the case when a lock is
  acquired first in one mode and then in another.  Mutexes are always
  locked and unlocked exclusively.  Witness will also now detect the case
  where a process attempts to unlock a shared lock while holding an
  exclusive lock and vice versa.
- Fix a bug in the lock list implementation where we used the wrong
  constant to detect the case where a lock list entry was full.
2001-05-04 17:15:16 +00:00
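A sketch of what one held-lock instance carries after this change; the
field names are approximations:

    struct lock_instance {
            struct lock_object      *li_lock;       /* the lock itself */
            const char              *li_file;       /* last acquisition point */
            int                      li_line;
            u_int                    li_flags;      /* recursion count, slept,
                                                       shared/exclusive flags */
    };
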
Mark Murray
fb919e4d5a Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
John Baldwin
7141f2ad46 Exit and re-enter the critical section while spinning for a spinlock so
that interrupts can come in while we are waiting for a lock.
2001-04-17 03:34:52 +00:00
Mark Murray
f0b60d7560 Handle a rare but fatal race that is sometimes triggered when
SIGSTOP is invoked.
2001-04-13 09:29:34 +00:00
John Baldwin
192846463a Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects.  Each lock class specifies a
  name and set of flags (or properties) shared by all locks of a given
  type.  Currently there are three lock classes: spin mutexes, sleep
  mutexes, and sx locks.  A lock object specifies properties of an
  additional lock along with a lock name and all of the extra stuff needed
  to make witness work with a given lock.  This abstract lock stuff is
  defined in sys/lock.h.  The lockmgr constants, types, and prototypes have
  been moved to sys/lockmgr.h.  For temporary backwards compatibility,
  sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin
  locks held.  By making this per-cpu, we do not have to jump through
  magic hoops to deal with sched_lock changing ownership during context
  switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
  proc->p_sleeplocks, which is a list of held sleep locks including sleep
  mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging
  level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness.
    This is used for the mutex that blocks a sx lock for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now
    and no events will be logged for this lock, so that one doesn't have
    to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag.  Use this flag to export
  a mtx_initialized() macro that can be safely called from drivers.  Also,
  we no longer walk the all_mtx list if MUTEX_DEBUG is defined, as witness
  performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly
  more accurate file and line numbers.
2001-03-28 09:03:24 +00:00
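The exported mtx_initialized() check lets a driver guard its teardown path
without walking any global list; typical use (the softc field is
hypothetical):

    /* Detach/error path: only destroy the mutex if attach got that far. */
    if (mtx_initialized(&sc->sc_mtx))
            mtx_destroy(&sc->sc_mtx);
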
John Baldwin
6283b7d01b - Switch from using save/disable/restore_intr to using critical_enter/exit
and change the u_int mtx_saveintr member of struct mtx to a critical_t
  mtx_savecrit.
- On the alpha we no longer need a custom _get_spin_lock() macro to avoid
  an extra PAL call, so remove it.
- Partially fix using mutexes with WITNESS in modules.  Change all the
  _mtx_{un,}lock_{spin,}_flags() macros to accept explicit file and line
  parameters and rename them to use a prefix of two underscores.  Inside
  of kern_mutex.c, generate wrapper functions for
  _mtx_{un,}lock_{spin,}_flags() (only using a prefix of one underscore)
  that are called from modules.  The macros mtx_{un,}lock_{spin,}_flags()
  are mapped to the __mtx_* macros inside of the kernel to inline the
  usual case of mutex operations and map to the internal _mtx_* functions
  in the module case so that modules will use WITNESS and KTR logging if
  the kernel is compiled with support for it.
2001-03-28 02:40:47 +00:00