Commit Graph

36 Commits

Author SHA1 Message Date
jhb
d8b2f99c44 GC #if 0'd assembly mutex micro-operations. If someone wants to bring
these back later they can get them from the attic.  Also, GC some stale
macros to acquire and release sleep mutexes in assembly.
2002-03-28 15:14:23 +00:00
bmilekic
a3cf10660c Make MPLOCKED work again in asm files and stringify it explicitly
where necessary.

Reviewed by: jake
2002-02-28 06:17:05 +00:00
jhb
a3b98398cb Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
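A minimal sketch of what such MI wrappers over the MD cpu_critical_enter/exit
routines might look like. The td_savecrit field is named in the commit;
td_critnest is an assumed name for the per-thread nesting count, and the
bodies are illustrative rather than the committed code.

    void
    critical_enter(void)
    {
            struct thread *td = curthread;

            /* Save the MD interrupt state only for the outermost entry. */
            if (td->td_critnest == 0)
                    td->td_savecrit = cpu_critical_enter();
            td->td_critnest++;
    }

    void
    critical_exit(void)
    {
            struct thread *td = curthread;

            /* Restore the saved state only when leaving the outermost level. */
            if (td->td_critnest == 1)
                    cpu_critical_exit(td->td_savecrit);
            td->td_critnest--;
    }

Because the saved state lives in curthread rather than in the spin mutex
itself, interlocking spin mutexes no longer clobber each other's saved
interrupt state.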
julian
5596676e6c KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
Make the kernel aware that there are smaller units of scheduling than the
process (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
phk
b2f9beade9 Properly wrap mtx_intr_enable() macro in "do $bla while (0)" 2001-06-02 08:17:42 +00:00
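Wrapping a multi-statement macro in do { ... } while (0) makes it behave as a
single statement, so it nests safely under an unbraced if/else. A generic
illustration (stmt_one/stmt_two are placeholders, not the mtx_intr_enable()
body itself):

    /* Unsafe: "if (cond) BROKEN(x); else ..." guards only stmt_one and
     * leaves a stray semicolon that breaks the else. */
    #define BROKEN(x)       stmt_one(x); stmt_two(x)

    /* Safe: the whole body is one statement and consumes the trailing
     * semicolon the caller naturally supplies. */
    #define FIXED(x)        do {            \
            stmt_one(x);                    \
            stmt_two(x);                    \
    } while (0)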
jhb
4572ff9c78 - Switch from using save/disable/restore_intr to using critical_enter/exit
and change the u_int mtx_saveintr member of struct mtx to a critical_t
  mtx_savecrit.
- On the alpha we no longer need a custom _get_spin_lock() macro to avoid
  an extra PAL call, so remove it.
- Partially fix using mutexes with WITNESS in modules.  Change all the
  _mtx_{un,}lock_{spin,}_flags() macros to accept explicit file and line
  parameters and rename them to use a prefix of two underscores.  Inside
  of kern_mutex.c, generate wrapper functions for
  _mtx_{un,}lock_{spin,}_flags() (only using a prefix of one underscore)
  that are called from modules.  The macros mtx_{un,}lock_{spin,}_flags()
  are mapped to the __mtx_* macros inside of the kernel to inline the
  usual case of mutex operations and map to the internal _mtx_* functions
  in the module case so that modules will use WITNESS and KTR logging if
  the kernel is compiled with support for it.
2001-03-28 02:40:47 +00:00
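A rough sketch of the double-underscore/single-underscore split described
above; the naming scheme follows the commit, but the guard macro and the
bodies are placeholders.

    /* sys/mutex.h (sketch): the __mtx_* macros inline the common case and
     * take explicit file/line arguments. */
    #define __mtx_lock_flags(m, opts, file, line)   /* inline fast path */

    #ifndef BUILDING_MODULE         /* hypothetical guard for module builds */
    /* Kernel proper: inline the operation at the call site. */
    #define mtx_lock_flags(m, opts)                                 \
            __mtx_lock_flags((m), (opts), __FILE__, __LINE__)
    #else
    /* Modules: call a real function, so they inherit whatever WITNESS and
     * KTR support the kernel itself was compiled with. */
    #define mtx_lock_flags(m, opts)                                 \
            _mtx_lock_flags((m), (opts), __FILE__, __LINE__)
    #endif

    /* kern/kern_mutex.c (sketch): the single-underscore wrapper gives
     * modules a real function to call. */
    void
    _mtx_lock_flags(struct mtx *m, int opts, const char *file, int line)
    {
            __mtx_lock_flags(m, opts, file, line);
    }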
jhb
f108bc4208 Fix mtx_legal2block. The only time that it is bad to block on a mutex is
if we hold a spin mutex, since we can trivially get into deadlocks if we
start switching out of processes that hold spinlocks.  Checking to see if
interrupts were disabled was a sort of cheap way of doing this since most
of the time interrupts were only disabled when holding a spin lock.  At
least on the i386.  To fix this properly, use a per-process counter
p_spinlocks that counts the number of spin locks currently held, and
instead of checking to see if interrupts are disabled in the witness code,
check to see if we hold any spin locks.  Since child processes always
start up with the sched lock magically held in fork_exit(), we initialize
p_spinlocks to 1 for child processes.  Note that proc0 doesn't go through
fork_exit(), so it starts with no spin locks held.

Consulting from:	cp
2001-03-09 07:24:17 +00:00
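A sketch of the new check; mtx_legal2block() and p_spinlocks are named in
these commits, but the exact expression here is illustrative.

    /* Old heuristic: refuse to block while interrupts are disabled.
     * New rule: refuse to block while any spin lock is held. */
    #define mtx_legal2block()       (curproc->p_spinlocks == 0)

    /* Children leave fork_exit() with sched_lock already held, so fork
     * starts them at p_spinlocks = 1; proc0 never passes through
     * fork_exit() and starts at 0. */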
jhb
41a6853d24 GC unused and now obsolete assertion macros. 2001-02-22 15:45:49 +00:00
jhb
5f9a24a51c Add a macro mtx_intr_enable() to alter a spin lock such that interrupts
will be enabled when it is released.
2001-02-10 02:15:18 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
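Side by side, the calling-convention change looks roughly like this at a call
site (the lock names are made up for illustration):

    /* Before: one entry point, with the lock type repeated at every call. */
    mtx_enter(&foo_mtx, MTX_DEF);
    mtx_exit(&foo_mtx, MTX_DEF);
    mtx_enter(&intr_mtx, MTX_SPIN | MTX_QUIET);

    /* After: the type chosen at mtx_init() time selects the routine, and
     * the remaining flags go through the *_flags() wrappers. */
    mtx_lock(&foo_mtx);
    mtx_unlock(&foo_mtx);
    mtx_lock_spin_flags(&intr_mtx, MTX_QUIET);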
jasone
ec55088093 Move most of sys/mutex.h into kern/kern_mutex.c, thereby making the mutex
inline functions non-inlined.  Hide parts of the mutex implementation that
should not be exposed.

Make sure that WITNESS code is not executed during boot until the mutexes
are fully initialized by SI_SUB_MUTEX (the original motivation for this
commit).

Submitted by:	peter
2001-01-21 22:34:43 +00:00
jake
cf5b2e3c3b Simplify the i386 asm MTX_{ENTER,EXIT} macros to just call the
appropriate function, rather than doing a horse-and-buggy
acquire.  They now take the mutex type as an arg and can be
used with sleep as well as spin mutexes.
2001-01-20 04:14:25 +00:00
jhb
ff222c0ba5 Argh, disable the micro-ops again. I didn't test these adequately and
managed to lock up one of my machines in world again.

Pointy-hat to:	me
2001-01-16 04:48:38 +00:00
jhb
307cdc0f64 - Use "+a" instead of "=&a" for several constraints. This should fix
compile errors where gcc would run out of registers.
- Add "cc" to the list of clobbers for micro-ops where we perform
  instructions that alter %eflags.
- Use xchgl instead of cmpxchgl to release a spin lock.  This could allow
  for more efficient register allocation as we no longer mandate that %eax
  be used.
- Reenable the optimized mutex micro-ops in the non-i386 case.
2001-01-16 03:45:54 +00:00
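A rough illustration of the xchgl-based release; the reasoning follows the
commit, but this is a sketch, not the committed macro. xchgl with a memory
operand is implicitly locked and does not modify %eflags, so it needs neither
a lock prefix nor a "cc" clobber.

    /* cmpxchgl hard-wires %eax; xchgl lets gcc pick any register for the
     * "unlocked" value, easing register pressure in the inlined fast path. */
    #define _SPIN_UNLOCK_SKETCH(mp) do {                            \
            u_int _v = MTX_UNOWNED;                                 \
            __asm __volatile("xchgl %0,%1"                          \
                : "+r" (_v), "+m" ((mp)->mtx_lock)                  \
                : : "memory");                                      \
    } while (0)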
jhb
8fe01fc4af Revert the previous revision now that atomic_store_rel_ptr() actually
works.
2001-01-14 09:56:35 +00:00
jhb
94588351ce Work around the broken atomic_store_rel_ptr() on the i386 arch by just
using atomic_cmpset_rel_ptr() instead for _release_lock_quick().  When
atomic_store_rel_ptr() is functional and MP safe, then this can be
reverted.
2001-01-14 00:16:17 +00:00
jhb
e1f16f459e Fix the assembly mutex macros to call the appropriate witness functions if
the witness code is compiled in.  Without this, the witness code doesn't
notice that sched_lock is released by fork_trampoline() and thus gets all
confused about spin lock order later on.
2000-12-12 03:49:58 +00:00
jake
66ed1a203c Fix a jump to the wrong label, <sigh>. Put a period at the end of a
sentence in a comment.

Submitted by:	bde
2000-12-08 19:53:37 +00:00
jhb
b9e04ff83c Argh, revert the clobber changes. Since %ecx and %edx aren't call safe,
calling the C functions mtx_enter_hard() and mtx_exit_hard() clobbers them.
Note that %eax is also not call safe, but it is already clobbered due to
cmpxchg.  However, now we are back to not compiling again, so these macros
are still left disabled for now.
2000-12-08 18:21:06 +00:00
jake
601ec8e185 Change the calling conventions of the MTX_ENTER macro to match
that of MTX_EXIT.  Don't assume that the reg parameter to MTX_ENTER
holds curproc, load it explicitly.  Put semi-colons at the end of
the macros to be more consistent and so it's harder to forget them
when these change.
2000-12-08 08:49:36 +00:00
jhb
7ee09e9ca2 Well, the previous commit wasn't entirely correct either. For now, just
disable the optimized mutex micro-operations for the non-I386_CPU case
and fall back to the C stubs that call the atomic_foo() inlines.
2000-12-08 05:03:34 +00:00
jhb
e024f021d4 Fix broken register constraints that needlessly clobbered registers %ecx
and %edx resulting in gcc not having enough registers left to work with.
2000-12-07 02:23:16 +00:00
jake
d511c47ae9 (1) Allow a stray lock prefix to be compiled out with the
MPLOCKED macro
(2)	Use decimal 12 rather than hex 0xc in an addl
(3)	Implement MTX_ENTER for the I386_CPU case
(4)	Use semi-colons between instructions to allow MTX_ENTER
	and MTX_ENTER_WITH_RECURSION to be assembled
(5)	Use incl instead of incw to increment the recursion count
(6)	10 is not a valid label, use 7, 8 and 9 rather than 8, 9 and 10
(7)	Sort numeric labels

Submitted by:	bde (2, 4, and 5)
2000-12-04 12:38:03 +00:00
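Item (1) refers to the usual trick of making the lock prefix conditional on
the SMP option. The C inline-asm side of such a macro might look like the
following sketch (the SMP spelling of the option is an assumption here):

    /* On UP kernels the bus-lock prefix is pure overhead, so compile it
     * out; SMP kernels need it for atomic read-modify-write instructions. */
    #if defined(SMP)
    #define MPLOCKED        "lock ; "
    #else
    #define MPLOCKED
    #endif

    /* Used by pasting it onto the front of an inline-asm template, e.g.
     *   __asm __volatile(MPLOCKED "cmpxchgl %2,%1" : ...);            */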
jhb
7e6e41a382 Fix a bug with handling of the saved interrupt state for spin mutexes in
the MTX_EXIT_WITH_RECURSION() assembly macro (currently unused).

Submitted by:	bde
2000-11-13 18:39:18 +00:00
jhb
ab67e1548e Define the mtx_legal2block() macro used in the witness code that managed
to get lost during the MI mutex conversion.

Reported by:    Steve Kargl <sgk@troutmask.apl.washington.edu>
2000-10-20 22:44:06 +00:00
jhb
f671832d76 - Make the mutex code almost completely machine independent. This greatly
reduces the maintenance load for the mutex code.  The only MD portions
  of the mutex code are in machine/mutex.h now, which include the assembly
  macros for handling mutexes as well as optionally overriding the mutex
  micro-operations.  For example, we use optimized micro-ops on the x86
  platform #ifndef I386_CPU.
- Change the behavior of the SMP_DEBUG kernel option.  In the new code,
  mtx_assert() only depends on INVARIANTS, allowing other kernel developers
  to have working mutex assertions without having to include all of the
  mutex debugging code.  The SMP_DEBUG kernel option has been renamed to
  MUTEX_DEBUG and now just controls extra mutex debugging code.
- Abolish the ugly mtx_f hack.  Instead, we dynamically allocate
  separate mtx_debug structures on the fly in mtx_init, except for mutexes
  that are initialized very early in the boot process.   These mutexes
  are declared using a special MUTEX_DECLARE() macro, and use a new
  flag MTX_COLD when calling mtx_init.  This is still somewhat hackish,
  but it is less evil than the mtx_f filler struct, and the mtx struct is
  now the same size with and without mutex debugging code.
- Add some micro-micro-operation macros for doing the actual atomic
  operations on the mutex mtx_lock field to make it easier for other archs
  to override/optimize mutex ops if needed.  These new tiny ops also clean
  up the code in some places by replacing long atomic operation function
  calls that spanned 2-3 lines with a short 1-line macro call.
- Don't call mi_switch() from mtx_enter_hard() when we block while trying
  to obtain a sleep mutex.  Calling mi_switch() would bogusly release
  Giant before switching to the next process.  Instead, inline most of the
  code from mi_switch() in the mtx_enter_hard() function.  Note that when
  we finally kill Giant we can back this out and go back to calling
  mi_switch().
2000-10-20 07:26:37 +00:00
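A sketch of the early-boot pattern from the third bullet; MUTEX_DECLARE,
MTX_COLD, and mtx_init() are named in the commit, while early_lock is a
hypothetical example of a mutex initialized before malloc is usable.

    /* Mutexes that exist before malloc(9) works get static debug storage
     * instead of a dynamically allocated mtx_debug structure. */
    MUTEX_DECLARE(static, early_lock);

    /* MTX_COLD tells mtx_init() to use that static storage rather than
     * allocating. */
    mtx_init(&early_lock, "early lock", MTX_SPIN | MTX_COLD);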
jhb
fd275a78bd - Change fast interrupts on x86 to push a full interrupt frame and to
return through doreti to handle ast's.  This is necessary for the
  clock interrupts to work properly.
- Change the clock interrupts on the x86 to be fast instead of threaded.
  This is needed because both hardclock() and statclock() need to run in
  the context of the current process, not in a separate thread context.
- Kill the prevproc hack as it is no longer needed.
- We really need Giant when we call psignal(), but we don't want to block
  during the clock interrupt.  Instead, use two p_flag's in the proc struct
  to mark the current process as having a pending SIGVTALRM or a SIGPROF
  and let them be delivered during ast() when hardclock() has finished
  running.
- Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyway.  It was
  broken on the x86 if it was turned on since cpl is gone.  Its only use
  was to bogusly run softclock() directly during hardclock() rather than
  scheduling an SWI.
- Remove the COM_LOCK simplelock and replace it with a clock_lock spin
  mutex.  Since the spin mutex already handles disabling/restoring
  interrupts appropriately, this also lets us axe all the *_intr() fu.
- Back out the hacks in the APIC_IO x86 cpu_initclocks() code to use
  temporary fast interrupts for the APIC trial.
- Add two new process flags P_ALRMPEND and P_PROFPEND to mark the pending
  signals in hardclock() that are to be delivered in ast().

Submitted by:	jakeb (making statclock safe in a fast interrupt)
Submitted by:	cp (concept of delaying signals until ast())
2000-10-06 02:20:21 +00:00
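The deferred-signal scheme from the last two bullets, sketched; the flag names
come from the commit, and the surrounding code is illustrative (the
*_expired() helpers are hypothetical).

    /* In hardclock(), running as a fast interrupt: Giant cannot be taken
     * here, so only note that a signal is owed and request an AST. */
    if (virtual_timer_expired(p))
            p->p_flag |= P_ALRMPEND;
    if (profiling_timer_expired(p))
            p->p_flag |= P_PROFPEND;
    aston();

    /* In ast(), on the way back to user mode, where Giant is available: */
    if (p->p_flag & P_ALRMPEND) {
            p->p_flag &= ~P_ALRMPEND;
            psignal(p, SIGVTALRM);
    }
    if (p->p_flag & P_PROFPEND) {
            p->p_flag &= ~P_PROFPEND;
            psignal(p, SIGPROF);
    }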
jasone
bea51a4aa1 Reduce userland namespace pollution. 2000-10-04 01:21:58 +00:00
jhb
b996fd3a9d Fix the assembly mutex macros to handle saving/restoring interrupt state
properly.  Fix the recursive mutex macros to actually compile.  At the
moment we only use MTX_EXIT anyways.
2000-09-24 23:34:21 +00:00
jasone
27bf3e86c9 #include <sys/proc.h> in order to get curproc. This seems to be the lesser
of two evils; the greater evil is requiring sys/proc.h to be included
before including machine/mutex.h.
2000-09-23 00:00:50 +00:00
jhb
1da56dada3 Teach MTX_EXIT_RECURSE that the recursion count is a 32-bit integer,
not a 16-bit one.
2000-09-22 04:30:33 +00:00
jhb
ebc05310ca Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just
use struct mtx, struct witness, and struct witness_blessed.

Requested by:	bde
2000-09-14 20:15:16 +00:00
jasone
e454be9f46 Style cleanups. No functional changes. 2000-09-09 23:18:48 +00:00
jasone
9d6c8a5123 Add file and line arguments to WITNESS_ENTER() and WITNESS_EXIT(), since
__FILE__ and __LINE__ don't get expanded usefully in inline functions.

Add const to all witness*() arguments that are filenames.
2000-09-09 22:43:22 +00:00
jasone
a54a380435 Rename mtx_enter(), mtx_try_enter(), and mtx_exit() and wrap them with cpp
macros that expand to pass filename and line number information.  This is
necessary since we're using inline functions instead of macros now.

Add const to the filename pointers passed throughout the mtx and witness
code.
2000-09-08 21:48:06 +00:00
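The wrapping pattern described above, roughly; the single-underscore
spellings of the internal names are illustrative.

    /* The work happens in functions that take the caller's location as
     * arguments, because __FILE__/__LINE__ inside an inline function would
     * always expand to the header's own location, not the call site's. */
    void    _mtx_enter(struct mtx *m, int type, const char *file, int line);
    int     _mtx_try_enter(struct mtx *m, int type, const char *file, int line);
    void    _mtx_exit(struct mtx *m, int type, const char *file, int line);

    /* Callers keep using the old names; cpp supplies file and line. */
    #define mtx_enter(m, t)      _mtx_enter((m), (t), __FILE__, __LINE__)
    #define mtx_try_enter(m, t)  _mtx_try_enter((m), (t), __FILE__, __LINE__)
    #define mtx_exit(m, t)       _mtx_exit((m), (t), __FILE__, __LINE__)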
jasone
769e0f974d Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9).  (Note: The
  alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00
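At a call site, the first bullet's change looks roughly like this (the lock
name and the update_shared_state() helper are hypothetical; mtx_enter()/
mtx_exit() is the mutex(9) interface as of this commit):

    /* Before: raise the interrupt priority level around the shared data. */
    int s = splnet();
    update_shared_state();
    splx(s);

    /* After: take a mutex around the same region (see mutex(9)); this also
     * provides mutual exclusion between CPUs, which spl*() never did. */
    mtx_enter(&state_mtx, MTX_DEF);
    update_shared_state();
    mtx_exit(&state_mtx, MTX_DEF);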