Commit Graph

147 Commits

jake
90cd6bd429 Implement mwakeup, mwakeup_one, cv_signal_drop and cv_broadcast_drop.
These take an additional mutex argument, which is dropped before any
processes are made runnable.  This can avoid contention on the mutex
if the processes would immediately acquire it, and is done in such a
way that wakeups will not be lost.

Reviewed by:	jhb
2001-07-04 00:32:50 +00:00
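A sketch of how a caller might use the new drop variants; the argument
order is an assumption inferred from the message above, and qlock, qcv
and q_ready are illustrative names:

    /*
     * The mutex protecting the condition is handed to cv_signal_drop(),
     * which releases it before making any waiter runnable, so the waiter
     * does not wake up only to block on a mutex the signaller still holds.
     */
    static struct mtx qlock;
    static struct cv qcv;
    static int q_ready;

    static void
    producer_done(void)
    {
            mtx_lock(&qlock);
            q_ready = 1;                    /* publish under qlock */
            cv_signal_drop(&qcv, &qlock);   /* drops qlock, then signals */
    }
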
jhb
12ff847457 Remove commented-out garbage that skipped updating schedcpu() stats for
ithreads in SWAIT.
2001-07-03 08:03:56 +00:00
jhb
f145c527cb Just check p_oncpu when determining if a process is executing or not.
We already did this in the SMP case, and it is now maintained in the UP
case as well, and makes the code slightly more readable.  Note that
curproc is always executing, thus the p != curproc test does not need to
be performed if the p_oncpu check is made.
2001-07-03 08:00:57 +00:00
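A minimal sketch of the simplified test (NOCPU marks a process not
currently on any CPU; see the "Use the new NOCPU constant" commit
further down this log):

    /* A process is executing iff it is currently on some CPU. */
    if (p->p_oncpu != NOCPU) {
            /* ... treat p as running ... */
    }
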
jhb
2c2105f43d Axe spl's that are covered by the sched_lock (and have been for quite
some time.)
2001-07-03 07:53:35 +00:00
jhb
5314eff932 Include the wait message and channel for msleep() in the KTR tracepoint. 2001-07-03 07:39:06 +00:00
jhb
b471dafbdb Remove bogus need_resched() of the current CPU in roundrobin().
We don't actually need to force a context switch of the current process.
The act of firing the event triggers a context switch to softclock() and
then switching back out again which is equivalent to a preemption, thus
no further work is needed on the local CPU.
2001-07-03 05:33:09 +00:00
jhb
6c6725e602 Make the schedlock saved critical section state a per-thread property. 2001-06-30 03:11:26 +00:00
jhb
22ab704792 - Lock CURSIG() with the proc lock to close the signal race with psignal.
- Grab Giant around ktrace points.
- Clean up KTR_PROC tracepoints to not display the value of
  sched_lock.mtx_lock as it isn't really needed anymore and just obfuscates
  the messages.
- Add a few if conditions to replace gotos.
- Ensure that every msleep KTR event ends up with a matching msleep resume
  KTR event (this was broken when we didn't do a mi_switch()).
- Only note via ktrace that we resumed from a switch once rather than twice
  in several places in msleep().
- Remove spl's from asleep and await as the proc lock and sched_lock provide
  all the needed locking.
- In mawait() add in a needed ktrace point for noting that we are about to
  switch out.
2001-06-22 23:11:26 +00:00
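A sketch of the CURSIG() locking pattern described above; the helper
name is hypothetical:

    static int
    check_pending_signal(struct proc *p)
    {
            int sig;

            PROC_LOCK(p);           /* closes the race with psignal() */
            sig = CURSIG(p);
            PROC_UNLOCK(p);
            return (sig);
    }
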
jhb
1bb357b4a9 Add in assertions to ensure that we always call msleep or mawait with
either a timeout or a held mutex to detect unprotected infinite sleeps
that can easily lead to deadlock.

Submitted by:	alfred
2001-05-23 19:38:26 +00:00
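The assertion plausibly looks something like this at the top of
msleep() (the exact predicate in the tree may differ):

    KASSERT(timo != 0 || mtx != NULL,
        ("msleep: untimed sleep with no mutex to protect the condition"));
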
alfred
10dc8c8046 Remove KASSERT test for sleeping on vm_mtx, instead let WITNESS catch
it.

Requested by: jhb
2001-05-22 00:58:20 +00:00
alfred
5014c44204 remove my private assertions from tsleep.
add one assertion to ensure we don't sleep while holding vm.
2001-05-19 01:40:48 +00:00
alfred
217f9af8c7 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

Faults cannot be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
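The rule stated above for filesystems amounts to a pattern along
these lines (the page operation is illustrative):

    mtx_lock(&vm_mtx);
    vm_page_flag_set(m, PG_REFERENCED);     /* twiddle page properties */
    mtx_unlock(&vm_mtx);
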
jhb
ce9732eccb - Remove unneeded include of sys/ipl.h.
- Lock the process before calling killproc() to kill it for exceeding the
  maximum CPU limit.
2001-05-15 23:15:06 +00:00
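The locking described above plausibly takes this shape (killproc()'s
signature assumed from contemporary usage):

    PROC_LOCK(p);
    killproc(p, "exceeded maximum CPU limit");
    PROC_UNLOCK(p);
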
jhb
556d6d4bc9 Overhaul of the SMP code. Several portions of the SMP kernel support have
been made machine independent and various other adjustments have been made
to support Alpha SMP.

- It splits the per-process portions of hardclock() and statclock() off
  into hardclock_process() and statclock_process() respectively.  hardclock()
  and statclock() call the *_process() functions for the current process so
  that UP systems will run as before.  For SMP systems, it is simply necessary
  to ensure that all other processors execute the *_process() functions when the
  main clock functions are triggered on one CPU by an interrupt.  For the alpha
4100, clock interrupts are delivered in a staggered broadcast fashion, so
  we simply call hardclock/statclock on the boot CPU and call the *_process()
  functions on the secondaries.  For x86, we call statclock and hardclock as
  usual and then call forward_hardclock/statclock in the MD code to send an IPI
to cause the APs to execute forward_hardclock/statclock which then call the
  *_process() functions.
- forward_signal() and forward_roundrobin() have been reworked to be MI and to
  involve less hackery.  Now the cpu doing the forward sets any flags, etc. and
  sends a very simple IPI_AST to the other cpu(s).  AST IPIs now just basically
  return so that they can execute ast() and don't bother with setting the
  astpending or needresched flags themselves.  This also removes the loop in
  forward_signal() as sched_lock closes the race condition that the loop worked
  around.
- need_resched(), resched_wanted() and clear_resched() have been changed to take
  a process to act on rather than assuming curproc so that they can be used to
  implement forward_roundrobin() as described above.
- Various other SMP variables have been moved to a MI subr_smp.c and a new
header sys/smp.h declares MI SMP variables and APIs.  The IPI APIs from
  machine/ipl.h have moved to machine/smp.h which is included by sys/smp.h.
- The globaldata_register() and globaldata_find() functions as well as the
SLIST of globaldata structures have become MI and moved into subr_smp.c.
  Also, the globaldata list is only available if SMP support is compiled in.

Reviewed by:	jake, peter
Looked over by:	eivind
2001-04-27 19:28:25 +00:00
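The shape of the split described above, with signatures inferred from
the message (treat them as assumptions):

    void
    hardclock(struct clockframe *frame)
    {
            /* Per-process work for the CPU taking the interrupt. */
            hardclock_process(curproc, CLKF_USERMODE(frame));
            /* ... machine-independent timekeeping ... */
    #ifdef SMP
            forward_hardclock();    /* x86 MD hook: IPI the other CPUs */
    #endif
    }
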
jhb
22fc91da31 Convert the allproc and proctree locks from lockmgr locks to sx locks. 2001-03-28 11:52:56 +00:00
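After this change the allproc lock is taken with the sx(9) interface,
along these lines:

    sx_slock(&allproc_lock);        /* shared: traversing the list */
    /* ... */
    sx_sunlock(&allproc_lock);

    sx_xlock(&allproc_lock);        /* exclusive: insert/remove */
    /* ... */
    sx_xunlock(&allproc_lock);
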
jhb
116b99dcbc Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects.  Each lock class specifies a
  name and set of flags (or properties) shared by all locks of a given
  type.  Currently there are three lock classes: spin mutexes, sleep
  mutexes, and sx locks.  A lock object specifies properties of an
  individual lock along with a lock name and all of the extra stuff needed
  to make witness work with a given lock.  This abstract lock stuff is
  defined in sys/lock.h.  The lockmgr constants, types, and prototypes have
  been moved to sys/lockmgr.h.  For temporary backwards compatibility,
  sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin
  locks held.  By making this per-cpu, we do not have to jump through
  magic hoops to deal with sched_lock changing ownership during context
  switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
  proc->p_sleeplocks, which is a list of held sleep locks including sleep
  mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging
  level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness.
    This is used for the mutex that backs a sx lock, for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now
    and no events will be logged for this lock, so that one doesn't have
    to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag.  Use this flag to export
  a mtx_initialized() macro that can be safely called from drivers.  Also,
  we no longer walk the all_mtx list if MUTEX_DEBUG is defined as witness
  performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly
  more accurate file and line numbers.
2001-03-28 09:03:24 +00:00
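A sketch of the two mtx_init() flags and the exported check described
above (lock and variable names are illustrative):

    /* Ignored by witness: this mutex only backs an sx lock. */
    mtx_init(&sx_backing_mtx, "sx backing lock", MTX_DEF | MTX_NOWITNESS);

    /* No KTR_LOCK events for this frequently-taken lock. */
    mtx_init(&quiet_mtx, "quiet lock", MTX_DEF | MTX_QUIET);

    /* Safe for a driver to call regardless of attach ordering. */
    if (!mtx_initialized(&sc->sc_mtx))
            mtx_init(&sc->sc_mtx, "foo driver", MTX_DEF);
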
jhb
72d3d31e96 - Proc locking.
- Remove unneeded spl()'s.
2001-03-07 03:01:53 +00:00
jhb
54253bad13 Add a mtx_assert() in maybe_resched() just to be sure it's always called
with sched_lock held.
2001-02-22 13:47:01 +00:00
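That is, maybe_resched() now begins with the standard ownership
assertion:

    mtx_assert(&sched_lock, MA_OWNED);
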
jhb
eba366d935 - Use the new NOCPU constant.
- Fix a warning.

Noticed by:	bde (2)
2001-02-22 00:32:13 +00:00
jhb
c48a983569 - Don't call clear_resched() in userret(); instead, clear the resched flag
in mi_switch() just before calling cpu_switch() so that the first switch
  after a resched request will satisfy the request.
- While I'm at it, move a few things into mi_switch() and out of
  cpu_switch(), specifically set the p_oncpu and p_lastcpu members of
  proc in mi_switch(), and handle the sched_lock state change across a
  context switch in mi_switch().
- Since cpu_switch() no longer handles the sched_lock state change, we
  have to setup an initial state for sched_lock in fork_exit() before we
  release it.
2001-02-20 05:26:15 +00:00
jake
45e3d6a42c Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
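A sketch of the parameterized interface and its compatibility wrapper,
as described above (exact signatures assumed):

    extern struct runq runq;        /* the global run queue */

    void
    setrunqueue(struct proc *p)     /* old interface, now a wrapper */
    {
            runq_add(&runq, p);     /* new code takes the queue itself */
    }
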
jake
2cb1ba9619 Acquire sched_lock around need_resched() in roundrobin() to satisfy
assertions that it is held.  Since roundrobin() is a timeout there's
no possible way that it could be called with sched_lock held.
2001-02-10 19:07:32 +00:00
bmilekic
e67bcfcaf3 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
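A before/after sketch of the rename described above:

    /* Old interface: the type is repeated at every call site. */
    mtx_enter(&sched_lock, MTX_SPIN);
    mtx_exit(&sched_lock, MTX_SPIN);

    /* New interface: the lock's type was fixed at mtx_init() time. */
    mtx_lock_spin(&sched_lock);
    mtx_unlock_spin(&sched_lock);

    mtx_lock(&Giant);                       /* MTX_DEF (sleep) mutex */
    mtx_unlock_flags(&Giant, MTX_QUIET);    /* flags use the _flags form */
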
peter
d001e33fc3 Zap last remaining references to (and a use of) simple_locks. 2001-01-31 04:29:52 +00:00
jhb
0bbbf07821 - Catch up to proc flag changes.
- Add in some locking ops that might fix SIGXCPU, but don't enable them
  yet.
- Assert that sched_lock is not recursed when mi_switch() is called.
2001-01-24 11:10:55 +00:00
mjacob
eb8c448a9c Do not do the commenting out the way that saves bytes and looks cleaner
to you. Do it the way Vox Populi wants it.
2001-01-23 16:35:33 +00:00
mjacob
8bc61e5793 Move (now) unused variable declaration inside the block (now commented out). 2001-01-22 22:22:38 +00:00
jhb
4ec44ab6ef Temporarily disable the printf() for microuptime() going backwards, the
SIGXCPU signal, and killing of processes that exceed their allowed run
time until they can play nice with sched_lock.  Right now they are just
potential panics waiting to happen.  The printf() has bitten several
people.
2001-01-20 02:57:59 +00:00
jasone
7039671ae9 Implement condition variables. 2001-01-16 01:00:43 +00:00
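A minimal consumer using the new API (names are illustrative; the
mutex calls use the pre-rename mtx_enter() spelling contemporary with
this commit, as the mtx_lock() rename further up this log came later):

    static struct mtx qmtx;
    static struct cv qcv;           /* cv_init(&qcv, "qwait") at setup */
    static int ready;

    static void
    consumer(void)
    {
            mtx_enter(&qmtx, MTX_DEF);
            while (!ready)
                    cv_wait(&qcv, &qmtx);   /* drops qmtx while asleep */
            ready = 0;
            mtx_exit(&qmtx, MTX_DEF);
    }
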
jake
270d128d6e Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
2001-01-10 04:43:51 +00:00
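Usage shape of the accessors named above (foo_count is a hypothetical
per-cpu field, used only for illustration):

    int cpu, *foop;

    cpu = PCPU_GET(cpuid);          /* read a per-cpu variable */
    PCPU_SET(foo_count, 0);         /* write one (hypothetical field) */
    foop = PCPU_PTR(foo_count);     /* take its address */
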
jake
518b4c0aea - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provide macros for the flags
  passed to specify shared, exclusive or release which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
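A sketch of the macro usage described above (the AP_* flag names are
patterned on the message and should be treated as assumptions):

    struct proc *p;

    ALLPROC_LOCK(AP_SHARED);        /* read-only traversal */
    LIST_FOREACH(p, &allproc, p_list) {
            /* ... inspect p ... */
    }
    ALLPROC_LOCK(AP_RELEASE);
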
jhb
85d899843b Add in #include of <sys/lock.h> since it was axed from <sys/proc.h>.
Noticed by:	Wesley Morgan <morganw@chemikals.org>
Pointy hat to:	me
2000-12-06 00:33:58 +00:00
jake
1eb5c41a73 Remove thr_sleep and thr_wakeup. Remove fields p_nthread and p_wakeup
from struct proc, which are now unused (p_nthread already was).
Remove process flag P_KTHREADP which was untested and only set
in vfs_aio.c (it should use kthread_create).  Move the yield
system call to kern_synch.c as kern_threads.c has been removed
completely.

moral support from:	alfred, jhb
2000-12-02 05:41:30 +00:00
jake
fa627a6e36 Use an mp-safe callout for endtsleep. 2000-12-01 04:55:52 +00:00
jhb
7692e140b3 Fix up priority propagation:
- Use a better test for determining when a process is running.
- Convert some checks to assertions.
- Remove unnecessary tests.
- Save the priority before acquiring a mutex rather than in msleep(9).
2000-11-30 00:51:16 +00:00
jhb
448f093221 Don't drop Giant and the passed in mutex incorrectly in the
cold || panicstr case.  Do drop the passed in mutex in that case if
PDROP is specified.
2000-11-29 18:32:50 +00:00
jake
f7b749c287 Use callout_reset instead of timeout(9). Most callouts are statically
allocated; two have been added to struct proc for setitimer and sleep.

Reviewed by:	jhb, jlemon
2000-11-27 22:52:31 +00:00
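With the callout embedded in struct proc, the sleep timeout plausibly
becomes:

    callout_reset(&p->p_slpcallout, timo, endtsleep, p);
    /* ... and on wakeup before the timer fires: */
    callout_stop(&p->p_slpcallout);
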
jake
1e59b76839 Protect the following with a lockmgr lock:
	allproc
	zombproc
	pidhashtbl
	proc.p_list
	proc.p_hash
	nextpid

Reviewed by:	jhb
Obtained from:	BSD/OS and netbsd
2000-11-22 07:42:04 +00:00
jake
87763ac61f - Split the run queue and sleep queue linkage, so that a process
may block on a mutex while on the sleep queue without corrupting
it.
- Move dropping of Giant to after the acquire of sched_lock.

Tested by:	John Hay <jhay@icomtek.csir.co.za>
		jhb
2000-11-17 18:09:18 +00:00
jhb
e414c361ec Don't release and acquire Giant in mi_switch(). Instead, release and
acquire Giant as needed in functions that call mi_switch().  The releases
need to be done outside of the sched_lock to avoid potential deadlocks
from trying to acquire Giant while interrupts are disabled.

Submitted by:	witness
2000-11-16 02:16:44 +00:00
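The pattern described above, sketched with the mutex calls of the era:

    mtx_exit(&Giant, MTX_DEF);      /* outside sched_lock: safe to block */
    mtx_enter(&sched_lock, MTX_SPIN);
    /* ... */
    mi_switch();
    mtx_exit(&sched_lock, MTX_SPIN);
    mtx_enter(&Giant, MTX_DEF);
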
jhb
3c1c213868 Argh, add in a missing release of the sched_lock. 2000-11-16 01:16:54 +00:00
jhb
534ed28202 CURSIG() calls functions that acquire sleep mutexes, so it is not a good
idea to be holding the sched_lock while we are calling it.  As such,
release sched_lock before calling CURSIG() in msleep() and mawait() and
reacquire it after CURSIG() returns.

Submitted by:	witness
2000-11-16 01:07:19 +00:00
jhb
e8689cafdb - Rename await() to mawait(). mawait() is to await() as msleep() is to
tsleep().  Namely, mawait() takes an extra argument which is a mutex
  to drop when going to sleep.  Just as with msleep(), if the priority
  argument includes the PDROP flag, then the mutex will be dropped and will
  not be reacquired when the process wakes up.
- Add in a backwards compatible macro await() that passes in NULL as the
  mutex argument to mawait().
2000-11-15 22:39:35 +00:00
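A sketch of the PDROP behaviour shared by msleep() and mawait()
(names are illustrative; pre-rename mtx_enter() spelling):

    mtx_enter(&foo_mtx, MTX_DEF);
    /* ... nothing to do yet; sleep and let the mutex go ... */
    msleep(&foo_chan, &foo_mtx, PZERO | PDROP, "foowt", 0);
    /* foo_mtx is not held here: PDROP skips the reacquire. */
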
jhb
b73beb3c1c - Replace a KASSERT() that knew too much about mutex internals with a
mtx_assert() that ensures the mutex we release during msleep() is both
  not recursed and owned by the current process.
2000-11-15 22:30:48 +00:00
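That is, the internal KASSERT() became the standard assertion form:

    mtx_assert(mtx, MA_OWNED | MA_NOTRECURSED);
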
jhb
84b861923d - Convert references from tsleep() -> msleep()
- Fix a buglet in a comment above await()
2000-11-15 22:27:38 +00:00
jhb
9f1c0ff984 - GC some #if 0'd code regarding the non-existent safepri variable.
- Don't dink with the witness state of Giant unless we actually own it
  during mi_switch().
2000-10-20 07:52:10 +00:00
jhb
f53c62463d - Change fast interrupts on x86 to push a full interrupt frame and to
return through doreti to handle ast's.  This is necessary for the
  clock interrupts to work properly.
- Change the clock interrupts on the x86 to be fast instead of threaded.
  This is needed because both hardclock() and statclock() need to run in
  the context of the current process, not in a separate thread context.
- Kill the prevproc hack as it is no longer needed.
- We really need Giant when we call psignal(), but we don't want to block
  during the clock interrupt.  Instead, use two p_flag bits in the proc struct
  to mark the current process as having a pending SIGVTALRM or a SIGPROF
  and let them be delivered during ast() when hardclock() has finished
  running.
- Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyways.  It was
  broken on the x86 if it was turned on since cpl is gone.  Its only use
  was to bogusly run softclock() directly during hardclock() rather than
  scheduling an SWI.
- Remove the COM_LOCK simplelock and replace it with a clock_lock spin
  mutex.  Since the spin mutex already handles disabling/restoring
  interrupts appropriately, this also lets us axe all the *_intr() fu.
- Back out the hacks in the APIC_IO x86 cpu_initclocks() code to use
  temporary fast interrupts for the APIC trial.
- Add two new process flags P_ALRMPEND and P_PROFPEND to mark the pending
  signals in hardclock() that are to be delivered in ast().

Submitted by:	jakeb (making statclock safe in a fast interrupt)
Submitted by:	cp (concept of delaying signals until ast())
2000-10-06 02:20:21 +00:00
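A sketch of the deferral described above: the clock interrupt only
marks the flag, and ast() delivers the signal once Giant can be taken
safely:

    /* hardclock(), interrupt context: */
    p->p_flag |= P_ALRMPEND;        /* defer SIGVTALRM to ast() */

    /* ast(), on return to user mode, Giant held: */
    if (p->p_flag & P_ALRMPEND) {
            p->p_flag &= ~P_ALRMPEND;
            psignal(p, SIGVTALRM);
    }
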
jhb
087265d109 Add a KASSERT() to catch instances where the mutex that we pass in to
msleep() is recursed.

Suggested by:	cp
2000-09-24 00:33:51 +00:00
jhb
c2fb24c667 Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just
use struct mtx, struct witness, and struct witness_blessed.

Requested by:	bde
2000-09-14 20:15:16 +00:00
jake
7d8b514c81 Rename tsleep to msleep and add a mutex argument, which is
released before sleeping and re-acquired before msleep
returns.  A compatibility cpp macro has been provided for
tsleep to avoid changing all occurrences of it in the kernel.

Remove an assertion that the Giant mutex be held before
calling tsleep or asleep.

This is intended to serve the same purpose as condition
variables, but does not preclude their addition in the
future.

Approved by:	jasone
Obtained from:	BSD/OS
2000-09-11 00:20:02 +00:00
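The compatibility shim plausibly reads:

    #define tsleep(chan, pri, wmesg, timo) \
            msleep((chan), NULL, (pri), (wmesg), (timo))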