Commit Graph

32 Commits

Author SHA1 Message Date
Julian Elischer
71fad9fdee Completely redo thread states.
Reviewed by:	davidxu@freebsd.org
2002-09-11 08:13:56 +00:00
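The log does not spell out the new state machine; a minimal sketch of the shape of such a rework, with the state and inhibitor names assumed rather than quoted from the commit:

    /* Assumed sketch: one state field per thread, plus a reason mask
     * recording why an inhibited thread cannot run. */
    enum td_states {
            TDS_INACTIVE,           /* not yet, or no longer, schedulable */
            TDS_INHIBITED,          /* blocked; td_inhibitors says why */
            TDS_CAN_RUN,            /* runnable but not on a run queue */
            TDS_RUNQ,               /* sitting on a run queue */
            TDS_RUNNING             /* executing on a CPU right now */
    };

    /* Assumed inhibitor reasons, kept in a separate mask so a thread
     * can be, e.g., both sleeping and swapped out at once: */
    #define TDI_SUSPENDED   0x01
    #define TDI_SLEEPING    0x02
    #define TDI_SWAPPED     0x04
    #define TDI_LOCK        0x08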
David Xu
67bdda9718 fix bogus CTR3 message.
Reviewed by: julian@freebsd.org (mentor)
2002-09-02 07:55:06 +00:00
Peter Wemm
d13947c3b0 updatepri() works on a ksegrp (where the scheduling parameters are), so
directly give it the ksegrp instead of the thread.  The only thing it used
to use in the thread was the ksegrp.

Reviewed by:	julian
2002-08-28 23:45:15 +00:00
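In other words, a diff-style sketch of the interface change (exact prototypes assumed):

    -static void updatepri(struct thread *td);   /* only used td->td_ksegrp */
    +static void updatepri(struct ksegrp *kg);   /* take the ksegrp directly */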
Julian Elischer
b8e45df779 Remove code that removes thread from sleep queue before
adding it to a condvar wait.
We do not have asleep() any more so this can not happen.
2002-07-30 20:34:30 +00:00
Seigo Tanimura
133267776c In endtsleep() and cv_timedwait_end(), a thread marked TDF_TIMEOUT may
be swapped out.  Do not put such a thread directly back on the run
queue.

Spotted by:	David Xu <davidx@viasoft.com.cn>

While I am here, s/PS_TIMEOUT/TDF_TIMEOUT/.
2002-07-30 10:12:11 +00:00
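A hedged sketch of the fix described (simplified; setrunnable() is assumed here as the helper that also copes with a swapped-out thread):

    /* In endtsleep(): a TDF_TIMEOUT thread may have been swapped out,
     * so it must not go straight onto a run queue. */
    if (td->td_flags & TDF_TIMEOUT) {
            td->td_flags &= ~TDF_TIMEOUT;
            setrunnable(td);        /* wakes the swapper if the thread
                                       is not in memory, rather than
                                       calling setrunqueue() directly */
    }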
Seigo Tanimura
9eb881f804 - Optimize wakeup() and its friends; if a thread being woken up is
  already being swapped in, we do not have to ask the scheduler thread to
  do that.

- Assert that a process is not swapped out in runq functions and
  swapout().

- Introduce thread_safetoswapout() for readability.

- In swapout_procs(), perform a test that may block (check of a
  thread working on its vm map) first.  This lets us call swapout()
  with the sched_lock held, providing better atomicity.
2002-07-30 06:54:05 +00:00
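The log names the new readability helper but not its body; a hedged guess at its shape (the condition shown is an assumption, not the committed code):

    static __inline int
    thread_safetoswapout(struct thread *td)
    {
            /* Never swap out a thread executing on a CPU. */
            return (td->td_state != TDS_RUNNING);
    }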
Julian Elischer
1d7b9ed2e6 Create a new thread state to describe threads that would be ready to run
except for the fact that they are presently swapped out. Also add a process
flag to indicate that the process has started the struggle to swap
back in. This will be needed for the case where multiple threads
start the swapin action, to avoid a collision. Also add code to stop
a process from being swapped out if one of the threads in this
process is actually off running on another CPU; that might hurt...

Submitted by:	Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
2002-07-29 18:33:32 +00:00
Andrew Gallatin
fe79953325 Allow alphas to do crashdumps: Refuse to run anything in choosethread()
after a panic which is not an interrupt thread, or the thread which
caused the panic.  Also, remove panicstr checks from msleep() and from
cv_wait() in order to allow threads to go to sleep and yield the cpu
to the panicking thread, or to an interrupt thread which might
be doing the crashdump.

Reviewed by: jhb  (and it was mostly his idea too)
2002-07-17 02:23:44 +00:00
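A hedged sketch of the gating described (pick_next_runnable() and TD_IS_INTERRUPT() are hypothetical stand-ins; TDF_INPANIC is assumed to mark the thread that panicked):

    struct thread *
    choosethread(void)
    {
            struct thread *td;
    retry:
            td = pick_next_runnable();      /* hypothetical picker */
            if (panicstr != NULL && !TD_IS_INTERRUPT(td) &&
                (td->td_flags & TDF_INPANIC) == 0)
                    goto retry;     /* post-panic: run only interrupt
                                       threads and the panicking thread
                                       (the real code requeues what it
                                       skips) */
            return (td);
    }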
Julian Elischer
d5cb7e14f6 Fix failure to correctly transition back to sleep mode. 2002-07-02 05:33:46 +00:00
Julian Elischer
e602ba25fd Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
To come: ia64 and power-pc patches, patches for gdb, test program (in tools)

Reviewed by:	Almost everyone who counts
	(at various times, peter, jhb, matt, alfred, mini, bernd,
	and a cast of thousands)

	NOTE: this is still Beta code, and contains lots of debugging stuff.
	Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
John Baldwin
9ba7fe1b76 - Catch up to new ktrace API.
- ktrace trace points in msleep() and cv_wait() no longer need Giant.
2002-06-07 05:39:16 +00:00
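For reference, the trace points in question follow roughly this pattern (assumed from the shape of the new API):

    /* In msleep()/cv_wait(): record context-switch events; with the
     * new ktrace API this no longer requires Giant. */
    if (KTRPOINT(td, KTR_CSW))
            ktrcsw(1, 0);           /* entering sleep */
    /* ... sleep ... */
    if (KTRPOINT(td, KTR_CSW))
            ktrcsw(0, 0);           /* resuming */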
Julian Elischer
628855e758 CURSIG() is not a macro so rename it cursig().
Obtained from:	KSE tree
2002-05-29 23:44:32 +00:00
Jeffrey Hsu
4bc37205bc The cold and panicstr variables do not need to be protected by sched_lock.
Submitted by:	Jennifer Yang (yangjihui@yahoo.com)
Reviewed by:	jake & jhb in principle
2002-04-23 19:50:22 +00:00
Dan Moschuk
e7876c0943 Nuke CV_DEBUG in favour of INVARIANTS.
Approved by: jhb
2002-03-30 03:52:52 +00:00
Julian Elischer
2c1007663f In a threaded world, different priorities become properties of
different entities.  Make it so.

Reviewed by:	jhb@freebsd.org (john baldwin)
2002-02-11 20:37:54 +00:00
John Baldwin
c86b6ff551 Change the preemption code for software interrupt thread schedules and
mutex releases to not require flags for the cases when preemption is
not allowed:

The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi schedule,
respectively, when that switch is not safe.  Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer.  This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH, when in fact only the ones called from
fast interrupt handlers and the swi_sched of softclock needed this flag.
Note that to ensure that swi_sched()'s in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs.  Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called.  (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)

I've tested the changes to the interrupt code on i386 and alpha.  I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine.  PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken.  Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.

Reviewed by:	peter
Tested on:	i386, alpha
2002-01-05 08:47:13 +00:00
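A hedged sketch of the wrapping described, with a hypothetical fast handler (critical_enter()/critical_exit() and swi_sched() are the real interfaces; the handler and cookie are made up):

    static void *example_swi_cookie;        /* hypothetical, from swi_add() */

    /* The critical section keeps swi_sched() from switching out from
     * under the handler; any deferred preemption happens at
     * critical_exit(). */
    static void
    example_fast_intr(void *arg)
    {
            critical_enter();
            /* ... acknowledge the hardware, queue work ... */
            swi_sched(example_swi_cookie, 0);       /* no SWI_NOSWITCH */
            critical_exit();
    }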
David E. O'Brien
a48740b6c5 Update to C99, s/__FUNCTION__/__func__/. 2001-12-10 05:51:45 +00:00
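That is:

    -       printf("%s: called\n", __FUNCTION__);   /* GCC extension */
    +       printf("%s: called\n", __func__);       /* standard C99 */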
Peter Wemm
66f769fe39 Add missing ; in last commit
Pointy-hat-to: jhb
2001-09-19 02:53:59 +00:00
John Baldwin
9ef3a9855d Use a 'p' variable instead of repetitively indirecting td->td_proc for
signal things that are still per-process and won't be per-thread.
2001-09-18 23:27:06 +00:00
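A sketch of the pattern (the signal-set line is illustrative, not quoted from the commit):

    struct proc *p = td->td_proc;   /* hoist the indirection once */

    PROC_LOCK(p);
    SIGDELSET(p->p_siglist, sig);   /* rather than td->td_proc->p_siglist */
    PROC_UNLOCK(p);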
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
John Baldwin
91a4536f22 - Fix a bug in the previous workaround for the tsleep/endtsleep race.
callout_stop() would fail in two cases:
    1) The timeout was currently executing, and
    2) The timeout had already executed.
  We only needed to work around the race for 1).  We caught some instances
  of 2) via the PS_TIMEOUT flag, however, if endtsleep() fired after the
  process had been woken up but before it had resumed execution,
  PS_TIMEOUT would not be set, but callout_stop() would fail, so we
  would block the process until endtsleep() resumed it.  Except that
  endtsleep() had already run and couldn't resume it.  This adds a new flag
  PS_TIMOFAIL to indicate the case of 2) when PS_TIMEOUT isn't set.
- Implement this race fix for condition variables as well.

Tested by:	sos
2001-08-21 18:42:45 +00:00
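A hedged reconstruction of the check in msleep() (locking elided; p_slpcallout and the flag handling are assumptions from that era's proc layout):

    if (timo && callout_stop(&p->p_slpcallout) == 0) {
            /* callout_stop() failed: endtsleep() ran, or is running. */
            if (p->p_sflag & PS_TIMOFAIL) {
                    /* Case 2: it already ran and nothing will resume
                     * us, so do not block. */
                    p->p_sflag &= ~PS_TIMOFAIL;
            } else {
                    /* Case 1: it is running concurrently; sleep until
                     * it puts us back on the run queue. */
                    p->p_sflag |= PS_TIMEOUT;
                    mi_switch();
            }
    }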
Jake Burkholder
d652b3d918 Backout mwakeup, etc. 2001-07-06 01:16:43 +00:00
Jake Burkholder
9316aed2ef Implement mwakeup, mwakeup_one, cv_signal_drop and cv_broadcast_drop.
These take an additional mutex argument, which is dropped before any
processes are made runnable.  This can avoid contention on the mutex
if the processes would immediately acquire it, and is done in such a
way that wakeups will not be lost.

Reviewed by:	jhb
2001-07-04 00:32:50 +00:00
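A hedged usage sketch (signature assumed from the description; note these interfaces were backed out two days later, per the 2001-07-06 entry above):

    mtx_lock(&queue_mtx);
    TAILQ_INSERT_TAIL(&work_queue, item, w_link);
    /* Drop queue_mtx before the waiters become runnable, so they do
     * not wake up only to contend on a mutex we still hold: */
    cv_broadcast_drop(&work_cv, &queue_mtx);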
John Baldwin
87f9ffb805 - Lock CURSIG with the proc lock and don't release the proc lock until
after grabbing the sched lock to close a race.
- Lock ktrace points with Giant.
2001-06-22 23:06:38 +00:00
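The ordering described looks roughly like this (simplified):

    PROC_LOCK(p);
    sig = CURSIG(p);                /* signal check under the proc lock */
    mtx_lock_spin(&sched_lock);     /* grab sched_lock first ... */
    PROC_UNLOCK(p);                 /* ... then drop the proc lock,
                                       leaving no window between them */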
Mark Murray
fb919e4d5a Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
John Baldwin
c739adbf42 Pass in a pointer to the mutex's lock_object as the second argument to
WITNESS_SLEEP() rather than the mutex itself.
2001-03-28 10:41:15 +00:00
John Baldwin
192846463a Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects.  Each lock class specifies a
  name and set of flags (or properties) shared by all locks of a given
  type.  Currently there are three lock classes: spin mutexes, sleep
  mutexes, and sx locks.  A lock object specifies properties of an
  additional lock along with a lock name and all of the extra stuff needed
  to make witness work with a given lock.  This abstract lock stuff is
  defined in sys/lock.h.  The lockmgr constants, types, and prototypes have
  been moved to sys/lockmgr.h.  For temporary backwards compatibility,
  sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin
  locks held.  By making this per-cpu, we do not have to jump through
  magic hoops to deal with sched_lock changing ownership during context
  switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
  proc->p_sleeplocks, which is a list of held sleep locks including sleep
  mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging
  level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness.
    This is used for the mutex that blocks a sx lock for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now
    and no events will be logged for this lock, so that one doesn't have
    to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag.  Use this flag to export
  a mtx_initialized() macro that can be safely called from drivers.  Also,
  we no longer walk the all_mtx list if MUTEX_DEBUG is defined, as witness
  performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly
  more accurate file and line numbers.
2001-03-28 09:03:24 +00:00
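A hedged usage sketch of the new flags and the driver-safe probe (the three-argument mtx_init() of this era and the lock names are assumed):

    /* Keep witness away from the mutex backing an sx lock, and keep
     * KTR_LOCK quiet for a very hot lock: */
    mtx_init(&sx->sx_lock, "sx backing lock", MTX_DEF | MTX_NOWITNESS);
    mtx_init(&hot_mtx, "hot lock", MTX_DEF | MTX_QUIET);

    /* Drivers can now probe initialization state safely: */
    if (!mtx_initialized(&sc->sc_mtx))
            mtx_init(&sc->sc_mtx, "example softc", MTX_DEF);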
John Baldwin
28aa95b6ee Use the proc lock to protect access to p_sigacts->ps_sigintr. 2001-03-07 03:26:39 +00:00
Jake Burkholder
d5a08a6065 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
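A hedged sketch of the compatibility wrappers mentioned above (bodies assumed):

    static struct runq runq;        /* the single global run queue, for now */

    /* Old interface, kept as thin wrappers over the new
     * run-queue-as-parameter functions: */
    void
    setrunqueue(struct proc *p)
    {
            runq_add(&runq, p);
    }

    void
    remrunqueue(struct proc *p)
    {
            runq_remove(&runq, p);
    }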
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
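The caller-side conversion, sketched with two real locks of each type (flag placement on the _flags wrappers is illustrative):

    /* Old interface: one entry point, the type argument selects: */
    mtx_enter(&Giant, MTX_DEF);        mtx_exit(&Giant, MTX_DEF);
    mtx_enter(&sched_lock, MTX_SPIN);  mtx_exit(&sched_lock, MTX_SPIN);

    /* New interface: the entry point encodes the lock type: */
    mtx_lock(&Giant);                  mtx_unlock(&Giant);
    mtx_lock_spin(&sched_lock);        mtx_unlock_spin(&sched_lock);

    /* The two surviving flags go through the _flags wrappers: */
    mtx_lock_flags(&Giant, MTX_QUIET);
    mtx_unlock_flags(&Giant, MTX_NOSWITCH);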
John Baldwin
981808d145 Catch up to P_FOO -> PS_FOO changes in proc flags. 2001-01-24 10:44:01 +00:00
Jason Evans
238510fc46 Implement condition variables. 2001-01-16 01:00:43 +00:00
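A hedged sketch of the new primitive in use; names are hypothetical, and the mtx_lock() spelling postdates this commit (it comes from the 2001-02-09 entry above):

    static struct mtx work_mtx;             /* mtx_init'ed elsewhere */
    static struct cv  work_cv;              /* cv_init(&work_cv, "work") */

    /* Consumer: work_mtx is atomically released while asleep and
     * reacquired before cv_wait() returns, so rechecking is safe. */
    mtx_lock(&work_mtx);
    while (TAILQ_EMPTY(&work_queue))
            cv_wait(&work_cv, &work_mtx);
    item = TAILQ_FIRST(&work_queue);
    TAILQ_REMOVE(&work_queue, item, w_link);
    mtx_unlock(&work_mtx);

    /* Producer: */
    mtx_lock(&work_mtx);
    TAILQ_INSERT_TAIL(&work_queue, item, w_link);
    cv_signal(&work_cv);
    mtx_unlock(&work_mtx);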