Commit Graph

314 Commits

Author SHA1 Message Date
jhb
e4bcd25517 Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
to WITNESS_WARN().
2003-03-04 21:03:05 +00:00
harti
e30134bc39 When a process has been waiting on a condition variable or mutex the
td_wmesg field in the thread structure points to the description string of
the condition variable or mutex. If the condvar or the mutex had been
initialized from a loadable module that was unloaded in the meantime,
td_wmesg may now point to invalid memory. Retrieving the process table now
may panic the kernel (or access junk). Setting the td_wmesg field to NULL
after unblocking on the condvar/mutex prevents this panic.

PR:		kern/47408
Approved by:	jake (mentor)
2003-02-27 08:43:27 +00:00
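
A minimal sketch of the fix described above, as it would sit in the unblock
path of the sleep/condvar code; td_wmesg is the field named in the commit,
the surrounding details are illustrative:

        /*
         * The thread is runnable again; clear the wait channel and the
         * wait message so that td_wmesg can never be left dangling into
         * text belonging to a module that has since been unloaded.
         */
        td->td_wchan = NULL;
        td->td_wmesg = NULL;
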
julian
3fc9836d46 Change the process flags P_KSES to be P_THREADED.
This is just a cosmetic change but I've been meaning to do it for about a year.
2003-02-27 02:05:19 +00:00
julian
af55753a06 Move a bunch of flags from the KSE to the thread.
I was in two minds as to where to put them in the first case..
I should have listened to the other mind.

Submitted by:	 parts by davidxu@
Reviewed by:	jeff@ mini@
2003-02-17 09:55:10 +00:00
alfred
952ec160dd Print a backtrace in case we tsleep from inside of DDB. 2003-02-14 12:44:07 +00:00
julian
dde96893c9 Add code to ddb to allow backtracing an arbitrary thread.
(show thread {address})

Remove the IDLE kse state and replace it with a change in
the way threads share KSEs. Every KSE now has a thread, which is
considered its "owner"; however, a KSE may also be lent to other
threads in the same group to allow completion of in-kernel work.
In this case the owner remains the same and the KSE will revert to the
owner when the other work has been completed.

All creation of upcalls etc. is now done from
kse_reassign() which in turn is called from mi_switch or
thread_exit(). This means that special code can be removed from
msleep() and cv_wait().

kse_release() does not leave a KSE with no thread any more but
converts the existing thread into the KSE's owner, and sets it up
for doing an upcall. It is just inhibited from being scheduled until
there is some reason to do an upcall.

Remove all trace of the kse_idle queue since it is no-longer needed.
"Idle" KSEs are now on the loanable queue.
2002-12-28 01:23:07 +00:00
julian
9868d96f1f Unbreak the KSE code. Keep track of zombie threads using the per-CPU storage
during the context switch. Rearrange thread cleanups
to avoid problems with Giant. Clean threads when freed or
when recycled.

Approved by:	re (jhb)
2002-12-10 02:33:45 +00:00
jeff
07d36defef - Move FSCALE back to kern_sync. This is not scheduler specific.
- Create a new callout for lbolt and move it out of schedcpu().  This is not
   scheduler specific either.

Approved by:	re
2002-11-21 08:57:08 +00:00
davidxu
7531bd3c2f Add an actual implementation of kse_thr_interrupt() 2002-10-30 02:28:41 +00:00
julian
4ea837f673 More work on the interaction between suspending and sleeping threads.
Also clean up some code used with 'single-threading'.

Reviewed by:	davidxu
2002-10-25 07:11:12 +00:00
jeff
ef4d4e378e - Create a new scheduler api that is defined in sys/sched.h
- Begin moving scheduler specific functionality into sched_4bsd.c
 - Replace direct manipulation of scheduler data with hooks provided by the
   new api.
 - Remove KSE specific state modifications and single runq assumptions from
   kern_switch.c

Reviewed by:	-arch
2002-10-12 05:32:24 +00:00
jhb
7cc0ed53c2 - Move p_cpulimit to struct proc from struct plimit and protect it with
sched_lock.  This means that we no longer access p_limit in mi_switch()
  and the p_limit pointer can be protected by the proc lock.
- Remove PRS_ZOMBIE check from CPU limit test in mi_switch().  PRS_ZOMBIE
  processes don't call mi_switch(), and even if they did there is no longer
  the danger of p_limit being NULL (which is what the original zombie check
  was added for).
- When we bump the current processes soft CPU limit in ast(), just bump the
  private p_cpulimit instead of the shared rlimit.  This fixes an XXX for
  some value of fix.  There is still a (probably benign) bug in that this
  code doesn't check that the new soft limit exceeds the hard limit.

Inspired by:	bde (2)
2002-10-09 17:17:24 +00:00
julian
6b6ba96b60 Round out the facility for a 'bound' thread to loan out its KSE
in specific situations. The owner thread must be blocked, and the
borrower can not proceed back to user space with the borrowed KSE.
The borrower will return the KSE on the next context switch where
the owner wants it back. This removes a lot of possible
race conditions and deadlocks. It is conceivable that the
borrower should inherit the priority of the owner too, but
that's another discussion and would be simple to do.

Also, as part of this, the "preallocated spare thread" is attached to the
thread doing a syscall rather than the KSE. This removes the need to lock
the scheduler when we want to access it, as it's now "at hand".

DDB now shows a lot more info for threaded processes, though it may need
some optimisation to squeeze it all back into 80 chars again.
(possible JKH project)

Upcalls are now "bound" threads, but "KSE Lending" now means that
other completing syscalls can be completed using that KSE before the upcall
finally makes it back to the UTS. (getting threads OUT OF THE KERNEL is
one of the highest priorities in the KSE system.) The upcall when it happens
will present all the completed syscalls to the KSE for selection.
2002-10-09 02:33:36 +00:00
jmallett
2eba9107fd XXX Add a check for p->p_limit being NULL before dereferencing it. This is
totally bogus but will hide the occurrences of access of 0xbc(NULL) which
people have run into lately.  This is not a proper fix, just a bandaid, until
the cause of this happening is tracked down and fixed.

Reviewed by:	rwatson
2002-10-03 04:09:00 +00:00
jhb
8c9a393a04 Rename the mutex thread and process states to use a more generic 'LOCK'
name instead.  (e.g., SLOCK instead of SMTX, TD_ON_LOCK() instead of
TD_ON_MUTEX())  Eventually a turnstile abstraction will be added that
will be shared with mutexes and other types of locks.  SLOCK/TDI_LOCK will
be used internally by the turnstile code and will not be specific to
mutexes.  Making the change now ensures that turnstiles can be dropped
in at a later date without affecting the ABI of userland applications.
2002-10-02 20:31:47 +00:00
jhb
02678fb50e - Adjust comment noting that handling of CPU limit exhaustion is done in
ast().
- Actually set KEF_ASTPENDING so ast() is called.  I think this is buggy
  for a process with multiple KSE's in that PS_XCPU is not a KSE event,
  it's a process-wide event.  IMO there really should probably be two
  ASTPENDING flags, one for per-process, and one for per-KSE.

Submitted by:	bde
2002-10-01 14:10:08 +00:00
jhb
f350519764 - Add a new per-process flag PS_XCPU to indicate that at least one thread
has exceeded its CPU time limit.
- In mi_switch(), set PS_XCPU when the CPU time limit is exceeded.
- Perform actual CPU time limit exceeded work in ast() when PS_XCPU is set.

Requested by:	many
2002-09-30 21:13:54 +00:00
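
A rough sketch of the split described above; PS_XCPU, p_sflag and SIGXCPU
come from the commits here and nearby, the rest (locking detail, choice of
action) is illustrative:

        /*
         * mi_switch(): when the accumulated CPU time passes the soft
         * limit, only mark the process and request an AST.
         */
        p->p_sflag |= PS_XCPU;

        /*
         * ast(): do the real limit handling once heavier work is safe.
         */
        if (p->p_sflag & PS_XCPU) {
                mtx_lock_spin(&sched_lock);
                p->p_sflag &= ~PS_XCPU;
                mtx_unlock_spin(&sched_lock);
                PROC_LOCK(p);
                psignal(p, SIGXCPU);    /* or kill at the hard limit */
                PROC_UNLOCK(p);
        }
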
julian
d91c37553e Implement basic KSE loaning. This stops a thread that is blocked in BOUND mode
from stopping another thread from completing a syscall, and this allows it to
release its resources etc. Probably more related commits to follow (at least
one I know of)

Initial concept by: julian, dillon
Submitted by:	davidxu
2002-09-29 23:04:34 +00:00
julian
5702a380a5 Completely redo thread states.
Reviewed by:	davidxu@freebsd.org
2002-09-11 08:13:56 +00:00
julian
b085f4e6d2 Rejig the code to figure out estcpu and work out how long a KSEGRP has been
idle. What was there before was surprisingly ALMOST correct.

Peter and I fried our brains on this for a couple of hours figuring out
what this actually means in the context of multiple threads.

Reviewed by:	peter@freebsd.org
2002-08-30 00:25:49 +00:00
peter
a225758933 updatepri() works on a ksegrp (where the scheduling parameters are), so
directly give it the ksegrp instead of the thread.  The only thing it used
to use in the thread was the ksegrp.

Reviewed by:	julian
2002-08-28 23:45:15 +00:00
julian
b3aca85def Slight cleanup of some comments/whitespace.
Make idle process state more consistent.
Add an assert on thread state.
Clean up idleproc/mi_switch() interaction.
Use a local instead of referencing curthread 7 times in a row
(I've been told curthread can be expensive on some architectures)
Remove some commented out code.
Add a little commented out code (completion coming soon)

Reviewed by:	jhb@freebsd.org
2002-08-01 18:45:10 +00:00
tanimura
bc06c086d9 In endtsleep() and cv_timedwait_end(), a thread marked TDF_TIMEOUT may
be swapped out.  Do not put such a thread directly back on the run
queue.

Spotted by:	David Xu <davidx@viasoft.com.cn>

While I am here, s/PS_TIMEOUT/TDF_TIMEOUT/.
2002-07-30 10:12:11 +00:00
tanimura
8eb7238cad - Optimize wakeup() and its friends; if a thread being woken up is
already being swapped in, we do not have to ask the scheduler thread to do
  that.

- Assert that a process is not swapped out in runq functions and
  swapout().

- Introduce thread_safetoswapout() for readability.

- In swapout_procs(), perform a test that may block (check of a
  thread working on its vm map) first.  This lets us call swapout()
  with the sched_lock held, providing better atomicity.
2002-07-30 06:54:05 +00:00
julian
6216c4b163 Create a new thread state to describe threads that would be ready to run
except for the fact that they are presently swapped out. Also add a process
flag to indicate that the process has started the struggle to swap
back in. This will be needed for the case where multiple threads
start the swapin action, to avoid a collision. Also add code to stop
a process from being swapped out if one of the threads in this
process is actually off running on another CPU.. that might hurt...

Submitted by:	Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
2002-07-29 18:33:32 +00:00
julian
d156229356 slight stylisations to take into account recent code changes. 2002-07-24 23:59:15 +00:00
julian
d1310fef13 Fix a reversed test.
Fix some style nits.
Fix a KASSERT message.
Add/fix some comments.

Submitted by:	bde@freebsd.org
2002-07-17 19:20:48 +00:00
jhb
0ae44e9908 Add a KASSERT() to assert that td_critnest is == 1 when mi_switch() is
called.
2002-07-17 02:46:13 +00:00
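
The assertion described above amounts to roughly:

        KASSERT(td->td_critnest == 1,
            ("mi_switch: switch in a critical section"));
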
gallatin
b6368d3510 Allow alphas to do crashdumps: Refuse to run anything in choosethread()
after a panic which is not an interrupt thread, or the thread which
caused the panic.  Also, remove panicstr checks from msleep() and from
cv_wait() in order to allow threads to go to sleep and yield the cpu
to the panicking thread, or to an interrupt thread which might
be doing the crashdump.

Reviewed by: jhb  (and it was mostly his idea too)
2002-07-17 02:23:44 +00:00
julian
d84464f213 Thinking about it I came to the conclusion that the KSE states were incorrectly
formulated.  The correct states should be:
IDLE:  On the idle KSE list for that KSEG
RUNQ:  Linked onto the system run queue.
THREAD: Attached to a thread and slaved to whatever state the thread is in.

This means that most places where we were adjusting kse state can go away
as it is just moving around because the thread is..
The only places we need to adjust the KSE state is in transition to and from
the idle and run queues.

Reviewed by:	jhb@freebsd.org
2002-07-14 03:43:33 +00:00
julian
660340d306 oops, state cannot be two different values at once..
use || instead of &&
2002-07-14 01:36:48 +00:00
dillon
dc5d856e71 Re-enable the idle page-zeroing code. Remove all IPIs from the idle
page-zeroing code as well as from the general page-zeroing code and use a
lazy tlb page invalidation scheme based on a callback made at the end
of mi_switch.

A number of people came up with this idea at the same time so credit
belongs to Peter, John, and Jake as well.

Two-way SMP buildworld -j 5 tests (second run, after stabilization)
    2282.76 real  2515.17 user  704.22 sys	before peter's IPI commit
    2266.69 real  2467.50 user  633.77 sys	after peter's commit
    2232.80 real  2468.99 user  615.89 sys	after this commit

Reviewed by:	peter, jhb
Approved by:	peter
2002-07-12 20:17:06 +00:00
julian
4037477f94 make this respect ps_sigintr if there is a pre-existing signal
or suspension request.

Submitted by:	David Xu
2002-07-06 08:47:24 +00:00
julian
7e464b4d34 Fix at least one of the things wrong with signals
^Z should work a lot better now.

Submitted by:	peter@freebsd.org
2002-07-06 02:45:11 +00:00
julian
27e23d9345 Try to clean up some of the mess that resulted from layers and layers
of p4 merges from -current as things started getting different.

Corroborated by: Similar patches just mailed by BDE.
2002-07-03 09:15:20 +00:00
julian
3480b426f5 When going back to SLEEP state, make sure our
state is correctly marked as such.
2002-07-02 05:40:51 +00:00
julian
aa2dc0a5d9 Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one cpu) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)

Reviewed by:	Almost everyone who counts
	(at various times, peter, jhb, matt, alfred, mini, bernd,
	and a cast of thousands)

	NOTE: this is still Beta code, and contains lots of debugging stuff.
	expect slight instability in signals..
2002-06-29 17:26:22 +00:00
alfred
97873dcbf3 more caddr_t removal. 2002-06-29 02:00:02 +00:00
dillon
3bb3ab3df2 I noticed a defect in the way wakeup() scans the tailq. Tor noticed an
even worse defect in wakeup_one().  This patch cleans up both.

Submitted by:	tegge
MFC after:	3 days
2002-06-24 00:14:36 +00:00
jhb
fd3d90c2c8 - Catch up to new ktrace API.
- ktrace trace points in msleep() and cv_wait() no longer need Giant.
2002-06-07 05:39:16 +00:00
julian
304195369e CURSIG() is not a macro so rename it cursig().
Obtained from:	KSE tree
2002-05-29 23:44:32 +00:00
jhb
bd383063f6 Minor nit: get p pointer in msleep() from td->td_proc (where
td == curthread) rather than from curproc.
2002-05-23 04:14:18 +00:00
alfred
357e37e023 Remove __P. 2002-03-19 21:25:46 +00:00
peter
83444279ce Fix a gcc-3.1+ warning.
warning: deprecated use of label at end of compound statement

ie: you cannot do this anymore:
switch(foo) {
....

default:
}
2002-03-19 11:02:06 +00:00
phk
fa959f1afd Convert p->p_runtime and PCPU(switchtime) to bintime format. 2002-02-22 13:32:01 +00:00
julian
37369620df In a threaded world, different priorities become properties of
different entities.  Make it so.

Reviewed by:	jhb@freebsd.org (john baldwin)
2002-02-11 20:37:54 +00:00
jhb
1ce407b675 Change the preemption code for software interrupt thread schedules and
mutex releases to not require flags for the cases when preemption is
not allowed:

The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi schedule,
respectively when that switch is not safe.  Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer.  This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH, when in fact, only the ones called from
fast interrupt handlers and the swi_sched of softclock needed this flag.
Note that to ensure that swi_sched()'s in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs.  Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called.  (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)

I've tested the changes to the interrupt code on i386 and alpha.  I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine.  PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken.  Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.

Reviewed by:	peter
Tested on:	i386, alpha
2002-01-05 08:47:13 +00:00
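
A sketch of the wrapping described above, for a hypothetical fast interrupt
handler that schedules a soft interrupt (the handler body and cookie name
are made up):

        critical_enter();
        /* ... acknowledge the hardware, queue up work ... */
        swi_sched(foo_swi_cookie, 0);   /* no SWI_NOSWITCH flag needed */
        critical_exit();
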
jhb
a3b98398cb Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
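
A rough sketch of the MI wrappers described above; td_critnest, td_savecrit
and the cpu_-prefixed MD functions are named in the commit, the exact
bodies are a guess:

        void
        critical_enter(void)
        {
                struct thread *td = curthread;

                if (td->td_critnest == 0)
                        td->td_savecrit = cpu_critical_enter();
                td->td_critnest++;
        }

        void
        critical_exit(void)
        {
                struct thread *td = curthread;

                if (td->td_critnest == 1)
                        cpu_critical_exit(td->td_savecrit);
                td->td_critnest--;
        }
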
luigi
4893656ff8 Add/correct description for some sysctl variables where it was missing.
The description field is unused in -stable, so the MFC there is equivalent
to a comment. It can be done at any time, i am just setting a reminder
in 45 days when hopefully we are past 4.5-release.

MFC after: 45 days
2001-12-16 16:07:20 +00:00
jhb
972de970f5 Assert that Giant is not held in mi_switch() unless the process state
is SMTX or SRUN.
2001-10-23 17:52:49 +00:00
iedowse
af95e6836c Introduce some jitter to the timing of the samples that determine
the system load average. Previously, the load average measurement
was susceptible to synchronisation with processes that run at
regular intervals such as the system bufdaemon process.

Each interval is now chosen at random within the range of 4 to 6
seconds. This large variation is chosen so that over the shorter
5-minute load average timescale there is a good dispersion of
samples across the 5-second sample period (the time to perform 60
5-second samples now has a standard deviation of approx 4.5 seconds).
2001-10-20 16:07:17 +00:00
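
A minimal sketch of the re-arming step described above, assuming the
sampler runs from a callout named loadav_callout and uses the kernel
random() routine (both names are assumptions):

        /*
         * Re-arm for a random point 4 to 6 seconds from now so that the
         * samples cannot fall into lock step with periodic daemons.
         */
        callout_reset(&loadav_callout,
            hz * 4 + (int)(random() % (hz * 2 + 1)),
            loadav, NULL);
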
iedowse
03bd269b08 Move the code that computes the system load average from vm_meter.c
to kern_synch.c in preparation for adding some jitter to the
inter-sample time.

Note that the "vm.loadavg" sysctl still lives in vm_meter.c which
isn't the right place, but it is appropriate for the current (bad)
name of that sysctl.

Suggested by:	jhb (some time ago)
Reviewed by:	bde
2001-10-20 13:10:43 +00:00
jhb
dc14d7b097 GC some #if 0'd code. 2001-09-21 19:21:18 +00:00
jhb
fda5e26af0 Whitespace and spelling fixes. 2001-09-21 19:16:12 +00:00
julian
5596676e6c KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doosie!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
dillon
4b176348af Make yield() MPSAFE.
Synchronize syscalls.master with all MPSAFE changes to date.  A new
syscall generation follows because yield() will panic if it is out
of sync with syscalls.master.
2001-09-01 03:54:09 +00:00
jhb
88c481e309 Release the sched_lock before bombing out in mi_switch() via db_error().
This makes things slightly easier if you call a function that calls
mi_switch() as it keeps the locking before and after closer.
2001-08-21 23:10:37 +00:00
jhb
2d8b874618 Add a hook to mi_switch() to abort via db_error() if we attempt to
perform a context switch from DDB.

Consulting from:	bde
2001-08-21 20:09:05 +00:00
jhb
4f6c3a9342 - Fix a bug in the previous workaround for the tsleep/endtsleep race.
callout_stop() would fail in two cases:
    1) The timeout was currently executing, and
    2) The timeout had already executed.
  We only needed to work around the race for 1).  We caught some instances
  of 2) via the PS_TIMEOUT flag, however, if endtsleep() fired after the
  process had been woken up but before it had resumed execution,
  PS_TIMEOUT would not be set, but callout_stop() would fail, so we
  would block the process until endtsleep() resumed it.  Except that
  endtsleep() had already run and couldn't resume it.  This adds a new flag
  PS_TIMOFAIL to indicate the case of 2) when PS_TIMEOUT isn't set.
- Implement this race fix for condition variables as well.

Tested by:	sos
2001-08-21 18:42:45 +00:00
jhb
4a89454dcd - Close races with signals and other AST's being triggered while we are in
the process of exiting the kernel.  The ast() function now loops as long
  as the PS_ASTPENDING or PS_NEEDRESCHED flags are set.  It returns with
  preemption disabled so that any further AST's that arrive via an
  interrupt will be delayed until the low-level MD code returns to user
  mode.
- Use u_int's to store the tick counts for profiling purposes so that we
  do not need sched_lock just to read p_sticks.  This also closes a
  problem where the call to addupc_task() could screw up the arithmetic
  due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
  clear_resched(), and resched_wanted() in favor of direct bit operations
  on p_sflag.
- Fix up locking with sched_lock some.  In addupc_intr(), use sched_lock
  to ensure pr_addr and pr_ticks are updated atomically with setting
  PS_OWEUPC.  In ast() we clear pr_ticks atomically with clearing
  PS_OWEUPC.  We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.

Reviewed by:	bde (mostly)
2001-08-10 22:53:32 +00:00
jhb
54a0ded8d4 Work around a race between msleep() and endtsleep() where it was possible
for endtsleep() to be executing when msleep() resumed, for endtsleep()
to spin on sched_lock long enough for the other process to loop on
msleep() and sleep again resulting in endtsleep() waking up the "wrong"
msleep.

Obtained from:	BSD/OS
2001-08-10 21:08:56 +00:00
jhb
18c8b84e2f Style nit: convert a couple of if (p_wchan) tests to if (p_wchan != NULL). 2001-08-10 20:56:25 +00:00
jhb
2ff1c253cd - Remove asleep(), await(), and M_ASLEEP.
- Callers of asleep() and await() have been converted to calling tsleep().
  The only caller outside of M_ASLEEP was the ata driver, which called both
  asleep() and await() with spl-raised, so there was no need for the
  asleep() and await() pair.  M_ASLEEP was unused.

Reviewed by:	jasone, peter
2001-08-10 06:37:05 +00:00
jhb
e712875281 Use 'p' instead of the potentially more expensive 'curproc' inside of
mi_switch().
2001-08-02 22:15:31 +00:00
jhb
6394826cde Apply the cluebat to myself and undo the await() -> mawait() rename. The
asleep() and await() functions split the functionality of msleep() up into
two halves.  Only the asleep() half (which is what puts the process on the
sleep queue) actually needs the lock usually passed to msleep() held to
prevent lost wakeups.  await() does not need the lock held, so the lock
can be released prior to calling await() and does not need to be passed in
to the await() function.  Typical usage of these functions would be as
follows:

        mtx_lock(&foo_mtx);
        ... do stuff ...
        asleep(&foo_cond, PRIxx, "foowt", hz);
        ...
        mtx_unlock(&foo_mtx);
        ...
        await(-1, -1);

Inspired by:	dillon on the couch at Usenix
2001-07-31 22:06:56 +00:00
jhb
a5cd152fc8 Add a safety belt to mawait() for the (cold || panicstr) case identical to
the one in msleep() such that we return immediately rather than blocking.

Submitted by:	peter
Prodded by:	sheldonh
2001-07-31 20:57:57 +00:00
jake
0227d4f3f6 Backout mwakeup, etc. 2001-07-06 01:16:43 +00:00
jake
33e85623fa Implement mwakeup, mwakeup_one, cv_signal_drop and cv_broadcast_drop.
These take an additional mutex argument, which is dropped before any
processes are made runnable.  This can avoid contention on the mutex
if the processes would immediately acquire it, and is done in such a
way that wakeups will not be lost.

Reviewed by:	jhb
2001-07-04 00:32:50 +00:00
jhb
74c2e58245 Remove commented-out garbage that skipped updating schedcpu() stats for
ithreads in SWAIT.
2001-07-03 08:03:56 +00:00
jhb
141f7800c8 Just check p_oncpu when determining if a process is executing or not.
We already did this in the SMP case, and it is now maintained in the UP
case as well, and makes the code slightly more readable.  Note that
curproc is always executing, thus the p != curproc test does not need to
be performed if the p_oncpu check is made.
2001-07-03 08:00:57 +00:00
jhb
f8917af0a6 Axe spl's that are covered by the sched_lock (and have been for quite
some time.)
2001-07-03 07:53:35 +00:00
jhb
d5b88d1293 Include the wait message and channel for msleep() in the KTR tracepoint. 2001-07-03 07:39:06 +00:00
jhb
774b040e8a Remove bogus need_resched() of the current CPU in roundrobin().
We don't actually need to force a context switch of the current process.
The act of firing the event triggers a context switch to softclock() and
then switching back out again which is equivalent to a preemption, thus
no further work is needed on the local CPU.
2001-07-03 05:33:09 +00:00
jhb
dcedf89d63 Make the schedlock saved critical section state a per-thread property. 2001-06-30 03:11:26 +00:00
jhb
dfa68807f4 - Lock CURSIG() with the proc lock to close the signal race with psignal.
- Grab Giant around ktrace points.
- Clean up KTR_PROC tracepoints to not display the value of
  sched_lock.mtx_lock as it isn't really needed anymore and just obfuscates
  the messages.
- Add a few if conditions to replace gotos.
- Ensure that every msleep KTR event ends up with a matching msleep resume
  KTR event (this was broken when we didn't do a mi_switch()).
- Only note via ktrace that we resumed from a switch once rather than twice
  in several places in msleep().
- Remove spl's from asleep and await as the proc lock and sched_lock provide
  all the needed locking.
- In mawait() add in a needed ktrace point for noting that we are about to
  switch out.
2001-06-22 23:11:26 +00:00
jhb
3f4e4d353c Add in assertions to ensure that we always call msleep or mawait with
either a timeout or a held mutex to detect unprotected infinite sleeps
that can easily lead to deadlock.

Submitted by:	alfred
2001-05-23 19:38:26 +00:00
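
The check described above presumably reduces to an assertion near the top
of msleep()/mawait() along these lines (the exact condition and message
are a guess):

        KASSERT(timo != 0 || mtx != NULL,
            ("sleeping without a timeout or a mutex"));
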
alfred
81a30a0ee0 Remove KASSERT test for sleeping on vm_mtx, instead let WITNESS catch
it.

Requested by: jhb
2001-05-22 00:58:20 +00:00
alfred
8aa6c173b0 remove my private assertions from tsleep.
add one assertion to ensure we don't sleep while holding vm.
2001-05-19 01:40:48 +00:00
alfred
a3f0842419 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

faults can not be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
jhb
1ee607ffea - Remove unneeded include of sys/ipl.h.
- Lock the process before calling killproc() to kill it for exceeding the
  maximum CPU limit.
2001-05-15 23:15:06 +00:00
jhb
8bfdafc934 Overhaul of the SMP code. Several portions of the SMP kernel support have
been made machine independent and various other adjustments have been made
to support Alpha SMP.

- It splits the per-process portions of hardclock() and statclock() off
  into hardclock_process() and statclock_process() respectively.  hardclock()
  and statclock() call the *_process() functions for the current process so
  that UP systems will run as before.  For SMP systems, it is simply necessary
  to ensure that all other processors execute the *_process() functions when the
  main clock functions are triggered on one CPU by an interrupt.  For the alpha
  4100, clock interrupts are delievered in a staggered broadcast fashion, so
  we simply call hardclock/statclock on the boot CPU and call the *_process()
  functions on the secondaries.  For x86, we call statclock and hardclock as
  usual and then call forward_hardclock/statclock in the MD code to send an IPI
  to cause the AP's to execute forward_hardclock/statclock which then call the
  *_process() functions.
- forward_signal() and forward_roundrobin() have been reworked to be MI and to
  involve less hackery.  Now the cpu doing the forward sets any flags, etc. and
  sends a very simple IPI_AST to the other cpu(s).  AST IPIs now just basically
  return so that they can execute ast() and don't bother with setting the
  astpending or needresched flags themselves.  This also removes the loop in
  forward_signal() as sched_lock closes the race condition that the loop worked
  around.
- need_resched(), resched_wanted() and clear_resched() have been changed to take
  a process to act on rather than assuming curproc so that they can be used to
  implement forward_roundrobin() as described above.
- Various other SMP variables have been moved to a MI subr_smp.c and a new
  header sys/smp.h declares MI SMP variables and API's.   The IPI API's from
  machine/ipl.h have moved to machine/smp.h which is included by sys/smp.h.
- The globaldata_register() and globaldata_find() functions as well as the
  SLIST of globaldata structures has become MI and moved into subr_smp.c.
  Also, the globaldata list is only available if SMP support is compiled in.

Reviewed by:	jake, peter
Looked over by:	eivind
2001-04-27 19:28:25 +00:00
jhb
79cf991a6b Convert the allproc and proctree locks from lockmgr locks to sx locks. 2001-03-28 11:52:56 +00:00
jhb
0c490fd02e Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects.  Each lock class specifies a
  name and set of flags (or properties) shared by all locks of a given
  type.  Currently there are three lock classes: spin mutexes, sleep
  mutexes, and sx locks.  A lock object specifies properties of an
  additional lock along with a lock name and all of the extra stuff needed
  to make witness work with a given lock.  This abstract lock stuff is
  defined in sys/lock.h.  The lockmgr constants, types, and prototypes have
  been moved to sys/lockmgr.h.  For temporary backwards compatability,
  sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin
  locks held.  By making this per-cpu, we do not have to jump through
  magic hoops to deal with sched_lock changing ownership during context
  switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
  proc->p_sleeplocks, which is a list of held sleep locks including sleep
  mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging
  level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness.
    This is used for the mutex that blocks a sx lock for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now
    and no events will be logged for this lock, so that one doesn't have
    to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag.  Use this flag to export
  a mtx_initialized() macro that can be safely called from drivers.  Also,
  we no longer walk the all_mtx list if MUTEX_DEBUG is defined as witness
  performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly
  more accurate file and line numbers.
2001-03-28 09:03:24 +00:00
jhb
3ef36efa7c - Proc locking.
- Remove unneeded spl()'s.
2001-03-07 03:01:53 +00:00
jhb
e5173c046a Add a mtx_assert() in maybe_resched() just to be sure it's always called
with sched_lock held.
2001-02-22 13:47:01 +00:00
jhb
a05021f435 - Use the new NOCPU constant.
- Fix a warning.

Noticed by:	bde (2)
2001-02-22 00:32:13 +00:00
jhb
27efeb0d30 - Don't call clear_resched() in userret(), instead, clear the resched flag
in mi_switch() just before calling cpu_switch() so that the first switch
  after a resched request will satisfy the request.
- While I'm at it, move a few things into mi_switch() and out of
  cpu_switch(), specifically set the p_oncpu and p_lastcpu members of
  proc in mi_switch(), and handle the sched_lock state change across a
  context switch in mi_switch().
- Since cpu_switch() no longer handles the sched_lock state change, we
  have to setup an initial state for sched_lock in fork_exit() before we
  release it.
2001-02-20 05:26:15 +00:00
jake
55d5108ac5 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
jake
8b17f4e83c Acquire sched_lock around need_resched() in roundrobin() to satisfy
assertions that it is held.  Since roundrobin() is a timeout there's
no possible way that it could be called with sched_lock held.
2001-02-10 19:07:32 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
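
A before/after illustration of the calling-convention change described
above, using Giant and sched_lock as the example locks:

        /* Old interface: the lock type is passed as a flag. */
        mtx_enter(&Giant, MTX_DEF);
        mtx_exit(&Giant, MTX_DEF);
        mtx_enter(&sched_lock, MTX_SPIN);
        mtx_exit(&sched_lock, MTX_SPIN);

        /* New interface: the call itself names the lock type. */
        mtx_lock(&Giant);
        mtx_unlock(&Giant);
        mtx_lock_spin(&sched_lock);
        mtx_unlock_spin(&sched_lock);

        /* Flags, when still needed, go through the _flags wrappers. */
        mtx_lock_flags(&Giant, MTX_QUIET);
        mtx_unlock_flags(&Giant, MTX_QUIET);
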
peter
6150a50174 Zap last remaining references to (and a use of) simple_locks. 2001-01-31 04:29:52 +00:00
jhb
c0f4ac74f2 - Catch up to proc flag changes.
- Add in some locking ops that might fix SIGXCPU, but don't enable them
  yet.
- Assert that sched_lock is not recursed when mi_switch() is called.
2001-01-24 11:10:55 +00:00
mjacob
b4e8c12b24 Do not do the commenting out the way that saves bytes and looks cleaner
to you. Do it the way Vox Populi wants it.
2001-01-23 16:35:33 +00:00
mjacob
6c5381dc64 Move (now) unused variable declaration inside the block (now commented out). 2001-01-22 22:22:38 +00:00
jhb
5e8c3954d5 Temporarily disable the printf() for microuptime() going backwards, the
SIGXCPU signal, and killing of processes that exceed their allowed run
time until they can play nice with sched_lock.  Right now they are just
potential panics waiting to happen.  The printf() has bitten several
people.
2001-01-20 02:57:59 +00:00
jasone
20a8a23d2b Implement condition variables. 2001-01-16 01:00:43 +00:00
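
A hedged usage sketch of the condition-variable interface introduced here;
foo_cv, foo_mtx and work_ready are made-up names:

        static struct cv  foo_cv;
        static struct mtx foo_mtx;
        static int        work_ready;

        cv_init(&foo_cv, "foowt");

        /* Waiter: the mutex is dropped while asleep, held again on return. */
        mtx_lock(&foo_mtx);
        while (!work_ready)
                cv_wait(&foo_cv, &foo_mtx);
        mtx_unlock(&foo_mtx);

        /* Waker: */
        mtx_lock(&foo_mtx);
        work_ready = 1;
        cv_signal(&foo_cv);
        mtx_unlock(&foo_mtx);
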
jake
4f5d8ed825 Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other then curproc.
2001-01-10 04:43:51 +00:00
jake
a4ad237eaa - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provides macros for the flags
  pased to specify shared, exclusive or release which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
jhb
5a90ca8753 Add in #include of <sys/lock.h> since it was axed from <sys/proc.h>.
Noticed by:	Wesley Morgan <morganw@chemikals.org>
Pointy hat to:	me
2000-12-06 00:33:58 +00:00
jake
7cd1ca1cdc Remove thr_sleep and thr_wakeup. Remove fields p_nthread and p_wakeup
from struct proc, which are now unused (p_nthread already was).
Remove process flag P_KTHREADP which was untested and only set
in vfs_aio.c (it should use kthread_create).  Move the yield
system call to kern_synch.c as kern_threads.c has been removed
completely.

moral support from:	alfred, jhb
2000-12-02 05:41:30 +00:00
jake
95fb73d495 Use an mp-safe callout for endtsleep. 2000-12-01 04:55:52 +00:00
jhb
ab556afac4 Fix up priority propagation:
- Use a better test for determining when a process is running.
- Convert some checks to assertions.
- Remove unnecessary tests.
- Save the priority before acquiring a mutex rather than in msleep(9).
2000-11-30 00:51:16 +00:00
jhb
896b5da519 Don't drop Giant and the passed in mutex incorrectly in the
cold || panicstr case.  Do drop the passed in mutex in that case if
PDROP is specified.
2000-11-29 18:32:50 +00:00
jake
9326f655fc Use callout_reset instead of timeout(9). Most callouts are statically
allocated, 2 have been added to struct proc for setitimer and sleep.

Reviewed by:	jhb, jlemon
2000-11-27 22:52:31 +00:00
jake
0c0be4e826 Protect the following with a lockmgr lock:
allproc
	zombproc
	pidhashtbl
	proc.p_list
	proc.p_hash
	nextpid

Reviewed by:	jhb
Obtained from:	BSD/OS and netbsd
2000-11-22 07:42:04 +00:00
jake
3a97b3e213 - Split the run queue and sleep queue linkage, so that a process
may block on a mutex while on the sleep queue without corrupting
it.
- Move dropping of Giant to after the acquire of sched_lock.

Tested by:	John Hay <jhay@icomtek.csir.co.za>
		jhb
2000-11-17 18:09:18 +00:00
jhb
c0bba69cbe Don't release and acquire Giant in mi_switch(). Instead, release and
acquire Giant as needed in functions that call mi_switch().  The releases
need to be done outside of the sched_lock to avoid potential deadlocks
from trying to acquire Giant while interrupts are disabled.

Submitted by:	witness
2000-11-16 02:16:44 +00:00
jhb
5fa45f43dd Argh, add in a missing release of the sched_lock. 2000-11-16 01:16:54 +00:00
jhb
8b193931b5 CURSIG() calls functions that acquire sleep mutexes, so it is not a good
idea to be holding the sched_lock while we are calling it.  As such,
release sched_lock before calling CURSIG() in msleep() and mawait() and
reacquire it after CURSIG() returns.

Submitted by:	witness
2000-11-16 01:07:19 +00:00
jhb
de636b04e8 - Rename await() to mawait(). mawait() is to await() as msleep() is to
tsleep().  Namely, mawait() takes an extra argument which is a mutex
  to drop when going to sleep.  Just as with msleep(), if the priority
  argument includes the PDROP flag, then the mutex will be dropped and will
  not be reacquired when the process wakes up.
- Add in a backwards compatible macro await() that passes in NULL as the
  mutex argument to mawait().
2000-11-15 22:39:35 +00:00
jhb
0efbfa0260 - Replace a KASSERT() that knew too much about mutex internals with a
mtx_assert() that ensures the mutex we release during msleep() is both
  not recursed and owned by the current process.
2000-11-15 22:30:48 +00:00
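
The replacement assertion described above is presumably of the form:

        mtx_assert(mtx, MA_OWNED | MA_NOTRECURSED);
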
jhb
c70d0c6d5a - Convert references from tsleep() -> msleep()
- Fix a buglet in a comment above await()
2000-11-15 22:27:38 +00:00
jhb
be4bef8719 - GC some #if 0'd code regarding the non-existent safepri variable.
- Don't dink with the witness state of Giant unless we actually own it
  during mi_switch().
2000-10-20 07:52:10 +00:00
jhb
fd275a78bd - Change fast interrupts on x86 to push a full interrupt frame and to
return through doreti to handle ast's.  This is necessary for the
  clock interrupts to work properly.
- Change the clock interrupts on the x86 to be fast instead of threaded.
  This is needed because both hardclock() and statclock() need to run in
  the context of the current process, not in a separate thread context.
- Kill the prevproc hack as it is no longer needed.
- We really need Giant when we call psignal(), but we don't want to block
  during the clock interrupt.  Instead, use two p_flag's in the proc struct
  to mark the current process as having a pending SIGVTALRM or a SIGPROF
  and let them be delivered during ast() when hardclock() has finished
  running.
- Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyways.  It was
  broken on the x86 if it was turned on since cpl is gone.  Its only use
  was to bogusly run softclock() directly during hardclock() rather than
  scheduling an SWI.
- Remove the COM_LOCK simplelock and replace it with a clock_lock spin
  mutex.  Since the spin mutex already handles disabling/restoring
  interrupts appropriately, this also lets us axe all the *_intr() fu.
- Back out the hacks in the APIC_IO x86 cpu_initclocks() code to use
  temporary fast interrupts for the APIC trial.
- Add two new process flags P_ALRMPEND and P_PROFPEND to mark the pending
  signals in hardclock() that are to be delivered in ast().

Submitted by:	jakeb (making statclock safe in a fast interrupt)
Submitted by:	cp (concept of delaying signals until ast())
2000-10-06 02:20:21 +00:00
jhb
98932a243c Add a KASSERT() to catch instances where the mutex that we pass in to
msleep() is recursed.

Suggested by:	cp
2000-09-24 00:33:51 +00:00
jhb
ebc05310ca Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just
use struct mtx, struct witness, and struct witness_blessed.

Requested by:	bde
2000-09-14 20:15:16 +00:00
jake
273d0f5a2d Rename tsleep to msleep and add a mutex argument, which is
released before sleeping and re-acquired before msleep
returns.  A compatibility cpp macro has been provided for
tsleep to avoid changing all occurrences of it in the kernel.

Remove an assertion that the Giant mutex be held before
calling tsleep or asleep.

This is intended to serve the same purpose as condition
variables, but does not preclude their addition in the
future.

Approved by:	jasone
Obtained from:	BSD/OS
2000-09-11 00:20:02 +00:00
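
A sketch of the new interface and the compatibility macro described above;
the argument order shown is that of the later msleep(9) and should be
treated as an assumption for this vintage:

        /*
         * Sleep on chan; mtx is released before sleeping and re-acquired
         * before msleep() returns.
         */
        int msleep(void *chan, struct mtx *mtx, int priority,
            const char *wmesg, int timo);

        /* Compatibility macro so existing tsleep() callers keep building. */
        #define tsleep(chan, pri, wmesg, timo) \
                msleep((chan), NULL, (pri), (wmesg), (timo))
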
dfr
14c49fbe71 Fix printf warnings in CTRx calls. 2000-09-10 13:34:35 +00:00
jasone
769e0f974d Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9).  (Note: The
  alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00
phk
e5de271d47 Previous commit changing SYSCTL_HANDLER_ARGS violated KNF.
Pointed out by:	bde
2000-07-04 11:25:35 +00:00
phk
61ff05be25 Style police catches up with rev 1.26 of src/sys/sys/sysctl.h:
Sanitize SYSCTL_HANDLER_ARGS so that simplistic tools can grok our
sources:

        -sysctl_vm_zone SYSCTL_HANDLER_ARGS
        +sysctl_vm_zone (SYSCTL_HANDLER_ARGS)
2000-07-03 09:35:31 +00:00
jake
961b97d434 Back out the previous change to the queue(3) interface.
It was not discussed and should probably not happen.

Requested by:		msmith and others
2000-05-26 02:09:24 +00:00
jake
d93fbc9916 Change the way that the queue(3) structures are declared; don't assume that
the type argument to *_HEAD and *_ENTRY is a struct.

Suggested by:	phk
Reviewed by:	phk
Approved by:	mdodd
2000-05-23 20:41:01 +00:00
grog
747ab40a69 Correct a couple of typos. 2000-05-07 05:09:45 +00:00
green
74f13b7793 Change the scheduler to actually respect the PUSER barrier. It's been
wrong for many years that negative niceness would lower the priority
of a process below PUSER, and once below PUSER, there were conditionals
in the code that are required to test for whether a process was in
the kernel which would break.

The breakage could (and did) cause lock-ups, basically nothing else
but the least nice program being able to run in some conditions.  The
algorithm which adjusts the priority now subtracts PRIO_MIN to do
things properly, and the ESTCPULIM() algorithm was updated to use
PRIO_TOTAL (PRIO_MAX - PRIO_MIN) to calculate the estcpu.

NICE_WEIGHT is now 1 to accommodate the full range of priorities better
(a -20 process with full CPU time has the priority of a +0 process with
no CPU time).  There are now 20 queues (exactly; 80 priorities) for
use in user processes' scheduling, and PUSER has been lowered to 48
to accomplish this.

This means, to the user, that things will be scheduled more correctly
(noticeable), there is no lock-up anymore WRT a niced -20 process
never releasing the CPU time for other processes.  In this fair system,
tsleep()ed < PUSER processes now will get the proper higher priority
than priority >= PUSER user processes.

The detective work of this was done by me, along with part of the
solution.  Luoqi Chen has provided most of the solution, and really
helped me understand what was happening better, to boot :)

Submitted by:   luoqi
Concept reviewed by:    bde
2000-04-30 18:33:43 +00:00
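
A rough rendering of the adjusted user-priority calculation; PUSER,
NICE_WEIGHT and the PRIO_MIN subtraction come from the commit above, while
INVERSE_ESTCPU_WEIGHT, p_usrpri and the clamping are recalled from memory
and should be treated as assumptions:

        /* resetpriority()-style recalculation after this change. */
        newpriority = PUSER + p->p_estcpu / INVERSE_ESTCPU_WEIGHT +
            NICE_WEIGHT * (p->p_nice - PRIO_MIN);
        newpriority = min(newpriority, MAXPRI);
        p->p_usrpri = newpriority;
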
dillon
b852fcb160 The SMP cleanup commit broke UP compiles. Make UP compiles work again. 2000-03-28 18:06:49 +00:00
dillon
689641c1ea Commit major SMP cleanups and move the BGL (big giant lock) in the
syscall path inward.  A system call may select whether it needs the MP
    lock or not (the default being that it does need it).

    A great deal of conditional SMP code for various deadended experiments
    has been removed.  'cil' and 'cml' have been removed entirely, and the
    locking around the cpl has been removed.  The conditional
    separately-locked fast-interrupt code has been removed, meaning that
    interrupts must hold the CPL now (but they pretty much had to anyway).
    Another reason for doing this is that the original separate-lock for
    interrupts just doesn't apply to the interrupt thread mechanism being
    contemplated.

    Modifications to the cpl may now ONLY occur while holding the MP
    lock.  For example, if an otherwise MP safe syscall needs to mess with
    the cpl, it must hold the MP lock for the duration and must (as usual)
    save/restore the cpl in a nested fashion.

    This is precursor work for the real meat coming later: avoiding having
    to hold the MP lock for common syscalls and I/O's and interrupt threads.
    It is expected that the spl mechanisms and new interrupt threading
    mechanisms will be able to run in tandem, allowing a slow piecemeal
    transition to occur.

    This patch should result in a moderate performance improvement due to
    the considerable amount of code that has been removed from the critical
    path, especially the simplification of the spl*() calls.  The real
    performance gains will come later.

Approved by: jkh
Reviewed by: current, bde (exception.s)
Some work taken from: luoqi's patch
2000-03-28 07:16:37 +00:00
dufault
705c38904d I applied the wrong patch set. Back out anything associated
with the known bogus currtpriority.  This undoes the previous changes to
sys/i386/i386/trap.c, sys/alpha/alpha/trap.c, sys/sys/systm.h

Now we have the patch set approved by bde.

Approved by:	bde
2000-03-02 22:03:49 +00:00
dufault
0bdb67cb26 Patches that eliminate extra context switches in FIFO case.
Fixes p1003_1b regression test in the simple case of no RR and
FIFO processes competing.

Reviewed by:	jkh, bde
2000-03-02 16:20:07 +00:00
peter
031f01d30f Don't make the ktrace hook in tsleep() deref a null curproc after a panic.
PR:		15169
Submitted by:	David Gilbert <dgilbert@velocet.ca>
1999-11-30 09:01:46 +00:00
phk
8da3ba86dc Add a bit of sanity checking and problem avoidance in case the
timecounter hardware is bogus.

This will produce a new warning "microuptime() went backwards"
and try to not screw up the process resource accounting.
1999-11-29 11:29:04 +00:00
bde
1ad19bea4d Scheduler fixes equivalent to the ones logged in the following NetBSD
commit to kern_synch.c:

  ----------------------------
  revision 1.55
  date: 1999/02/23 02:56:03;  author: ross;  state: Exp;  lines: +39 -10
  Scheduler bug fixes and reorganization
  * fix the ancient nice(1) bug, where nice +20 processes incorrectly
    steal 10 - 20% of the CPU, (or even more depending on load average)
  * provide a new schedclk() mechanism at a new clock at schedhz, so high
    platform hz values don't cause nice +0 processes to look like they are
    niced
  * change the algorithm slightly, and reorganize the code a lot
  * fix percent-CPU calculation bugs, and eliminate some no-op code

  === nice bug === Correctly divide the scheduler queues between niced and
  compute-bound processes. The current nice weight of two (sort of, see
  `algorithm change' below) neatly divides the USRPRI queues in half; this
  should have been used to clip p_estcpu, instead of UCHAR_MAX.  Besides
  being the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
  and it was done after decay_cpu() which can only _reduce_ the value.  It
  has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ or processes can
  scheduler-penalize themselves onto the same queue as nice +20 processes.
  (Or even a higher one.)

  === New schedclk() mechanism === Some platforms should be cutting down
  stathz before hitting the scheduler, since the scheduler algorithm only
  works right in the vicinity of 64 Hz. Rather than prescale hz, then scale
  back and forth by 4 every time p_estcpu is touched (each occurrence an
  abstraction violation), use p_estcpu without scaling and require schedhz
  to be generated directly at the right frequency. Use a default stathz (well,
  actually, profhz) / 4, so nothing changes unless a platform defines schedhz
  and a new clock.  Define these for alpha, where hz==1024, and nice was
  totally broke.

  === Algorithm change === The nice value used to be added to the
  exponentially-decayed scheduler history value p_estcpu, in _addition_ to
  be incorporated directly (with greater weight) into the priority calculation.
  At first glance, it appears to be a pointless increase of 1/8 the nice
  effect (pri = p_estcpu/4 + nice*2), but it's actually at least 3x that
  because it will ramp up linearly but be decayed only exponentially, thus
  converging to an additional .75 nice for a loadaverage of one. I killed
  this, it makes the behavior hard to control, almost impossible to analyze,
  and the effect (~nothing at all for the first second, then somewhat increased
  niceness after three seconds or more, depending on load average) pointless.

  === Other bugs === hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
  Collect scheduler functionality. Try to put each abstraction in just one
  place.
  ----------------------------

The details are a little different in FreeBSD:

=== nice bug ===   Fixing this is the main point of this commit.  We use
essentially the same clipping rule as NetBSD (our limit on p_estcpu
differs by a scale factor).  However, clipping at all is fundamentally
bad.  It gives free CPU to the hoggiest hogs once they reach the limit, and
reaching the limit is normal for long-running hogs.  This will be fixed
later.

=== New schedclk() mechanism ===  We don't use the NetBSD schedclk()
(now schedclock()) mechanism.  We require (real)stathz to be about 128
and scale by an extra factor of 2 compared with NetBSD's statclock().
We scale p_estcpu instead of scaling the clock.  This is more accurate
and flexible.

=== Algorithm change ===  Same change.

=== Other bugs ===  The p_pctcpu bug was fixed long ago.  We don't try as
hard to abstract functionality yet.

Related changes: the new limit on p_estcpu must be exported to kern_exit.c
for clipping in wait1().

Agreed with by:		dufault
1999-11-28 12:12:13 +00:00
bde
0f795adedc Updated comments for the move in the previous commit. 1999-11-27 15:27:11 +00:00
bde
4955977bad Moved scheduling-related code to kern_synch.c so that it is easier to fix
and extend.  The new function containing the code is named schedclock()
as in NetBSD, but it has slightly different semantics (it already handles
incrementation of p->p_cpticks, and it should handle any calling frequency).

Agreed with in principle by:	dufault
1999-11-27 12:32:27 +00:00
phk
8fca18de89 This is a partial commit of the patch from PR 14914:
A lot of the code in sys/kern directly accesses the *Q_HEAD and *Q_ENTRY
   structures for list operations.  This patch makes all list operations
   in sys/kern use the queue(3) macros, rather than directly accessing the
   *Q_{HEAD,ENTRY} structures.

This batch of changes compile to the same object files.

Reviewed by:    phk
Submitted by:   Jake Burkholder <jake@checker.org>
PR:     14914
1999-11-16 10:56:05 +00:00
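
An illustration of the mechanical conversion this patch performs, using a
walk of allproc as the example (the loop body is made up):

        struct proc *p;

        /* Before: direct access to the *Q_HEAD/*Q_ENTRY internals. */
        for (p = allproc.lh_first; p != NULL; p = p->p_list.le_next)
                resetpriority(p);               /* illustrative body */

        /* After: the equivalent queue(3) macro. */
        LIST_FOREACH(p, &allproc, p_list)
                resetpriority(p);
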
marcel
d5e8d714b9 sigset_t change (part 2 of 5)
-----------------------------

The core of the signalling code has been rewritten to operate
on the new sigset_t. No methodological changes have been made.
Most references to a sigset_t object are through macros (see
signalvar.h) to create a level of abstraction and to provide
a basis for further improvements.

The NSIG constant has not been changed to reflect the maximum
number of signals possible. The reason is that it breaks
programs (especially shells) which assume that all signals
have a non-null name in sys_signame. See src/bin/sh/trap.c
for an example. Instead _SIG_MAXSIG has been introduced to
hold the maximum signal possible with the new sigset_t.

struct sigprop has been moved from signalvar.h to kern_sig.c
because a) it is only used there, and b) access must be done
though function sigprop(). The latter because the table doesn't
holds properties for all signals, but only for the first NSIG
signals.

signal.h has been reorganized to make reading easier and to
add the new and/or modified structures. The "old" structures
are moved to signalvar.h to prevent namespace pollution.

Especially the coda filesystem suffers from the change, because
it contained lines like (p->p_sigmask == SIGIO), which is easy
to do for integral types, but not for compound types.

NOTE: kdump (and port linux_kdump) must be recompiled.

Thanks to Garrett Wollman and Daniel Eischen for pressing the
importance of changing sigreturn as well.
1999-09-29 15:03:48 +00:00
peter
3b842d34e8 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
peter
eac345f15d Don't initialize run queues here, do it all in one place. 1999-08-19 00:14:43 +00:00
bde
79ceaf95c2 The magic "no-cpu" cpu number is 0xff. Don't misrepresent cpu
numbers as chars or use bogus casts in an attempt to unmisrepresent
them.  In top, don't assume that 0xff is the only negative cpu
number when cpu numbers are (mis)represented.
1999-03-05 16:38:13 +00:00
julian
ee38b91324 The tunable parameter for the scheduler quantum was inverted.
Higher numbers led to smaller quanta.
In discussion with BDE, change this parameter to be in uSecs
to make it machine independent,
and limit it to non-zero multiples of 'tick' (rounding down).
Also make the variable globally available so that the present function that
returns its value (used for posix scheduling I believe) can go away.

Submitted by: Bruce Evans <bde@freebsd.org>
1999-03-03 18:15:29 +00:00
bde
6d8d63664b Removed all traces of `p_switchtime'. The relevant timestamp is per-cpu,
not per-process.  Keep it in `switchtime' consistently.

It is now clear that the timestamp is always valid in fork_trampoline()
except when the child is running on a previously idle cpu, which
can only happen if there are multiple cpus, so don't check or set
the timestamp in fork_trampoline except in the (i386) SMP case.
Just remove the alpha code for setting it unconditionally, since
there is no SMP case for alpha and the code had rotted.

Parts reviewed by:	dfr, phk
1999-02-28 10:53:29 +00:00
bde
d51135c0c3 Improved scheduling in uiomove(), etc. resched_wanted() is true too
often for it to be a good criterion for switching kernel cpu hogs --
it is true after most wakeups.  Use the criterion "has been running
for >= 2 quanta" instead.
1999-02-22 16:57:48 +00:00
eivind
89e1199534 KNFize, by bde. 1999-01-10 01:58:29 +00:00
eivind
a8dc66f457 Split DIAGNOSTIC -> DIAGNOSTIC, INVARIANTS, and INVARIANT_SUPPORT as
discussed on -hackers.

Introduce 'KASSERT(assertion, ("panic message", args))' for simple
check + panic.

Reviewed by:	msmith
1999-01-08 17:31:30 +00:00
dillon
953800406c Add asleep() and await() support. Currently highly experimental. A
small support structure had to be added to the proc structure, and
    a few minor conditional panics no longer apply.
1998-12-21 07:41:51 +00:00
dg
b4bceb0b07 Compare p_cpulimit with RLIM_INFINITY before comparing it with the process
runtime. p_runtime is unsigned while p_cpulimit is not, so this avoids the
nasty side effect of the process getting killed when the runtime comes up
"negative" due to other bugs.
1998-11-27 11:44:22 +00:00
bde
8fdbb5fce3 Fixed the previous fix - stathz doesn't give the statclock frequency
when it is 0.

Submitted by:	mostly by Hidetoshi Shimokawa <simokawa@sat.t.u-tokyo.ac.jp>
1998-11-26 16:49:55 +00:00
bde
043d2a6202 Oops, yet again back out some local changes that shouldn't have been
in the previous commit.
1998-11-26 14:05:58 +00:00
bde
0d3ca540ea Fixed scaling of p_pctcpu. It was wrong by a factor of stathz/hz.
Until recently, this was half compensated for in at least ps and top
by multiplying by 100/stathz to get a better wrong factor of 100/hz.
1998-11-26 14:00:08 +00:00
bde
2fee4bebdc Oops, back out some local changes that shouldn't have been in the
previous commit.
1998-10-25 20:11:36 +00:00