Commit Graph

67 Commits

jeff
2813e83639 - Use a better algorithm in sched_pctcpu_update()
Contributed by:	Thomaswuerfl@gmx.de

 - In sched_prio(), adjust the run queue for threads which may need to move
   to the current queue due to priority propagation.
 - In sched_switch(), fix style bug introduced when the KSE support went in.
   Columns are 80 chars wide, not 90.
 - In sched_switch(), fix the comparison in the idle case and explicitly
   re-initialize the runq in the not propagated case.
 - Remove dead code in sched_clock().
 - In sched_clock(), if we're an IDLE class td, set NEEDRESCHED so that threads
   that have become runnable will get a chance to run.
 - In sched_runnable(), if we're not the IDLETD, we should not consider
   curthread when examining the load.  This mimics the 4BSD behavior of
   returning 0 when the only runnable thread is running.
 - In sched_userret(), remove the code for setting NEEDRESCHED entirely.
   This is not necessary and is not implemented in 4BSD.
 - Use the correct comparison in sched_add() when checking to see if an idle
   prio task has had its priority temporarily elevated.
2003-10-27 06:47:05 +00:00
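
The sched_runnable() item in the entry above is easier to see in code. This is a minimal sketch of the idea, not the committed diff; KSEQ_SELF() and ksq_load are written from the naming conventions used elsewhere in this log and may differ from the real sched_ule.c.

static int
sched_runnable(void)
{
        struct kseq *kseq = KSEQ_SELF();        /* assumed per-cpu kseq accessor */
        int load;

        load = kseq->ksq_load;
        /*
         * Ignore curthread unless we are the idle thread: a lone running
         * thread means there is nothing else to switch to, matching the
         * 4BSD behaviour of returning 0 in that case.
         */
        if (curthread != PCPU_GET(idlethread))
                load--;
        return (load > 0);
}
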
jeff
d191470e5b - If a thread is not bound to a kse return 0 from sched_pctcpu().
Reported by:	 pawel.worach@nordea.com
2003-10-20 19:55:21 +00:00
jeff
43fc476731 - Only kse_reassign() in the !running case.
Reported by:	kris
2003-10-16 20:32:57 +00:00
jeff
bd49f43790 - Call sched_add() with the correct argument on SMP.
Reported by:	Valentin Chopov <valentin@valcho.net>
2003-10-16 20:06:19 +00:00
jeff
ae9dcb1e7c - Fix a minor problem with my last commit: we don't want to return from
sched_switch if the thread is running; we want to fall through and pick
   a new thread because we have been preempted.
2003-10-16 10:04:54 +00:00
jeff
c7b38e5072 - Collapse sched_switchin() and sched_switchout() into sched_switch(). Now
mi_switch() calls sched_switch() which calls cpu_switch().  This is
   actually one less function call than it had been.
2003-10-16 08:53:46 +00:00
jeff
0e11a68932 - Update the sched api. sched_{add,rem,clock,pctcpu} now all accept a td
argument rather than a kse.
2003-10-16 08:39:15 +00:00
jeff
da51f9561a - The non-iterative algorithm for interact_update was broken due to
rounding errors.  This was the source of the majority of the
   interactivity problems.  Reintroduce the old algorithm and its XXX.
 - Up the interactivity threshold to 30.  It really could stand to be even
   a tiny bit higher.
 - Let the sleep and run time accumulate up to 5 seconds of history rather
   than two.  This helps stop XFree86 from becoming non-interactive during
   bursts of activity.
2003-10-16 08:17:43 +00:00
jeff
7c8fe7f222 - If our user_pri doesn't match our actual priority, our priority has been
elevated either due to priority propagation or because we're in the
   kernel.  In either case, put us on the current queue so that we don't
   stop others from using important resources.  At some point the priority
   elevations from sleeping in the kernel should go away.
 - Remove an optimization in sched_userret().  Before we would only set
   NEEDRESCHED if there was something of a higher priority available.  This
   is a trivial optimization and it breaks priority propagation because it
   doesn't take threads which we may be blocking into account.  Notice that
   the thread which is blocking others gets up to one tick of cpu time before
   we honor this NEEDRESCHED in sched_clock().
2003-10-15 07:47:06 +00:00
jeff
a483e85d3a - In SCHED_CURR() add holding Giant to the list of criteria that will keep
you on the current queue.  In the future, it would be nice if priority
   propagation could deterministically pluck a thread off of the next queue
   and put it on the current queue.  Until then this hack stops us from
   holding up our entire current queue, including interrupt handlers, while
   a thread on the next queue is blocked while holding Giant.
 - Inherit our pctcpu information from our parent.
2003-10-12 21:07:31 +00:00
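
A hedged sketch of the queue-selection rule this entry describes; the real SCHED_CURR() is a macro and differs in detail, thread_owns_giant() is a hypothetical stand-in for however the Giant check is actually expressed, and the interactivity test names are assumed from nearby entries.

static int
sched_curr_queue(struct ksegrp *kg, struct kse *ke)
{
        struct thread *td = ke->ke_thread;

        /* Elevated priority (propagation or kernel sleep) stays current. */
        if (td->td_priority != kg->kg_user_pri)
                return (1);
        /* New with this commit: a Giant holder must not sit on 'next'. */
        if (thread_owns_giant(td))              /* hypothetical helper */
                return (1);
        /* Otherwise only interactive ksegrps go on the current queue. */
        return (sched_interact_score(kg) < SCHED_INTERACT_THRESH);
}
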
jeff
7450fbaf7c - Change a lame iterative algorithm to a constant time algorithm. Remove
the XXX that complains about it as well.

Submitted by:	ThomasWuerfl@gmx.de
2003-10-04 17:41:13 +00:00
jeff
6d14292e55 - Somewhere along the line I stupidly removed critical logic from
sched_pctcpu_update().  This caused erroneous cpu times in TOP for
   processes that were asleep.  Replace the code that was removed.
2003-09-20 02:05:58 +00:00
davidxu
f840df8e8a Let SA processes work under the ULE scheduler; previously this would panic the kernel.
Reviewed by: jeff
2003-08-26 11:33:15 +00:00
sam
77a4adf88f Change instances of callout_init that specify MPSAFE behaviour to
use CALLOUT_MPSAFE instead of "1" for the second parameter.  This
does not change the behaviour; it just makes the intent more clear.
2003-08-19 17:51:11 +00:00
jeff
3214436a8e - When stealing a kse in kseq_move() ignore the current kseq's min nice
value.  We want to steal any thread, even one that is not given a slice
   on its current queue.
2003-07-08 06:19:40 +00:00
jeff
530634290e - Clean up an unused variable.
Submitted by:	Steve Kargl <sgk@troutmask.apl.washington.edu>
2003-07-07 21:08:28 +00:00
jeff
9cf0862530 - Parse the cpu topology map in sched_setup().
- Associate logical CPUs on the same physical core with the same kseq.
 - Adjust code that assumed there would only be one running thread in any
   kseq.
 - Wrap the HTT code with a ULE_HTT_EXPERIMENTAL ifdef.  This is a start
   towards HyperThreading support but it isn't quite there yet.
2003-07-04 19:59:00 +00:00
jeff
d6b1383aa5 - Don't migrate to stopped cpus. 2003-06-28 09:09:33 +00:00
jeff
2ead0e377a - If smp is not started yet, don't try to load balance or we'll put threads
on cpus that aren't running yet.
2003-06-28 08:24:42 +00:00
jeff
f2d12a9a0f - Throttle the inherited sleep and run time in sched_fork_kseg(). This
allows us to learn the behavior of a thread much more quickly after it
   starts up.
2003-06-28 06:19:56 +00:00
jeff
de4acd13c4 - Adjust the default maximum slice value to ~140ms. This has improved the
nice distribution without significantly impacting interactive response.
   As a side effect it should also allow batch processes to run for a
   slightly longer period which will positively impact their performance.
2003-06-28 06:04:47 +00:00
jeff
ef73855fa0 - lticks was erroneously being updated in sched_pctcpu(). This was causing
us to skip the pctcpu_update() call which led to inaccurate cpu usage
   statistics for processes that didn't run often.
2003-06-21 02:31:49 +00:00
jeff
66f14a8109 - Don't allow nice to have such a large effect on priority. This was
causing poor interactive performance while unnice processes were running.
   The new scheme still allows nice to have an effect on priority but it is
   not as dramatic as the effect of the interactivity score.
2003-06-21 02:22:47 +00:00
jeff
ae5a61d3ab - Use a more robust mechanism for determining whether or not a kse is on a
kseq.
2003-06-17 19:49:18 +00:00
jeff
ef2a619ae8 - Temporarily patch a problem where the interact score could be negative
because the run time exceeds the largest value a signed int can hold.
   The real solution involves calculating how far we are over the limit.
   To quickly solve this problem we loop removing 1/5th of the current value
   until it falls below the limit.  The common case requires no passes.
2003-06-17 10:21:34 +00:00
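
The stop-gap in this entry is small enough to show directly. A sketch under assumed names (kg_runtime/kg_slptime for the accumulated history, SCHED_SLP_RUN_MAX for the limit), not the committed diff:

        /*
         * Shave 1/5th off the history until it is back under the cap.
         * In the common case the sum is already below the limit and the
         * loop body never executes.
         */
        while (kg->kg_runtime + kg->kg_slptime > SCHED_SLP_RUN_MAX) {
                kg->kg_runtime -= kg->kg_runtime / 5;
                kg->kg_slptime -= kg->kg_slptime / 5;
        }
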
jeff
81d8e072f0 - Add a new function "sched_interact_update()" that scales back the sleep
and run time.
 - Scale the sleep and run time back via sched_interact_update() in more
   places.  This is to keep the statistic more accurate.
 - Charge a parent one tick for forking a child.
 - Add only the run time and not the sleep time to the parent's kg when a
   thread exits.  This allows us to give a penalty for having an expensive
   thread exit but does not give a bonus for having an interactive thread
   exit.
 - Change the SLP_RUN_THROTTLE to limit us to 4/5th and not 1/2.
 - Change the SLP_RUN_MAX to two seconds.  This keeps bursty interactive
   applications like mozilla and openoffice in the interactive range even
   through expensive tasks.
 - Recalculate the slice after every sleep.  This ensures that once a task
   has been marked interactive it only has a slice of 1 at the risk of
   giving tasks that sleep for a very brief period a longer time slice.
2003-06-17 06:39:51 +00:00
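
A minimal sketch of the 4/5 scaling described in this entry; the field and constant names (kg_slptime/kg_runtime, SCHED_SLP_RUN_MAX at two seconds of tick history) are assumptions, and the committed sched_interact_update() has more callers and bookkeeping.

static void
sched_interact_update(struct ksegrp *kg)
{
        /*
         * Once the combined history passes SLP_RUN_MAX, scale both parts
         * back to 4/5 so old behaviour decays and recent behaviour
         * dominates the interactivity score.
         */
        if (kg->kg_slptime + kg->kg_runtime > SCHED_SLP_RUN_MAX) {
                kg->kg_slptime = (kg->kg_slptime / 5) * 4;
                kg->kg_runtime = (kg->kg_runtime / 5) * 4;
        }
}
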
jeff
7b6397c86b - Increase the ksegrp's cpu time history buffer to 250ms.
- Decrease the history buffer divisor to 2 so that we remember more of the
   old behavior.
2003-06-15 04:14:25 +00:00
jeff
90c6fbd963 - Cap the growth of sleep and run time in sched_exit_kse(). 2003-06-15 02:52:29 +00:00
jeff
f3725f3c00 - Fix the maximum slice value. I accidentally checked in a value of '2'
which meant no process would run for longer than 20ms.
 - Slightly redo the interactivity scorer.  It follows the same algorithm but
   in a slightly more correct way.  Previously values above half were
   incorrect.
 - Lower the interactivity threshold to 20.  It seems that in testing non-
   interactive tasks are hardly ever near there and expensive interactive
   tasks can sometimes surpass it.  This area needs more testing.
 - Remove an unnecessary KTR.
 - Fix a case where an idle thread that had an elevated priority due to
   priority prop. would be placed back on the idle queue.
 - Delay setting NEEDRESCHED until userret() for threads that had their
   priority elevated while in kernel.  This gives us the same context switch
   optimization as SCHED_4BSD.
 - Limit the child's slice to 1 in sched_fork_kse() so we detect its behavior
   more quickly.
 - Inherit some of the run/slp time from the child in sched_exit_ksegrp().
 - Redo some of the priority comparisons so they are more clear.
 - Throttle the frequency of sched_pctcpu_update() so that rounding errors
   do not make it invalid.
2003-06-15 02:18:29 +00:00
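
Several entries here adjust the interactivity scorer and its threshold. The following is an illustrative version of the scoring shape (sleep-heavy ksegrps score below half, run-heavy above), with names assumed; it is not the committed function.

static int
sched_interact_score(struct ksegrp *kg)
{
        int div;

        if (kg->kg_runtime > kg->kg_slptime) {
                /* Mostly running: score lands in the upper half. */
                div = imax(1, kg->kg_runtime / SCHED_INTERACT_HALF);
                return (SCHED_INTERACT_HALF +
                    (SCHED_INTERACT_HALF - kg->kg_slptime / div));
        }
        if (kg->kg_slptime > kg->kg_runtime) {
                /* Mostly sleeping: score lands in the lower half. */
                div = imax(1, kg->kg_slptime / SCHED_INTERACT_HALF);
                return (kg->kg_runtime / div);
        }
        return (SCHED_INTERACT_HALF);   /* equal sleep and run time */
}

With a curve of this shape, the threshold changes noted in this log (lowered to 20 here, raised to 30 in the 2003-10-16 entry) only move the interactive cut-off along the scale, not the scoring itself.
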
davidxu
95b64acdb5 Rename P_THREADED to P_SA. P_SA means a process is using scheduler
activations.
2003-06-15 00:31:24 +00:00
obrien
97ddbdc5a1 Use __FBSDID(). 2003-06-11 00:56:59 +00:00
jeff
db4a0ae4f4 - Add a simple CPU load balancing algorithm. This works by executing once a
second and equalizing the load between the two most imbalanced CPUs.  This
   is intended to clear up long term load imbalances that would not be handled
   by the 'pull' method in sched_choose().
 - Pull out some bits of sched_choose() into a kseq_move() function that moves
   an arbitrary thread from one kseq to another.
2003-06-09 00:39:09 +00:00
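
The balancer in this entry is simple enough to sketch. This is not the committed code: KSEQ_CPU(), ksq_load, the kseq_move() signature, and the callout name are all assumptions for illustration.

static struct callout sched_balance_callout;    /* assumed name */

static void
sched_balance(void *arg)
{
        struct kseq *high, *low, *kseq;
        int i;

        /* Find the most and least loaded kseqs. */
        high = low = KSEQ_CPU(0);
        for (i = 1; i < mp_ncpus; i++) {
                kseq = KSEQ_CPU(i);
                if (kseq->ksq_load > high->ksq_load)
                        high = kseq;
                if (kseq->ksq_load < low->ksq_load)
                        low = kseq;
        }
        /* Only bother if they differ by more than the thread we'd move. */
        if (high->ksq_load > low->ksq_load + 1)
                kseq_move(high, low);
        /* Run again in roughly one second. */
        callout_reset(&sched_balance_callout, hz, sched_balance, NULL);
}
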
jeff
c7be57cd7e - When a new thread is added to a kseq the load is incremented prior to
adding it to the nice tables.  Therefore, in kseq_add_nice, we should
   keep in mind that the load will be 1 if we are the only thread, and not
   0.
 - Assert that the sched lock is held in all the appropriate places.
 - Increase the scope of the sched lock in sched_pctcpu_update().
 - Hold the sched lock in sched_runnable().  It is not held by the caller.
2003-06-08 00:47:33 +00:00
julian
654d95ff42 Fix typo in last commit 2003-05-02 06:18:55 +00:00
julian
bbccc3961f Move the flag that indicates an idle thread from the KSE to the thread.
It was always referenced via the thread anyhow.

Reviewed by:	jhb (a LOOOOONG time ago)
2003-05-02 00:33:12 +00:00
jhb
15222a2a29 Add lock assertions for various proc/thread/kse/ksegroup fields to the
scheduler functions.
2003-04-23 18:51:05 +00:00
jhb
0bb36edd62 - Assert that the proc lock and sched_lock are held in sched_nice().
- For the 4BSD scheduler, this means that all callers of the static
  function resetpriority() now always hold sched_lock, so don't lock
  sched_lock explicitly in that function.
2003-04-22 20:50:38 +00:00
jhb
2ecb0ccd7c Protect p_swtime with the sched_lock. 2003-04-22 19:48:25 +00:00
jeff
66f457b961 - Set the ke_cpu field in sched_add() for interrupt and realtime threads
since they are going on the current cpu and not their previously assigned
   cpu.
 - sched_runnable() should only return true in the SMP case if the other
   processor has more than one thread that is runnable.  We can not steal
   curthread.
 - Change kseq_print() to accept the cpuid instead of a kseq pointer.  This
   makes use of this function in ddb much easier.
2003-04-18 05:24:10 +00:00
jeff
e8b4ae9cf6 - Unbreak priority prop. for timeshare threads. Always place something on
the current queue if its priority is really elevated.  This needs more work
   as there are cases where a next queue kse could be holding up what would
   be a curr queue kse, and thus hurting interactivity.  Also, when a thread
   with an elevated priority has its priority lowered it should be placed
   back on the next queue.
2003-04-12 22:33:24 +00:00
jeff
880e924e79 - Clean up some debug code left over from my earlier megacommit. 2003-04-12 07:28:36 +00:00
jeff
1c8ce22e5d - We only care about the base priority. Ignore the SCHED_FIFO_BIT so that
we don't get confused.

Reported and debugged by:	Steve Kargl <sgk@troutmask.apl.washington.edu>
2003-04-12 07:00:16 +00:00
jeff
c83a39ed5a - Add sched_exit_*
- Call sched_exit_kse() from sched_exit() instead of implementing it here.
2003-04-11 19:24:00 +00:00
jeff
d1510a551a - Only select kseqs with more than one kse to steal. The running kse
is reflected in the load now and you can't very well migrate that.
2003-04-11 18:40:34 +00:00
jeff
da4a67a7ac - When migrating a kse from one kseq to the next actually insert it onto
the second kseq's run queue so that it is referenced by the kse when
   it is switched out.
 - Spell ksq_rslices properly.

Reported by:	Ian Freislich <ianf@za.uu.net>
2003-04-11 18:37:34 +00:00
jeff
1ea1f7c6ec - Add a SYSCTL node for the ule scheduler.
- Allow user adjustable min and max time slices (suggested by hiten).
 - Change the SLP_RUN_MAX to 100ms from 2 seconds so that we learn whether a
   process is interactive or not much more quickly.
 - Place a process on the current run queue if it is interactive or if it is
   running at an interrupt thread priority due to priority prop.
 - Use the 'current' timeshare queue for interrupt threads, realtime threads,
   and idle threads that are running at higher priority due to priority prop.
   This fixes problems where priorities would have been elevated but we would
   not check the timeshare run queue until other lower priority tasks were
   no longer runnable.
 - Keep an array of loads indexed by the priority class as well as a global
   load.
 - Keep a bucket of nice values with a count of the number of kses currently
   runnable with that nice value.
 - Keep track of the minimum nice value of any running thread.
 - Remove the unused short term sleep accounting.  I was attempting to use
   this for load balancing but it didn't work out.
 - Define a kseq_print() for use with debugging.
 - Add KTR debugging at useful places so we can easily debug slice and
   priority assignment.
 - Decouple the runq assignment from the kseq assignment.  kseq_add now keeps
   track of statistics.  This is done so that the nice value and load are
   still tracked for the currently running process.  Previously, if a niced
   process was added while a non-niced process was running, the niced process
   would still get a slice since it was not aware of the un-niced process.
 - Make adjustments for the sched api changes.
2003-04-11 03:47:14 +00:00
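
The nice bookkeeping described above (one count per nice value plus a running minimum) can be sketched as follows; the field names ksq_nice[] and ksq_nicemin are assumptions based on this entry, not the committed layout.

static void
kseq_nice_add(struct kseq *kseq, int nice)
{
        /* Nice ranges over PRIO_MIN..PRIO_MAX (-20..20); bias it to an index. */
        kseq->ksq_nice[nice - PRIO_MIN]++;
        /*
         * Track the minimum (lowest numeric) nice value among runnable
         * kses so slice assignment can compare against it without walking
         * the queues.  The 2003-06-08 entry above notes the load is
         * already 1 here when we are the only thread.
         */
        if (nice < kseq->ksq_nicemin || kseq->ksq_load == 1)
                kseq->ksq_nicemin = nice;
}
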
julian
bb19b2619b Move the _oncpu entry from the KSE to the thread.
The entry in the KSE still exists but its purpose will change a bit
when we add the ability to lock a KSE to a cpu.
2003-04-10 17:35:44 +00:00
jeff
b4478d69bd - Keep separate statistics and run queues for different scheduling classes.
- Treat each class specially in kseq_{choose,add,rem}.  Let the rest of the
   code be less aware of scheduling classes.
 - Skip the interactivity calculation for non-TIMESHARE ksegrps.
 - Move slice and runq selection into kseq_add().  Uninline it now that it's
   big.
2003-04-03 00:29:28 +00:00
jeff
78355bc82c - Make the interactivity calculator decay faster.
- Make the pcpu estimator update faster.
2003-04-02 08:22:33 +00:00
jeff
73a954744a - I meant divide by two and not shift by two in SCHED_PRI_NHALF. 2003-04-02 08:21:24 +00:00
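
The one-line message compresses the actual bug; spelled out with an illustrative value (the real SCHED_PRI_NRESV may differ), the difference is:

#define SCHED_PRI_NRESV         40                              /* illustrative */
#define SCHED_PRI_NHALF_BAD     (SCHED_PRI_NRESV >> 2)          /* 10: ">> 2" shifts by two, dividing by four */
#define SCHED_PRI_NHALF         (SCHED_PRI_NRESV / 2)           /* 20: the intended half */
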