/*-
 * Copyright (c) 1982, 1986, 1990, 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_hwpmc_hooks.h"

#define kse td_sched

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/kthread.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/resourcevar.h>
#include <sys/sched.h>
#include <sys/smp.h>
#include <sys/sysctl.h>
#include <sys/sx.h>
#include <sys/turnstile.h>
#include <machine/pcb.h>
#include <machine/smp.h>

#ifdef HWPMC_HOOKS
#include <sys/pmckern.h>
#endif

/*
 * INVERSE_ESTCPU_WEIGHT is only suitable for statclock() frequencies in
 * the range 100-256 Hz (approximately).
 */
#define ESTCPULIM(e) \
    min((e), INVERSE_ESTCPU_WEIGHT * (NICE_WEIGHT * (PRIO_MAX - PRIO_MIN) - \
    RQ_PPQ) + INVERSE_ESTCPU_WEIGHT - 1)
#ifdef SMP
#define INVERSE_ESTCPU_WEIGHT	(8 * smp_cpus)
#else
#define INVERSE_ESTCPU_WEIGHT	8	/* 1 / (priorities per estcpu level). */
#endif
#define NICE_WEIGHT 1 /* Priorities per nice level. */
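
/*
 * Worked example (added for illustration, not in the original file): with
 * the UP value INVERSE_ESTCPU_WEIGHT == 8, NICE_WEIGHT == 1, and assuming
 * the stock PRIO_MAX - PRIO_MIN == 40 and RQ_PPQ == 4, ESTCPULIM() clamps
 * kg_estcpu to 8 * (1 * 40 - 4) + 8 - 1 = 295, which bounds the estcpu
 * term that resetpriority() adds to the user priority.
 */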

/*
 * The schedulable entity that can be given a context to run.
 * A process may have several of these.  Probably one per processor
 * but possibly a few more. In this universe they are grouped
 * with a KSEG that contains the priority and niceness
 * for the group.
 */
struct kse {
    TAILQ_ENTRY(kse) ke_procq;	/* (j/z) Run queue. */
    struct thread	*ke_thread;	/* (*) Active associated thread. */
    fixpt_t		ke_pctcpu;	/* (j) %cpu during p_swtime. */
    u_char		ke_rqindex;	/* (j) Run queue index. */
    enum {
        KES_THREAD = 0x0,	/* slaved to thread state */
        KES_ONRUNQ
    } ke_state;			/* (j) KSE status. */
    int		ke_cpticks;	/* (j) Ticks of cpu time. */
    struct runq	*ke_runq;	/* runq the kse is currently on */
};

#define ke_proc		ke_thread->td_proc
#define ke_ksegrp	ke_thread->td_ksegrp

#define td_kse td_sched

/* flags kept in td_flags */
#define TDF_DIDRUN	TDF_SCHED0	/* KSE actually ran. */
#define TDF_EXIT	TDF_SCHED1	/* KSE is being killed. */
#define TDF_BOUND	TDF_SCHED2

#define ke_flags	ke_thread->td_flags
#define KEF_DIDRUN	TDF_DIDRUN	/* KSE actually ran. */
#define KEF_EXIT	TDF_EXIT	/* KSE is being killed. */
#define KEF_BOUND	TDF_BOUND	/* stuck to one CPU */

#define SKE_RUNQ_PCPU(ke) \
    ((ke)->ke_runq != 0 && (ke)->ke_runq != &runq)

struct kg_sched {
    struct thread	*skg_last_assigned; /* (j) Last thread assigned to */
					    /* the system scheduler. */
    int	skg_avail_opennings;	/* (j) Num KSEs requested in group. */
    int	skg_concurrency;	/* (j) Num KSEs requested in group. */
};
#define kg_last_assigned	kg_sched->skg_last_assigned
#define kg_avail_opennings	kg_sched->skg_avail_opennings
#define kg_concurrency		kg_sched->skg_concurrency

#define SLOT_RELEASE(kg) \
do { \
    kg->kg_avail_opennings++; \
    CTR3(KTR_RUNQ, "kg %p(%d) Slot released (->%d)", \
        kg, \
        kg->kg_concurrency, \
        kg->kg_avail_opennings); \
/*  KASSERT((kg->kg_avail_opennings <= kg->kg_concurrency), \
        ("slots out of whack"));*/ \
} while (0)

#define SLOT_USE(kg) \
do { \
    kg->kg_avail_opennings--; \
    CTR3(KTR_RUNQ, "kg %p(%d) Slot used (->%d)", \
        kg, \
        kg->kg_concurrency, \
        kg->kg_avail_opennings); \
/*  KASSERT((kg->kg_avail_opennings >= 0), \
        ("slots out of whack"));*/ \
} while (0)

/*
 * KSE_CAN_MIGRATE macro returns true if the kse can migrate between
 * cpus.
 */
#define KSE_CAN_MIGRATE(ke) \
    ((ke)->ke_thread->td_pinned == 0 && ((ke)->ke_flags & KEF_BOUND) == 0)

static struct kse kse0;
static struct kg_sched kg_sched0;

static int	sched_tdcnt;	/* Total runnable threads in the system. */
static int	sched_quantum;	/* Roundrobin scheduling quantum in ticks. */
#define SCHED_QUANTUM (hz / 10) /* Default sched quantum */
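
/*
 * Illustrative note (not from the original file): with hz == 1000, for
 * example, the default quantum is hz / 10 == 100 ticks, i.e. equal-priority
 * threads are rotated roughly every 100 ms; the kern.sched.quantum sysctl
 * below reports this value converted to microseconds (sched_quantum * tick).
 */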

static struct callout roundrobin_callout;

static void slot_fill(struct ksegrp *kg);
static struct kse *sched_choose(void);	/* XXX Should be thread * */

static void setup_runqs(void);
static void roundrobin(void *arg);
static void schedcpu(void);
static void schedcpu_thread(void);
static void sched_priority(struct thread *td, u_char prio);
static void sched_setup(void *dummy);
static void maybe_resched(struct thread *td);
static void updatepri(struct ksegrp *kg);
static void resetpriority(struct ksegrp *kg);
static void resetpriority_thread(struct thread *td, struct ksegrp *kg);

#ifdef SMP
static int forward_wakeup(int cpunum);
#endif

static struct kproc_desc sched_kp = {
    "schedcpu",
    schedcpu_thread,
    NULL
};
SYSINIT(schedcpu, SI_SUB_RUN_SCHEDULER, SI_ORDER_FIRST, kproc_start, &sched_kp)
SYSINIT(sched_setup, SI_SUB_RUN_QUEUE, SI_ORDER_FIRST, sched_setup, NULL)

/*
 * Global run queue.
 */
static struct runq runq;

#ifdef SMP
/*
 * Per-CPU run queues
 */
static struct runq runq_pcpu[MAXCPU];
#endif

static void
setup_runqs(void)
{
#ifdef SMP
    int i;

    for (i = 0; i < MAXCPU; ++i)
        runq_init(&runq_pcpu[i]);
#endif

    runq_init(&runq);
}

static int
sysctl_kern_quantum(SYSCTL_HANDLER_ARGS)
{
    int error, new_val;

    new_val = sched_quantum * tick;
    error = sysctl_handle_int(oidp, &new_val, 0, req);
    if (error != 0 || req->newptr == NULL)
        return (error);
    if (new_val < tick)
        return (EINVAL);
    sched_quantum = new_val / tick;
    hogticks = 2 * sched_quantum;
    return (0);
}

SYSCTL_NODE(_kern, OID_AUTO, sched, CTLFLAG_RD, 0, "Scheduler");

SYSCTL_STRING(_kern_sched, OID_AUTO, name, CTLFLAG_RD, "4BSD", 0,
    "Scheduler name");

SYSCTL_PROC(_kern_sched, OID_AUTO, quantum, CTLTYPE_INT | CTLFLAG_RW,
    0, sizeof sched_quantum, sysctl_kern_quantum, "I",
    "Roundrobin scheduling quantum in microseconds");

#ifdef SMP
/* Enable forwarding of wakeups to all other cpus */
SYSCTL_NODE(_kern_sched, OID_AUTO, ipiwakeup, CTLFLAG_RD, NULL, "Kernel SMP");

static int forward_wakeup_enabled = 1;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, enabled, CTLFLAG_RW,
    &forward_wakeup_enabled, 0,
    "Forwarding of wakeup to idle CPUs");

static int forward_wakeups_requested = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, requested, CTLFLAG_RD,
    &forward_wakeups_requested, 0,
    "Requests for Forwarding of wakeup to idle CPUs");

static int forward_wakeups_delivered = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, delivered, CTLFLAG_RD,
    &forward_wakeups_delivered, 0,
    "Completed Forwarding of wakeup to idle CPUs");

static int forward_wakeup_use_mask = 1;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, usemask, CTLFLAG_RW,
    &forward_wakeup_use_mask, 0,
    "Use the mask of idle cpus");

static int forward_wakeup_use_loop = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, useloop, CTLFLAG_RW,
    &forward_wakeup_use_loop, 0,
    "Use a loop to find idle cpus");

static int forward_wakeup_use_single = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, onecpu, CTLFLAG_RW,
    &forward_wakeup_use_single, 0,
    "Only signal one idle cpu");

static int forward_wakeup_use_htt = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, htt2, CTLFLAG_RW,
    &forward_wakeup_use_htt, 0,
    "account for htt");

#endif

static int sched_followon = 0;
SYSCTL_INT(_kern_sched, OID_AUTO, followon, CTLFLAG_RW,
    &sched_followon, 0,
    "allow threads to share a quantum");

static int sched_pfollowons = 0;
SYSCTL_INT(_kern_sched, OID_AUTO, pfollowons, CTLFLAG_RD,
    &sched_pfollowons, 0,
    "number of followons done to a different ksegrp");

static int sched_kgfollowons = 0;
SYSCTL_INT(_kern_sched, OID_AUTO, kgfollowons, CTLFLAG_RD,
    &sched_kgfollowons, 0,
    "number of followons done in a ksegrp");

static __inline void
sched_load_add(void)
{
    sched_tdcnt++;
    CTR1(KTR_SCHED, "global load: %d", sched_tdcnt);
}

static __inline void
sched_load_rem(void)
{
    sched_tdcnt--;
    CTR1(KTR_SCHED, "global load: %d", sched_tdcnt);
}

/*
 * Arrange to reschedule if necessary, taking the priorities and
 * schedulers into account.
 */
static void
maybe_resched(struct thread *td)
{

    mtx_assert(&sched_lock, MA_OWNED);
    if (td->td_priority < curthread->td_priority)
        curthread->td_flags |= TDF_NEEDRESCHED;
}

/*
 * Force switch among equal priority processes every 100ms.
 * We don't actually need to force a context switch of the current process.
 * The act of firing the event triggers a context switch to softclock() and
 * then switching back out again which is equivalent to a preemption, thus
 * no further work is needed on the local CPU.
 */
/* ARGSUSED */
static void
roundrobin(void *arg)
{

#ifdef SMP
    mtx_lock_spin(&sched_lock);
    forward_roundrobin();
    mtx_unlock_spin(&sched_lock);
#endif

    callout_reset(&roundrobin_callout, sched_quantum, roundrobin, NULL);
}

/*
 * Constants for digital decay and forget:
 *	90% of (kg_estcpu) usage in 5 * loadav time
 *	95% of (ke_pctcpu) usage in 60 seconds (load insensitive)
 *          Note that, as ps(1) mentions, this can let percentages
 *          total over 100% (I've seen 137.9% for 3 processes).
 *
 * Note that schedclock() updates kg_estcpu and p_cpticks asynchronously.
 *
 * We wish to decay away 90% of kg_estcpu in (5 * loadavg) seconds.
 * That is, the system wants to compute a value of decay such
 * that the following for loop:
 * 	for (i = 0; i < (5 * loadavg); i++)
 * 		kg_estcpu *= decay;
 * will compute
 * 	kg_estcpu *= 0.1;
 * for all values of loadavg:
 *
 * Mathematically this loop can be expressed by saying:
 * 	decay ** (5 * loadavg) ~= .1
 *
 * The system computes decay as:
 * 	decay = (2 * loadavg) / (2 * loadavg + 1)
 *
 * We wish to prove that the system's computation of decay
 * will always fulfill the equation:
 * 	decay ** (5 * loadavg) ~= .1
 *
 * If we compute b as:
 * 	b = 2 * loadavg
 * then
 * 	decay = b / (b + 1)
 *
 * We now need to prove two things:
 *	1) Given factor ** (5 * loadavg) ~= .1, prove factor == b/(b+1)
 *	2) Given b/(b+1) ** power ~= .1, prove power == (5 * loadavg)
 *
 * Facts:
 *         For x close to zero, exp(x) =~ 1 + x, since
 *                 exp(x) = 0! + x**1/1! + x**2/2! + ... .
 *                 therefore exp(-1/b) =~ 1 - (1/b) = (b-1)/b.
 *         For x close to zero, ln(1+x) =~ x, since
 *                 ln(1+x) = x - x**2/2 + x**3/3 - ...     -1 < x < 1
 *                 therefore ln(b/(b+1)) = ln(1 - 1/(b+1)) =~ -1/(b+1).
 *         ln(.1) =~ -2.30
 *
 * Proof of (1):
 *    Solve (factor)**(power) =~ .1 given power (5*loadav):
 *	solving for factor,
 *      ln(factor) =~ (-2.30/5*loadav), or
 *      factor =~ exp(-1/((5/2.30)*loadav)) =~ exp(-1/(2*loadav)) =
 *          exp(-1/b) =~ (b-1)/b =~ b/(b+1).                    QED
 *
 * Proof of (2):
 *    Solve (factor)**(power) =~ .1 given factor == (b/(b+1)):
 *	solving for power,
 *      power*ln(b/(b+1)) =~ -2.30, or
 *      power =~ 2.3 * (b + 1) = 4.6*loadav + 2.3 =~ 5*loadav.  QED
 *
 * Actual power values for the implemented algorithm are as follows:
 *      loadav: 1       2       3       4
 *      power:  5.68    10.32   14.94   19.55
 */

/* calculations for digital decay to forget 90% of usage in 5*loadav sec */
#define loadfactor(loadav)	(2 * (loadav))
#define decay_cpu(loadfac, cpu) (((loadfac) * (cpu)) / ((loadfac) + FSCALE))
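
/*
 * Worked example (added for illustration, not in the original file): with a
 * load average of 2.0, loadfactor() is 4.0 in FSCALE fixed point, so
 * decay_cpu() multiplies by 4 / (4 + 1) = 0.8 each second, and
 * 0.8 ** 10.32 ~= 0.1, matching the "power: 10.32" entry for loadav 2 in
 * the table above.
 */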

/* decay 95% of `ke_pctcpu' in 60 seconds; see CCPU_SHIFT before changing */
static fixpt_t ccpu = 0.95122942450071400909 * FSCALE;	/* exp(-1/20) */
SYSCTL_INT(_kern, OID_AUTO, ccpu, CTLFLAG_RD, &ccpu, 0, "");
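
/*
 * Illustrative derivation (not part of the original file): schedcpu() scales
 * ke_pctcpu by ccpu = exp(-1/20) once per second, so after 60 idle seconds
 * only exp(-60/20) = exp(-3) ~= 5% of the old value remains -- the "95% in
 * 60 seconds" figure quoted above.
 */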

/*
 * If `ccpu' is not equal to `exp(-1/20)' and you still want to use the
 * faster/more-accurate formula, you'll have to estimate CCPU_SHIFT below
 * and possibly adjust FSHIFT in "param.h" so that (FSHIFT >= CCPU_SHIFT).
 *
 * To estimate CCPU_SHIFT for exp(-1/20), the following formula was used:
 *	1 - exp(-1/20) ~= 0.0487 ~= 0.0488 == 1 (fixed pt, *11* bits).
 *
 * If you don't want to bother with the faster/more-accurate formula, you
 * can set CCPU_SHIFT to (FSHIFT + 1) which will use a slower/less-accurate
 * (more general) method of calculating the %age of CPU used by a process.
 */
#define CCPU_SHIFT 11
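
/*
 * Numeric sketch (added for illustration, assuming the stock FSHIFT of 11,
 * i.e. FSCALE == 2048): ccpu ~= 0.95123 * 2048 ~= 1948, FSCALE - ccpu ~= 100,
 * and since FSHIFT >= CCPU_SHIFT the faster shift-based pctcpu update in
 * schedcpu() is the one compiled in.
 */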

/*
 * Recompute process priorities, every hz ticks.
 * MP-safe, called without the Giant mutex.
 */
/* ARGSUSED */
static void
schedcpu(void)
{
    register fixpt_t loadfac = loadfactor(averunnable.ldavg[0]);
    struct thread *td;
    struct proc *p;
    struct kse *ke;
    struct ksegrp *kg;
    int awake, realstathz;

    realstathz = stathz ? stathz : hz;
    sx_slock(&allproc_lock);
    FOREACH_PROC_IN_SYSTEM(p) {
        /*
         * Prevent state changes and protect run queue.
         */
        mtx_lock_spin(&sched_lock);
        /*
         * Increment time in/out of memory.  We ignore overflow; with
         * 16-bit int's (remember them?) overflow takes 45 days.
         */
        p->p_swtime++;
        FOREACH_KSEGRP_IN_PROC(p, kg) {
            awake = 0;
            FOREACH_THREAD_IN_GROUP(kg, td) {
                ke = td->td_kse;
                /*
                 * Increment sleep time (if sleeping).  We
                 * ignore overflow, as above.
                 */
                /*
                 * The kse slptimes are not touched in wakeup
                 * because the thread may not HAVE a KSE.
                 */
                if (ke->ke_state == KES_ONRUNQ) {
                    awake = 1;
                    ke->ke_flags &= ~KEF_DIDRUN;
                } else if ((ke->ke_state == KES_THREAD) &&
                    (TD_IS_RUNNING(td))) {
                    awake = 1;
                    /* Do not clear KEF_DIDRUN */
                } else if (ke->ke_flags & KEF_DIDRUN) {
                    awake = 1;
                    ke->ke_flags &= ~KEF_DIDRUN;
                }

                /*
                 * ke_pctcpu is only for ps and ttyinfo().
                 * Do it per kse, and add them up at the end?
                 * XXXKSE
                 */
                ke->ke_pctcpu = (ke->ke_pctcpu * ccpu) >>
                    FSHIFT;
                /*
                 * If the kse has been idle the entire second,
                 * stop recalculating its priority until
                 * it wakes up.
                 */
                if (ke->ke_cpticks == 0)
                    continue;
#if	(FSHIFT >= CCPU_SHIFT)
                ke->ke_pctcpu += (realstathz == 100)
                    ? ((fixpt_t) ke->ke_cpticks) <<
                    (FSHIFT - CCPU_SHIFT) :
                    100 * (((fixpt_t) ke->ke_cpticks)
                    << (FSHIFT - CCPU_SHIFT)) / realstathz;
#else
                ke->ke_pctcpu += ((FSCALE - ccpu) *
                    (ke->ke_cpticks *
                    FSCALE / realstathz)) >> FSHIFT;
#endif
                ke->ke_cpticks = 0;
            } /* end of kse loop */
            /*
             * If there are ANY running threads in this KSEGRP,
             * then don't count it as sleeping.
             */
            if (awake) {
                if (kg->kg_slptime > 1) {
                    /*
                     * In an ideal world, this should not
                     * happen, because whoever woke us
                     * up from the long sleep should have
                     * unwound the slptime and reset our
                     * priority before we run at the stale
                     * priority.  Should KASSERT at some
                     * point when all the cases are fixed.
                     */
                    updatepri(kg);
                }
                kg->kg_slptime = 0;
            } else
                kg->kg_slptime++;
            if (kg->kg_slptime > 1)
                continue;
            kg->kg_estcpu = decay_cpu(loadfac, kg->kg_estcpu);
            resetpriority(kg);
            FOREACH_THREAD_IN_GROUP(kg, td) {
                resetpriority_thread(td, kg);
            }
        } /* end of ksegrp loop */
        mtx_unlock_spin(&sched_lock);
    } /* end of process loop */
    sx_sunlock(&allproc_lock);
}

/*
 * Main loop for a kthread that executes schedcpu once a second.
 */
static void
schedcpu_thread(void)
{
    int nowake;

    for (;;) {
        schedcpu();
        tsleep(&nowake, 0, "-", hz);
    }
}

/*
 * Recalculate the priority of a process after it has slept for a while.
 * For all load averages >= 1 and max kg_estcpu of 255, sleeping for at
 * least six times the loadfactor will decay kg_estcpu to zero.
 */
static void
updatepri(struct ksegrp *kg)
{
    register fixpt_t loadfac;
    register unsigned int newcpu;

    loadfac = loadfactor(averunnable.ldavg[0]);
    if (kg->kg_slptime > 5 * loadfac)
        kg->kg_estcpu = 0;
    else {
        newcpu = kg->kg_estcpu;
        kg->kg_slptime--;	/* was incremented in schedcpu() */
        while (newcpu && --kg->kg_slptime)
            newcpu = decay_cpu(loadfac, newcpu);
        kg->kg_estcpu = newcpu;
    }
}

/*
 * Compute the priority of a process when running in user mode.
 * Arrange to reschedule if the resulting priority is better
 * than that of the current process.
*/
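/*
 * Worked example (added for illustration, not in the original sources): on a
 * UP kernel a nice-0 thread with kg_estcpu == 40 gets
 * PUSER + 40 / 8 + 1 * (0 - PRIO_MIN) = PUSER + 5 + 20, assuming the stock
 * PRIO_MIN of -20; the result is then clamped into
 * [PRI_MIN_TIMESHARE, PRI_MAX_TIMESHARE] below.
 */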
static void
resetpriority(struct ksegrp *kg)
{
    register unsigned int newpriority;

    if (kg->kg_pri_class == PRI_TIMESHARE) {
        newpriority = PUSER + kg->kg_estcpu / INVERSE_ESTCPU_WEIGHT +
            NICE_WEIGHT * (kg->kg_proc->p_nice - PRIO_MIN);
        newpriority = min(max(newpriority, PRI_MIN_TIMESHARE),
            PRI_MAX_TIMESHARE);
        kg->kg_user_pri = newpriority;
    }
}

/*
 * Update the thread's priority when the associated ksegroup's user
 * priority changes.
 */
static void
resetpriority_thread(struct thread *td, struct ksegrp *kg)
{

    /* Only change threads with a time sharing user priority. */
    if (td->td_priority < PRI_MIN_TIMESHARE ||
        td->td_priority > PRI_MAX_TIMESHARE)
        return;

    /* XXX the whole needresched thing is broken, but not silly. */
    maybe_resched(td);

    sched_prio(td, kg->kg_user_pri);
}

/* ARGSUSED */
static void
sched_setup(void *dummy)
{
    setup_runqs();

    if (sched_quantum == 0)
        sched_quantum = SCHED_QUANTUM;
    hogticks = 2 * sched_quantum;

    callout_init(&roundrobin_callout, CALLOUT_MPSAFE);

    /* Kick off timeout driven events by calling first time. */
    roundrobin(NULL);

    /* Account for thread0. */
    sched_load_add();
}

/* External interfaces start here */
/*
 * Very early in the boot some setup of scheduler-specific
 * parts of proc0 and of some scheduler resources needs to be done.
 * Called from:
 *  proc0_init()
 */
void
schedinit(void)
{
    /*
     * Set up the scheduler specific parts of proc0.
     */
    proc0.p_sched = NULL; /* XXX */
    ksegrp0.kg_sched = &kg_sched0;
    thread0.td_sched = &kse0;
    kse0.ke_thread = &thread0;
    kse0.ke_state = KES_THREAD;
    kg_sched0.skg_concurrency = 1;
    kg_sched0.skg_avail_opennings = 0; /* we are already running */
}

int
sched_runnable(void)
{
#ifdef SMP
    return runq_check(&runq) + runq_check(&runq_pcpu[PCPU_GET(cpuid)]);
#else
    return runq_check(&runq);
#endif
}

int
sched_rr_interval(void)
{
    if (sched_quantum == 0)
        sched_quantum = SCHED_QUANTUM;
    return (sched_quantum);
}

/*
 * We adjust the priority of the current process.  The priority of
 * a process gets worse as it accumulates CPU time.  The cpu usage
 * estimator (kg_estcpu) is increased here.  resetpriority() will
 * compute a different priority each time kg_estcpu increases by
 * INVERSE_ESTCPU_WEIGHT
 * (until MAXPRI is reached).  The cpu usage estimator ramps up
 * quite quickly when the process is running (linearly), and decays
 * away exponentially, at a rate which is proportionally slower when
 * the system is busy.  The basic principle is that the system will
 * 90% forget that the process used a lot of CPU time in 5 * loadav
 * seconds.  This causes the system to favor processes which haven't
 * run much recently, and to round-robin among other processes.
*/
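/*
 * Illustrative note (not from the original file): with the UP value
 * INVERSE_ESTCPU_WEIGHT == 8, the modulo check below recomputes the user
 * priority once for every 8 statclock ticks charged to the group, until
 * ESTCPULIM() caps kg_estcpu.
 */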
void
sched_clock(struct thread *td)
{
    struct ksegrp *kg;
    struct kse *ke;

    mtx_assert(&sched_lock, MA_OWNED);
    kg = td->td_ksegrp;
    ke = td->td_kse;

    ke->ke_cpticks++;
    kg->kg_estcpu = ESTCPULIM(kg->kg_estcpu + 1);
    if ((kg->kg_estcpu % INVERSE_ESTCPU_WEIGHT) == 0) {
        resetpriority(kg);
        resetpriority_thread(td, kg);
    }
}

/*
 * Charge a child's scheduling CPU usage to the parent.
 *
 * XXXKSE assume only one thread & kse & ksegrp keep estcpu in each ksegrp.
 * Charge it to the ksegrp that did the wait since process estcpu is sum of
 * all ksegrps; this is strictly as expected.  Assume that the child process
 * aggregated all the estcpu into the 'built-in' ksegrp.
 */
void
sched_exit(struct proc *p, struct thread *td)
{
    sched_exit_ksegrp(FIRST_KSEGRP_IN_PROC(p), td);
    sched_exit_thread(FIRST_THREAD_IN_PROC(p), td);
}

void
sched_exit_ksegrp(struct ksegrp *kg, struct thread *childtd)
{

    mtx_assert(&sched_lock, MA_OWNED);
    kg->kg_estcpu = ESTCPULIM(kg->kg_estcpu + childtd->td_ksegrp->kg_estcpu);
}

void
sched_exit_thread(struct thread *td, struct thread *child)
{
    CTR3(KTR_SCHED, "sched_exit_thread: %p(%s) prio %d",
        child, child->td_proc->p_comm, child->td_priority);
    if ((child->td_proc->p_flag & P_NOLOAD) == 0)
        sched_load_rem();
}

void
sched_fork(struct thread *td, struct thread *childtd)
{
    sched_fork_ksegrp(td, childtd->td_ksegrp);
    sched_fork_thread(td, childtd);
}

void
sched_fork_ksegrp(struct thread *td, struct ksegrp *child)
{
    mtx_assert(&sched_lock, MA_OWNED);
    child->kg_estcpu = td->td_ksegrp->kg_estcpu;
}

void
sched_fork_thread(struct thread *td, struct thread *childtd)
{
    sched_newthread(childtd);
}

void
sched_nice(struct proc *p, int nice)
{
    struct ksegrp *kg;
    struct thread *td;

    PROC_LOCK_ASSERT(p, MA_OWNED);
    mtx_assert(&sched_lock, MA_OWNED);
    p->p_nice = nice;
    FOREACH_KSEGRP_IN_PROC(p, kg) {
        resetpriority(kg);
        FOREACH_THREAD_IN_GROUP(kg, td) {
            resetpriority_thread(td, kg);
        }
    }
}

void
sched_class(struct ksegrp *kg, int class)
{
    mtx_assert(&sched_lock, MA_OWNED);
    kg->kg_pri_class = class;
}

/*
 * Adjust the priority of a thread.
 * This may include moving the thread within the KSEGRP,
 * changing the assignment of a kse to the thread,
 * and moving a KSE in the system run queue.
 */
static void
|
|
|
|
sched_priority(struct thread *td, u_char prio)
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
2004-12-26 00:16:24 +00:00
|
|
|
CTR6(KTR_SCHED, "sched_prio: %p(%s) prio %d newprio %d by %p(%s)",
|
|
|
|
td, td->td_proc->p_comm, td->td_priority, prio, curthread,
|
|
|
|
curthread->td_proc->p_comm);
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2003-04-23 18:51:05 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2004-12-30 20:52:44 +00:00
|
|
|
if (td->td_priority == prio)
|
|
|
|
return;
|
2002-10-12 05:32:24 +00:00
|
|
|
if (TD_ON_RUNQ(td)) {
|
2002-10-14 20:34:31 +00:00
|
|
|
adjustrunqueue(td, prio);
|
|
|
|
} else {
|
|
|
|
td->td_priority = prio;
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2004-12-30 20:52:44 +00:00
|
|
|
/*
|
|
|
|
* Update a thread's priority when it is lent another thread's
|
|
|
|
* priority.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
sched_lend_prio(struct thread *td, u_char prio)
|
|
|
|
{
|
|
|
|
|
|
|
|
td->td_flags |= TDF_BORROWING;
|
|
|
|
sched_priority(td, prio);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Restore a thread's priority when priority propagation is
|
|
|
|
* over. The prio argument is the minimum priority the thread
|
|
|
|
* needs to have to satisfy other possible priority lending
|
|
|
|
* requests. If the thread's regular priority is less
|
|
|
|
* important than prio the thread will keep a priority boost
|
|
|
|
* of prio.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
sched_unlend_prio(struct thread *td, u_char prio)
|
|
|
|
{
|
|
|
|
u_char base_pri;
|
|
|
|
|
|
|
|
if (td->td_base_pri >= PRI_MIN_TIMESHARE &&
|
|
|
|
td->td_base_pri <= PRI_MAX_TIMESHARE)
|
|
|
|
base_pri = td->td_ksegrp->kg_user_pri;
|
|
|
|
else
|
|
|
|
base_pri = td->td_base_pri;
|
|
|
|
if (prio >= base_pri) {
|
|
|
|
td->td_flags &= ~TDF_BORROWING;
|
|
|
|
sched_prio(td, base_pri);
|
|
|
|
} else
|
|
|
|
sched_lend_prio(td, prio);
|
|
|
|
}
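An editorial aside: a minimal sketch of how lock-wait code might drive the two
lending functions above. The helper names and the surrounding locking
discipline are hypothetical; only sched_lend_prio() and sched_unlend_prio()
come from this file.

/*
 * Editorial sketch, not part of the kernel source: hypothetical lock
 * wait/release paths using the lending API defined above.  Lower numeric
 * values are more important priorities, as elsewhere in this file.
 */
static void
example_lend_on_block(struct thread *owner, struct thread *waiter)
{

	mtx_assert(&sched_lock, MA_OWNED);
	/* Propagate the waiter's priority to the lock owner if it helps. */
	if (waiter->td_priority < owner->td_priority)
		sched_lend_prio(owner, waiter->td_priority);
}

static void
example_unlend_on_release(struct thread *owner, u_char highest_waiter_pri)
{

	mtx_assert(&sched_lock, MA_OWNED);
	/*
	 * Tell the scheduler the minimum priority the owner still needs;
	 * sched_unlend_prio() either restores the base priority or keeps
	 * lending at highest_waiter_pri.
	 */
	sched_unlend_prio(owner, highest_waiter_pri);
}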
|
|
|
|
|
|
|
|
void
|
|
|
|
sched_prio(struct thread *td, u_char prio)
|
|
|
|
{
|
|
|
|
u_char oldprio;
|
|
|
|
|
|
|
|
/* First, update the base priority. */
|
|
|
|
td->td_base_pri = prio;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the thread is borrowing another thread's priority, don't ever
|
|
|
|
* lower the priority.
|
|
|
|
*/
|
|
|
|
if (td->td_flags & TDF_BORROWING && td->td_priority < prio)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* Change the real priority. */
|
|
|
|
oldprio = td->td_priority;
|
|
|
|
sched_priority(td, prio);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the thread is on a turnstile, then let the turnstile update
|
|
|
|
* its state.
|
|
|
|
*/
|
|
|
|
if (TD_ON_LOCK(td) && oldprio != prio)
|
|
|
|
turnstile_adjust(td, oldprio);
|
|
|
|
}
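An editorial aside: a short walk-through of the borrowing guard in sched_prio()
above, using illustrative priority values that are not taken from the source.

/*
 * Editorial example: thread T has td_base_pri 120 and is currently
 * borrowing priority 80 (TDF_BORROWING set, td_priority == 80).
 *
 *   sched_prio(T, 130): td_base_pri becomes 130, but since 80 < 130 the
 *                       real priority is left alone; the lender's boost
 *                       survives until sched_unlend_prio() runs.
 *   sched_prio(T, 70):  70 is more important than 80, so the real
 *                       priority is changed to 70 as well and, if T is
 *                       blocked on a lock, turnstile_adjust() re-sorts it
 *                       on the turnstile's priority list.
 */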
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
Switch the sleep/wakeup and condition variable implementations to use the
sleep queue interface:
- Sleep queues attempt to merge some of the benefits of both sleep queues
  and condition variables. Having sleep queues in a hash table avoids
having to allocate a queue head for each wait channel. Thus, struct cv
has shrunk down to just a single char * pointer now. However, the
hash table does not hold threads directly, but queue heads. This means
that once you have located a queue in the hash bucket, you no longer have
to walk the rest of the hash chain looking for threads. Instead, you have
a list of all the threads sleeping on that wait channel.
- Outside of the sleepq code and the sleep/cv code the kernel no longer
differentiates between cv's and sleep/wakeup. For example, calls to
abortsleep() and cv_abort() are replaced with a call to sleepq_abort().
Thus, the TDF_CVWAITQ flag is removed. Also, calls to unsleep() and
cv_waitq_remove() have been replaced with calls to sleepq_remove().
- The sched_sleep() function no longer accepts a priority argument as
  sleeps no longer inherently bump the priority. Instead, this is solely
  a property of msleep(), which explicitly calls sched_prio() before
blocking.
- The TDF_ONSLEEPQ flag has been dropped as it was never used. The
associated TDF_SET_ONSLEEPQ and TDF_CLR_ON_SLEEPQ macros have also been
dropped and replaced with a single explicit clearing of td_wchan.
TD_SET_ONSLEEPQ() would really have only made sense if it had taken
the wait channel and message as arguments anyway. Now that that only
happens in one place, a macro would be overkill.
2004-02-27 18:52:44 +00:00
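An editorial aside: a small self-contained sketch of the hash-of-queue-heads
layout described in the message above. The structure and function names are
hypothetical stand-ins, not the real sleepq(9) implementation.

/*
 * Editorial sketch with hypothetical names: each wait channel hashes to a
 * bucket of queue heads, and each head carries the list of threads that
 * sleep on exactly one wait channel, so a lookup never walks threads that
 * belong to other channels.
 */
#include <sys/queue.h>

struct example_sleepqueue {
	LIST_ENTRY(example_sleepqueue)	 sq_hash;	/* bucket linkage */
	TAILQ_HEAD(, thread)		 sq_blocked;	/* sleeping threads */
	void				*sq_wchan;	/* wait channel */
};

LIST_HEAD(example_sleepqueue_chain, example_sleepqueue);

static struct example_sleepqueue *
example_sleepq_lookup(struct example_sleepqueue_chain *bucket, void *wchan)
{
	struct example_sleepqueue *sq;

	LIST_FOREACH(sq, bucket, sq_hash)
		if (sq->sq_wchan == wchan)
			return (sq);
	return (NULL);
}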
|
|
|
sched_sleep(struct thread *td)
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
2003-04-23 18:51:05 +00:00
|
|
|
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2002-10-12 05:32:24 +00:00
|
|
|
td->td_ksegrp->kg_slptime = 0;
|
|
|
|
}
|
|
|
|
|
2004-09-10 21:04:38 +00:00
|
|
|
static void remrunqueue(struct thread *td);
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
2004-09-10 21:04:38 +00:00
|
|
|
sched_switch(struct thread *td, struct thread *newtd, int flags)
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
|
|
|
struct kse *ke;
|
2004-09-10 21:04:38 +00:00
|
|
|
struct ksegrp *kg;
|
2002-10-12 05:32:24 +00:00
|
|
|
struct proc *p;
|
|
|
|
|
|
|
|
ke = td->td_kse;
|
|
|
|
p = td->td_proc;
|
|
|
|
|
2003-04-23 18:51:05 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2004-02-01 06:20:18 +00:00
|
|
|
if ((p->p_flag & P_NOLOAD) == 0)
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_load_rem();
|
2004-09-10 21:04:38 +00:00
|
|
|
/*
|
|
|
|
* We are volunteering to switch out so we get to nominate
|
|
|
|
* a successor for the rest of our quantum
|
|
|
|
* First try another thread in our ksegrp, and then look for
|
|
|
|
* other ksegrps in our process.
|
|
|
|
*/
|
|
|
|
if (sched_followon &&
|
|
|
|
(p->p_flag & P_HADTHREADS) &&
|
|
|
|
(flags & SW_VOL) &&
|
|
|
|
newtd == NULL) {
|
|
|
|
/* let's schedule another thread from this process */
|
|
|
|
kg = td->td_ksegrp;
|
|
|
|
if ((newtd = TAILQ_FIRST(&kg->kg_runq))) {
|
|
|
|
remrunqueue(newtd);
|
|
|
|
sched_kgfollowons++;
|
|
|
|
} else {
|
|
|
|
FOREACH_KSEGRP_IN_PROC(p, kg) {
|
|
|
|
if ((newtd = TAILQ_FIRST(&kg->kg_runq))) {
|
|
|
|
sched_pfollowons++;
|
|
|
|
remrunqueue(newtd);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2004-12-07 18:17:24 +00:00
|
|
|
if (newtd)
|
|
|
|
newtd->td_flags |= (td->td_flags & TDF_NEEDRESCHED);
|
|
|
|
|
2003-04-10 17:35:44 +00:00
|
|
|
td->td_lastcpu = td->td_oncpu;
|
2004-07-16 21:04:55 +00:00
|
|
|
td->td_flags &= ~TDF_NEEDRESCHED;
|
2005-04-08 03:37:53 +00:00
|
|
|
td->td_owepreempt = 0;
|
2004-02-01 02:46:47 +00:00
|
|
|
td->td_oncpu = NOCPU;
|
2002-10-12 05:32:24 +00:00
|
|
|
/*
|
|
|
|
* At the last moment, if this thread is still marked RUNNING,
|
|
|
|
* then put it back on the run queue as it has not been suspended
|
2004-07-02 19:09:50 +00:00
|
|
|
* or stopped or any thing else similar. We never put the idle
|
|
|
|
* threads on the run queue, however.
|
2002-10-12 05:32:24 +00:00
|
|
|
*/
|
2004-07-02 19:09:50 +00:00
|
|
|
if (td == PCPU_GET(idlethread))
|
|
|
|
TD_SET_CAN_RUN(td);
|
2004-09-05 02:09:54 +00:00
|
|
|
else {
|
2004-10-05 22:03:10 +00:00
|
|
|
SLOT_RELEASE(td->td_ksegrp);
|
2004-09-05 02:09:54 +00:00
|
|
|
if (TD_IS_RUNNING(td)) {
|
|
|
|
/* Put us back on the run queue (kse and all). */
|
2004-10-05 22:03:10 +00:00
|
|
|
setrunqueue(td, (flags & SW_PREEMPT) ?
|
|
|
|
SRQ_OURSELF|SRQ_YIELDING|SRQ_PREEMPTED :
|
|
|
|
SRQ_OURSELF|SRQ_YIELDING);
|
2004-09-05 02:09:54 +00:00
|
|
|
} else if (p->p_flag & P_HADTHREADS) {
|
|
|
|
/*
|
|
|
|
* We will not be on the run queue. So we must be
|
|
|
|
* sleeping or similar. As it's available,
|
|
|
|
* someone else can use the KSE if they need it.
|
2004-10-05 22:03:10 +00:00
|
|
|
* It's NOT available if we are about to need it
|
2004-09-05 02:09:54 +00:00
|
|
|
*/
|
2004-10-05 22:03:10 +00:00
|
|
|
if (newtd == NULL || newtd->td_ksegrp != td->td_ksegrp)
|
|
|
|
slot_fill(td->td_ksegrp);
|
2004-09-05 02:09:54 +00:00
|
|
|
}
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
2004-10-05 22:03:10 +00:00
|
|
|
if (newtd) {
|
|
|
|
/*
|
|
|
|
* The thread we are about to run needs to be counted
|
|
|
|
* as if it had been added to the run queue and selected.
|
|
|
|
* It came from:
|
|
|
|
* * A preemption
|
|
|
|
* * An upcall
|
|
|
|
* * A followon
|
|
|
|
*/
|
|
|
|
KASSERT((newtd->td_inhibitors == 0),
|
|
|
|
("trying to run inhibitted thread"));
|
|
|
|
SLOT_USE(newtd->td_ksegrp);
|
|
|
|
newtd->td_kse->ke_flags |= KEF_DIDRUN;
|
|
|
|
TD_SET_RUNNING(newtd);
|
|
|
|
if ((newtd->td_proc->p_flag & P_NOLOAD) == 0)
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_load_add();
|
2004-10-05 22:03:10 +00:00
|
|
|
} else {
|
2004-07-02 19:09:50 +00:00
|
|
|
newtd = choosethread();
|
2004-10-05 22:03:10 +00:00
|
|
|
}
|
|
|
|
|
2005-04-19 04:01:25 +00:00
|
|
|
if (td != newtd) {
|
|
|
|
#ifdef HWPMC_HOOKS
|
|
|
|
if (PMC_PROC_IS_USING_PMCS(td->td_proc))
|
|
|
|
PMC_SWITCH_CONTEXT(td, PMC_FN_CSW_OUT);
|
|
|
|
#endif
|
2003-10-16 08:53:46 +00:00
|
|
|
cpu_switch(td, newtd);
|
2005-04-19 04:01:25 +00:00
|
|
|
#ifdef HWPMC_HOOKS
|
|
|
|
if (PMC_PROC_IS_USING_PMCS(td->td_proc))
|
|
|
|
PMC_SWITCH_CONTEXT(td, PMC_FN_CSW_IN);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2003-10-16 08:53:46 +00:00
|
|
|
sched_lock.mtx_lock = (uintptr_t)td;
|
|
|
|
td->td_oncpu = PCPU_GET(cpuid);
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
sched_wakeup(struct thread *td)
|
|
|
|
{
|
|
|
|
struct ksegrp *kg;
|
|
|
|
|
2003-04-23 18:51:05 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2002-10-12 05:32:24 +00:00
|
|
|
kg = td->td_ksegrp;
|
2004-12-30 20:52:44 +00:00
|
|
|
if (kg->kg_slptime > 1) {
|
2002-10-12 05:32:24 +00:00
|
|
|
updatepri(kg);
|
2004-12-30 20:52:44 +00:00
|
|
|
resetpriority(kg);
|
|
|
|
}
|
2002-10-12 05:32:24 +00:00
|
|
|
kg->kg_slptime = 0;
|
2004-09-01 02:11:28 +00:00
|
|
|
setrunqueue(td, SRQ_BORING);
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
|
2004-09-03 09:15:10 +00:00
|
|
|
#ifdef SMP
|
2004-09-03 07:42:31 +00:00
|
|
|
/* enable HTT_2 if you have a 2-way HTT cpu.*/
|
|
|
|
static int
|
|
|
|
forward_wakeup(int cpunum)
|
|
|
|
{
|
|
|
|
cpumask_t map, me, dontuse;
|
|
|
|
cpumask_t map2;
|
|
|
|
struct pcpu *pc;
|
|
|
|
cpumask_t id, map3;
|
|
|
|
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
|
2004-09-05 02:09:54 +00:00
|
|
|
CTR0(KTR_RUNQ, "forward_wakeup()");
|
2004-09-03 07:42:31 +00:00
|
|
|
|
|
|
|
if ((!forward_wakeup_enabled) ||
|
|
|
|
(forward_wakeup_use_mask == 0 && forward_wakeup_use_loop == 0))
|
|
|
|
return (0);
|
|
|
|
if (!smp_started || cold || panicstr)
|
|
|
|
return (0);
|
|
|
|
|
|
|
|
forward_wakeups_requested++;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* check the idle mask we received against what we calculated before
|
|
|
|
* in the old version.
|
|
|
|
*/
|
|
|
|
me = PCPU_GET(cpumask);
|
|
|
|
/*
|
|
|
|
* don't bother if we should be doing it ourselves.
|
|
|
|
*/
|
|
|
|
if ((me & idle_cpus_mask) && (cpunum == NOCPU || me == (1 << cpunum)))
|
|
|
|
return (0);
|
|
|
|
|
|
|
|
dontuse = me | stopped_cpus | hlt_cpus_mask;
|
|
|
|
map3 = 0;
|
|
|
|
if (forward_wakeup_use_loop) {
|
|
|
|
SLIST_FOREACH(pc, &cpuhead, pc_allcpu) {
|
|
|
|
id = pc->pc_cpumask;
|
|
|
|
if ((id & dontuse) == 0 &&
|
|
|
|
pc->pc_curthread == pc->pc_idlethread) {
|
|
|
|
map3 |= id;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (forward_wakeup_use_mask) {
|
|
|
|
map = 0;
|
|
|
|
map = idle_cpus_mask & ~dontuse;
|
|
|
|
|
|
|
|
/* If they are both on, compare and use loop if different */
|
|
|
|
if (forward_wakeup_use_loop) {
|
|
|
|
if (map != map3) {
|
|
|
|
printf("map (%02X) != map3 (%02X)\n",
|
|
|
|
map, map3);
|
|
|
|
map = map3;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
map = map3;
|
|
|
|
}
|
|
|
|
/* If we only allow a specific CPU, then mask off all the others */
|
|
|
|
if (cpunum != NOCPU) {
|
|
|
|
KASSERT((cpunum <= mp_maxcpus), ("forward_wakeup: bad cpunum."));
|
|
|
|
map &= (1 << cpunum);
|
|
|
|
} else {
|
|
|
|
/* Try to choose an idle die. */
|
|
|
|
if (forward_wakeup_use_htt) {
|
|
|
|
map2 = (map & (map >> 1)) & 0x5555;
|
|
|
|
if (map2) {
|
|
|
|
map = map2;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* set only one bit */
|
|
|
|
if (forward_wakeup_use_single) {
|
|
|
|
map = map & ((~map) + 1);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (map) {
|
|
|
|
forward_wakeups_delivered++;
|
|
|
|
ipi_selected(map, IPI_AST);
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
if (cpunum == NOCPU)
|
|
|
|
printf("forward_wakeup: Idle processor not found\n");
|
|
|
|
return (0);
|
|
|
|
}
|
2004-09-03 09:15:10 +00:00
|
|
|
#endif
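An editorial aside: the two mask tricks inside forward_wakeup() above are
terse, so this standalone sketch works them on a concrete value; the 0x5555
constant assumes the same even/odd HTT sibling numbering as the code above.

/*
 * Editorial sketch, compilable on its own: the bit manipulations used by
 * forward_wakeup() above, worked on a concrete mask.
 */
#include <stdio.h>

int
main(void)
{
	unsigned int map = 0x2c;	/* idle CPUs 2, 3 and 5 */
	unsigned int pairs, one;

	/*
	 * (map & (map >> 1)) & 0x5555 keeps bit 2n only if bits 2n and
	 * 2n+1 are both set, i.e. both hyperthreads of a package are idle.
	 */
	pairs = (map & (map >> 1)) & 0x5555;	/* -> 0x04 (CPUs 2 and 3) */

	/* map & (~map + 1) isolates the lowest set bit: a single CPU. */
	one = map & (~map + 1);			/* -> 0x04 (CPU 2) */

	printf("pairs=%#x one=%#x\n", pairs, one);
	return (0);
}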
|
2004-09-03 07:42:31 +00:00
|
|
|
|
2005-06-09 18:26:31 +00:00
|
|
|
#ifdef SMP
|
2005-06-09 19:43:08 +00:00
|
|
|
static void kick_other_cpu(int pri, int cpuid);
|
2005-06-09 18:26:31 +00:00
|
|
|
|
|
|
|
static void
|
|
|
|
kick_other_cpu(int pri, int cpuid)
|
|
|
|
{
|
|
|
|
struct pcpu *pcpu = pcpu_find(cpuid);
|
|
|
|
int cpri = pcpu->pc_curthread->td_priority;
|
|
|
|
|
|
|
|
if (idle_cpus_mask & pcpu->pc_cpumask) {
|
|
|
|
forward_wakeups_delivered++;
|
|
|
|
ipi_selected(pcpu->pc_cpumask, IPI_AST);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (pri >= cpri)
|
|
|
|
return;
|
|
|
|
|
|
|
|
#if defined(IPI_PREEMPTION) && defined(PREEMPTION)
|
|
|
|
#if !defined(FULL_PREEMPTION)
|
|
|
|
if (pri <= PRI_MAX_ITHD)
|
|
|
|
#endif /* ! FULL_PREEMPTION */
|
|
|
|
{
|
|
|
|
ipi_selected(pcpu->pc_cpumask, IPI_PREEMPT);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
#endif /* defined(IPI_PREEMPTION) && defined(PREEMPTION) */
|
|
|
|
|
|
|
|
pcpu->pc_curthread->td_flags |= TDF_NEEDRESCHED;
|
|
|
|
ipi_selected(pcpu->pc_cpumask, IPI_AST);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
#endif /* SMP */
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
2004-09-01 02:11:28 +00:00
|
|
|
sched_add(struct thread *td, int flags)
|
2005-06-09 18:26:31 +00:00
|
|
|
#ifdef SMP
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
2003-10-16 08:39:15 +00:00
|
|
|
struct kse *ke;
|
2004-09-01 06:42:02 +00:00
|
|
|
int forwarded = 0;
|
|
|
|
int cpu;
|
2005-06-09 18:26:31 +00:00
|
|
|
int single_cpu = 0;
|
2003-10-16 08:39:15 +00:00
|
|
|
|
|
|
|
ke = td->td_kse;
|
2002-10-12 05:32:24 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
KASSERT(ke->ke_state != KES_ONRUNQ,
|
2004-01-25 08:21:46 +00:00
|
|
|
("sched_add: kse %p (%s) already in run queue", ke,
|
2002-10-12 05:32:24 +00:00
|
|
|
ke->ke_proc->p_comm));
|
|
|
|
KASSERT(ke->ke_proc->p_sflag & PS_INMEM,
|
2004-01-25 08:21:46 +00:00
|
|
|
("sched_add: process swapped out"));
|
2004-12-26 00:16:24 +00:00
|
|
|
CTR5(KTR_SCHED, "sched_add: %p(%s) prio %d by %p(%s)",
|
|
|
|
td, td->td_proc->p_comm, td->td_priority, curthread,
|
|
|
|
curthread->td_proc->p_comm);
|
2004-07-02 20:21:44 +00:00
|
|
|
|
2005-06-09 18:26:31 +00:00
|
|
|
|
|
|
|
if (td->td_pinned != 0) {
|
|
|
|
cpu = td->td_lastcpu;
|
|
|
|
ke->ke_runq = &runq_pcpu[cpu];
|
|
|
|
single_cpu = 1;
|
|
|
|
CTR3(KTR_RUNQ,
|
|
|
|
"sched_add: Put kse:%p(td:%p) on cpu%d runq", ke, td, cpu);
|
|
|
|
} else if (ke->ke_flags & KEF_BOUND) {
|
|
|
|
/* Find CPU from bound runq */
|
|
|
|
KASSERT(SKE_RUNQ_PCPU(ke), ("sched_add: bound kse not on cpu runq"));
|
2005-06-09 19:43:08 +00:00
|
|
|
cpu = ke->ke_runq - &runq_pcpu[0];
|
2005-06-09 18:26:31 +00:00
|
|
|
single_cpu = 1;
|
|
|
|
CTR3(KTR_RUNQ,
|
|
|
|
"sched_add: Put kse:%p(td:%p) on cpu%d runq", ke, td, cpu);
|
|
|
|
} else {
|
2004-09-01 06:42:02 +00:00
|
|
|
CTR2(KTR_RUNQ,
|
|
|
|
"sched_add: adding kse:%p (td:%p) to gbl runq", ke, td);
|
|
|
|
cpu = NOCPU;
|
2004-01-25 08:00:04 +00:00
|
|
|
ke->ke_runq = &runq;
|
2005-06-09 18:26:31 +00:00
|
|
|
}
|
|
|
|
|
2005-06-09 19:43:08 +00:00
|
|
|
if (single_cpu && (cpu != PCPU_GET(cpuid))) {
|
2005-06-09 18:26:31 +00:00
|
|
|
kick_other_cpu(td->td_priority, cpu);
|
2004-01-25 08:00:04 +00:00
|
|
|
} else {
|
2005-06-09 18:26:31 +00:00
|
|
|
|
2005-06-09 19:43:08 +00:00
|
|
|
if (!single_cpu) {
|
2005-06-09 18:26:31 +00:00
|
|
|
cpumask_t me = PCPU_GET(cpumask);
|
|
|
|
int idle = idle_cpus_mask & me;
|
|
|
|
|
2005-06-09 19:43:08 +00:00
|
|
|
if (!idle && ((flags & SRQ_INTR) == 0) &&
|
|
|
|
(idle_cpus_mask & ~(hlt_cpus_mask | me)))
|
2005-06-09 18:26:31 +00:00
|
|
|
forwarded = forward_wakeup(cpu);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!forwarded) {
|
2005-06-09 19:43:08 +00:00
|
|
|
if ((flags & SRQ_YIELDING) == 0 && maybe_preempt(td))
|
2005-06-09 18:26:31 +00:00
|
|
|
return;
|
|
|
|
else
|
|
|
|
maybe_resched(td);
|
|
|
|
}
|
2004-01-25 08:00:04 +00:00
|
|
|
}
|
2005-06-09 18:26:31 +00:00
|
|
|
|
|
|
|
if ((td->td_proc->p_flag & P_NOLOAD) == 0)
|
|
|
|
sched_load_add();
|
|
|
|
SLOT_USE(td->td_ksegrp);
|
|
|
|
runq_add(ke->ke_runq, ke, flags);
|
|
|
|
ke->ke_state = KES_ONRUNQ;
|
|
|
|
}
|
|
|
|
#else /* SMP */
|
|
|
|
{
|
|
|
|
struct kse *ke;
|
|
|
|
ke = td->td_kse;
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
KASSERT(ke->ke_state != KES_ONRUNQ,
|
|
|
|
("sched_add: kse %p (%s) already in run queue", ke,
|
|
|
|
ke->ke_proc->p_comm));
|
|
|
|
KASSERT(ke->ke_proc->p_sflag & PS_INMEM,
|
|
|
|
("sched_add: process swapped out"));
|
|
|
|
CTR5(KTR_SCHED, "sched_add: %p(%s) prio %d by %p(%s)",
|
|
|
|
td, td->td_proc->p_comm, td->td_priority, curthread,
|
|
|
|
curthread->td_proc->p_comm);
|
2004-08-09 18:21:12 +00:00
|
|
|
CTR2(KTR_RUNQ, "sched_add: adding kse:%p (td:%p) to runq", ke, td);
|
2004-01-25 08:00:04 +00:00
|
|
|
ke->ke_runq = &runq;
|
2004-09-01 06:42:02 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If we are yielding (on the way out anyhow)
|
|
|
|
* or the thread being saved is US,
|
|
|
|
* then don't try to be smart about preemption
|
|
|
|
* or kicking off another CPU
|
|
|
|
* as it won't help and may hinder.
|
|
|
|
* In the YIELDING case, we are about to run whoever is
|
|
|
|
* being put in the queue anyhow, and in the
|
|
|
|
* OURSELF case, we are putting ourselves on the run queue
|
|
|
|
* which also only happens when we are about to yield.
|
|
|
|
*/
|
|
|
|
if ((flags & SRQ_YIELDING) == 0) {
|
2005-06-09 18:26:31 +00:00
|
|
|
if (maybe_preempt(td))
|
|
|
|
return;
|
|
|
|
}
|
2004-02-01 06:20:18 +00:00
|
|
|
if ((td->td_proc->p_flag & P_NOLOAD) == 0)
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_load_add();
|
2004-10-05 22:03:10 +00:00
|
|
|
SLOT_USE(td->td_ksegrp);
|
|
|
|
runq_add(ke->ke_runq, ke, flags);
|
2004-08-11 20:54:48 +00:00
|
|
|
ke->ke_state = KES_ONRUNQ;
|
2004-07-13 20:49:13 +00:00
|
|
|
maybe_resched(td);
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
2005-06-09 18:26:31 +00:00
|
|
|
#endif /* SMP */
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
2003-10-16 08:39:15 +00:00
|
|
|
sched_rem(struct thread *td)
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
2003-10-16 08:39:15 +00:00
|
|
|
struct kse *ke;
|
|
|
|
|
|
|
|
ke = td->td_kse;
|
2002-10-12 05:32:24 +00:00
|
|
|
KASSERT(ke->ke_proc->p_sflag & PS_INMEM,
|
2004-01-25 08:21:46 +00:00
|
|
|
("sched_rem: process swapped out"));
|
|
|
|
KASSERT((ke->ke_state == KES_ONRUNQ),
|
|
|
|
("sched_rem: KSE not on run queue"));
|
2002-10-12 05:32:24 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2004-12-26 00:16:24 +00:00
|
|
|
CTR5(KTR_SCHED, "sched_rem: %p(%s) prio %d by %p(%s)",
|
|
|
|
td, td->td_proc->p_comm, td->td_priority, curthread,
|
|
|
|
curthread->td_proc->p_comm);
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2004-02-01 06:20:18 +00:00
|
|
|
if ((td->td_proc->p_flag & P_NOLOAD) == 0)
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_load_rem();
|
2004-10-05 21:10:44 +00:00
|
|
|
SLOT_RELEASE(td->td_ksegrp);
|
2004-08-22 05:21:41 +00:00
|
|
|
runq_remove(ke->ke_runq, ke);
|
2004-01-25 08:00:04 +00:00
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
ke->ke_state = KES_THREAD;
|
|
|
|
}
|
|
|
|
|
2004-09-16 07:12:59 +00:00
|
|
|
/*
|
|
|
|
* Select threads to run.
|
|
|
|
* Notice that the running threads still consume a slot.
|
|
|
|
*/
|
2002-10-12 05:32:24 +00:00
|
|
|
struct kse *
|
|
|
|
sched_choose(void)
|
|
|
|
{
|
|
|
|
struct kse *ke;
|
2004-01-25 08:00:04 +00:00
|
|
|
struct runq *rq;
|
|
|
|
|
|
|
|
#ifdef SMP
|
|
|
|
struct kse *kecpu;
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
rq = &runq;
|
|
|
|
ke = runq_choose(&runq);
|
|
|
|
kecpu = runq_choose(&runq_pcpu[PCPU_GET(cpuid)]);
|
|
|
|
|
|
|
|
if (ke == NULL ||
|
|
|
|
(kecpu != NULL &&
|
|
|
|
kecpu->ke_thread->td_priority < ke->ke_thread->td_priority)) {
|
2004-08-09 18:21:12 +00:00
|
|
|
CTR2(KTR_RUNQ, "choosing kse %p from pcpu runq %d", kecpu,
|
2004-01-25 08:00:04 +00:00
|
|
|
PCPU_GET(cpuid));
|
|
|
|
ke = kecpu;
|
|
|
|
rq = &runq_pcpu[PCPU_GET(cpuid)];
|
|
|
|
} else {
|
2004-08-09 18:21:12 +00:00
|
|
|
CTR1(KTR_RUNQ, "choosing kse %p from main runq", ke);
|
2004-01-25 08:00:04 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
#else
|
|
|
|
rq = &runq;
|
2002-10-12 05:32:24 +00:00
|
|
|
ke = runq_choose(&runq);
|
2004-01-25 08:00:04 +00:00
|
|
|
#endif
|
2002-10-12 05:32:24 +00:00
|
|
|
|
|
|
|
if (ke != NULL) {
|
2004-01-25 08:00:04 +00:00
|
|
|
runq_remove(rq, ke);
|
2002-10-12 05:32:24 +00:00
|
|
|
ke->ke_state = KES_THREAD;
|
|
|
|
|
|
|
|
KASSERT(ke->ke_proc->p_sflag & PS_INMEM,
|
2004-01-25 08:21:46 +00:00
|
|
|
("sched_choose: process swapped out"));
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
return (ke);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
sched_userret(struct thread *td)
|
|
|
|
{
|
|
|
|
struct ksegrp *kg;
|
|
|
|
/*
|
|
|
|
* XXX we cheat slightly on the locking here to avoid locking in
|
|
|
|
* the usual case. Setting td_priority here is essentially an
|
|
|
|
* incomplete workaround for not setting it properly elsewhere.
|
|
|
|
* Now that some interrupt handlers are threads, not setting it
|
|
|
|
* properly elsewhere can clobber it in the window between setting
|
|
|
|
* it here and returning to user mode, so don't waste time setting
|
|
|
|
* it perfectly here.
|
|
|
|
*/
|
2004-12-30 20:52:44 +00:00
|
|
|
KASSERT((td->td_flags & TDF_BORROWING) == 0,
|
|
|
|
("thread with borrowed priority returning to userland"));
|
2002-10-12 05:32:24 +00:00
|
|
|
kg = td->td_ksegrp;
|
|
|
|
if (td->td_priority != kg->kg_user_pri) {
|
|
|
|
mtx_lock_spin(&sched_lock);
|
|
|
|
td->td_priority = kg->kg_user_pri;
|
2004-12-30 20:52:44 +00:00
|
|
|
td->td_base_pri = kg->kg_user_pri;
|
2002-10-12 05:32:24 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
}
|
|
|
|
}
|
2002-11-21 01:22:38 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
void
|
|
|
|
sched_bind(struct thread *td, int cpu)
|
|
|
|
{
|
|
|
|
struct kse *ke;
|
|
|
|
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
KASSERT(TD_IS_RUNNING(td),
|
|
|
|
("sched_bind: cannot bind non-running thread"));
|
|
|
|
|
|
|
|
ke = td->td_kse;
|
|
|
|
|
|
|
|
ke->ke_flags |= KEF_BOUND;
|
|
|
|
#ifdef SMP
|
|
|
|
ke->ke_runq = &runq_pcpu[cpu];
|
|
|
|
if (PCPU_GET(cpuid) == cpu)
|
|
|
|
return;
|
|
|
|
|
|
|
|
ke->ke_state = KES_THREAD;
|
|
|
|
|
2004-07-02 19:09:50 +00:00
|
|
|
mi_switch(SW_VOL, NULL);
|
2004-01-25 08:00:04 +00:00
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
sched_unbind(struct thread* td)
|
|
|
|
{
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
td->td_kse->ke_flags &= ~KEF_BOUND;
|
|
|
|
}
|
|
|
|
|
2005-04-19 04:01:25 +00:00
|
|
|
int
|
|
|
|
sched_is_bound(struct thread *td)
|
|
|
|
{
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
return (td->td_kse->ke_flags & KEF_BOUND);
|
|
|
|
}
|
|
|
|
|
2006-06-15 06:37:39 +00:00
|
|
|
void
|
|
|
|
sched_relinquish(struct thread *td)
|
|
|
|
{
|
|
|
|
struct ksegrp *kg;
|
|
|
|
|
|
|
|
kg = td->td_ksegrp;
|
|
|
|
mtx_lock_spin(&sched_lock);
|
|
|
|
if (kg->kg_pri_class == PRI_TIMESHARE)
|
|
|
|
sched_prio(td, PRI_MAX_TIMESHARE);
|
|
|
|
mi_switch(SW_VOL, NULL);
|
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
}
|
|
|
|
|
2004-02-01 02:46:47 +00:00
|
|
|
int
|
|
|
|
sched_load(void)
|
|
|
|
{
|
|
|
|
return (sched_tdcnt);
|
|
|
|
}
|
|
|
|
|
2002-11-21 01:22:38 +00:00
|
|
|
int
|
|
|
|
sched_sizeof_ksegrp(void)
|
|
|
|
{
|
2004-09-05 02:09:54 +00:00
|
|
|
return (sizeof(struct ksegrp) + sizeof(struct kg_sched));
|
2002-11-21 01:22:38 +00:00
|
|
|
}
|
2006-06-15 06:37:39 +00:00
|
|
|
|
2002-11-21 01:22:38 +00:00
|
|
|
int
|
|
|
|
sched_sizeof_proc(void)
|
|
|
|
{
|
|
|
|
return (sizeof(struct proc));
|
|
|
|
}
|
2006-06-15 06:37:39 +00:00
|
|
|
|
2002-11-21 01:22:38 +00:00
|
|
|
int
|
|
|
|
sched_sizeof_thread(void)
|
|
|
|
{
|
2004-09-05 02:09:54 +00:00
|
|
|
return (sizeof(struct thread) + sizeof(struct kse));
|
2002-11-21 01:22:38 +00:00
|
|
|
}
|
2002-11-21 09:30:55 +00:00
|
|
|
|
|
|
|
fixpt_t
|
2003-10-16 08:39:15 +00:00
|
|
|
sched_pctcpu(struct thread *td)
|
2002-11-21 09:30:55 +00:00
|
|
|
{
|
2003-10-16 21:13:14 +00:00
|
|
|
struct kse *ke;
|
|
|
|
|
|
|
|
ke = td->td_kse;
|
2004-09-05 02:09:54 +00:00
|
|
|
return (ke->ke_pctcpu);
|
2002-11-21 09:30:55 +00:00
|
|
|
}
|
Add scheduler CORE, work I did half a year ago and have recently
picked up again. The scheduler is forked from ULE, but the
algorithm for detecting an interactive process is almost completely
different from ULE's; it comes from the Linux paper "Understanding the
Linux 2.6.8.1 CPU Scheduler", although I still use the same word
"score" as a priority boost, as in the ULE scheduler.
Briefly, the scheduler has the following characteristics:
 1. A timesharing process's nice value is seriously respected;
    the timeslice and the interactivity-detection algorithm are based
    on the nice value.
 2. Per-CPU scheduling queues and load balancing.
 3. O(1) scheduling.
 4. Some CPU affinity code in the wakeup path.
 5. Support for POSIX SCHED_FIFO and SCHED_RR.
Unlike the 4BSD and ULE schedulers, which use the fuzzy RQ_PPQ priority
grouping, this scheduler uses 256 priority queues. Unlike ULE, which uses
both pull and push, this scheduler uses only the pull method; the main
reason is to let a relatively idle CPU do the work, but currently the whole
scheduler is protected by the big sched_lock, so the benefit is not visible
and it can actually be worse than nothing, because all other CPUs are locked
out while we are doing the balancing work, a problem the 4BSD scheduler does
not have.
The scheduler does not support hyperthreading very well; in fact, it does
not distinguish between physical and logical CPUs, which should be improved
in the future. The scheduler also has a priority inversion problem on MP
machines, so it is not good for realtime scheduling and can cause realtime
processes to starve.
As a result, MySQL super-smack seems to run better on my Pentium-D machine
when using libthr, whether on a UP or SMP kernel.
2006-06-13 13:12:56 +00:00
|
|
|
|
|
|
|
void
|
|
|
|
sched_tick(void)
|
|
|
|
{
|
|
|
|
}
|
2004-09-05 02:09:54 +00:00
|
|
|
#define KERN_SWITCH_INCLUDE 1
|
|
|
|
#include "kern/kern_switch.c"
|