2002-10-12 05:32:24 +00:00
|
|
|
/*-
|
|
|
|
* Copyright (c) 1982, 1986, 1990, 1991, 1993
|
|
|
|
* The Regents of the University of California. All rights reserved.
|
|
|
|
* (c) UNIX System Laboratories, Inc.
|
|
|
|
* All or some portions of this file are derived from material licensed
|
|
|
|
* to the University of California by American Telephone and Telegraph
|
|
|
|
* Co. or Unix System Laboratories, Inc. and are reproduced herein with
|
|
|
|
* the permission of UNIX System Laboratories, Inc.
|
|
|
|
*
|
|
|
|
* Redistribution and use in source and binary forms, with or without
|
|
|
|
* modification, are permitted provided that the following conditions
|
|
|
|
* are met:
|
|
|
|
* 1. Redistributions of source code must retain the above copyright
|
|
|
|
* notice, this list of conditions and the following disclaimer.
|
|
|
|
* 2. Redistributions in binary form must reproduce the above copyright
|
|
|
|
* notice, this list of conditions and the following disclaimer in the
|
|
|
|
* documentation and/or other materials provided with the distribution.
|
|
|
|
* 4. Neither the name of the University nor the names of its contributors
|
|
|
|
* may be used to endorse or promote products derived from this software
|
|
|
|
* without specific prior written permission.
|
|
|
|
*
|
|
|
|
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
|
|
|
|
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
|
|
|
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
|
|
|
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
|
|
|
|
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
|
|
|
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
|
|
|
|
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
|
|
|
|
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
|
|
|
|
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
|
|
|
|
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
|
|
|
|
* SUCH DAMAGE.
|
|
|
|
*/
|
|
|
|
|
2003-06-11 00:56:59 +00:00
|
|
|
#include <sys/cdefs.h>
|
|
|
|
__FBSDID("$FreeBSD$");
|
|
|
|
|
2005-06-24 00:16:57 +00:00
|
|
|
#include "opt_hwpmc_hooks.h"
|
2008-03-20 01:32:48 +00:00
|
|
|
#include "opt_sched.h"
|
2008-05-25 01:44:58 +00:00
|
|
|
#include "opt_kdtrace.h"
|
2005-06-24 00:16:57 +00:00
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
#include <sys/param.h>
|
|
|
|
#include <sys/systm.h>
|
2008-03-02 21:34:57 +00:00
|
|
|
#include <sys/cpuset.h>
|
2002-10-12 05:32:24 +00:00
|
|
|
#include <sys/kernel.h>
|
|
|
|
#include <sys/ktr.h>
|
|
|
|
#include <sys/lock.h>
|
2003-12-26 17:07:29 +00:00
|
|
|
#include <sys/kthread.h>
|
2002-10-12 05:32:24 +00:00
|
|
|
#include <sys/mutex.h>
|
|
|
|
#include <sys/proc.h>
|
|
|
|
#include <sys/resourcevar.h>
|
|
|
|
#include <sys/sched.h>
|
2012-05-15 01:30:25 +00:00
|
|
|
#include <sys/sdt.h>
|
2002-10-12 05:32:24 +00:00
|
|
|
#include <sys/smp.h>
|
|
|
|
#include <sys/sysctl.h>
|
|
|
|
#include <sys/sx.h>
|
Rework the interface between priority propagation (lending) and the
schedulers a bit to ensure more correct handling of priorities and fewer
priority inversions:
- Add two functions to the sched(9) API to handle priority lending:
sched_lend_prio() and sched_unlend_prio(). The turnstile code uses these
functions to ask the scheduler to lend a thread a set priority and to
tell the scheduler when it thinks it is ok for a thread to stop borrowing
priority. The unlend case is slightly complex in that the turnstile code
tells the scheduler what the minimum priority of the thread needs to be
to satisfy the requirements of any other threads blocked on locks owned
by the thread in question. The scheduler then decides whether the thread
can go back to normal mode (if its normal priority is high enough to
satisfy the pending lock requests) or if it should continue to use the
priority specified in the sched_unlend_prio() call. This involves adding
a new per-thread flag TDF_BORROWING that replaces the ULE-only kse flag
for priority elevation.
- Schedulers now refuse to lower the priority of a thread that is currently
borrowing another thread's priority.
- If a scheduler changes the priority of a thread that is currently sitting
on a turnstile, it will call a new function turnstile_adjust() to inform
the turnstile code of the change. This function resorts the thread on
the priority list of the turnstile if needed, and if the thread ends up
at the head of the list (due to having the highest priority) and its
priority was raised, then it will propagate that new priority to the
owner of the lock it is blocked on.
Some additional fixes specific to the 4BSD scheduler include:
- Common code for updating the priority of a thread when the user priority
of its associated kse group has been consolidated in a new static
function resetpriority_thread(). One change to this function is that
it will now only adjust the priority of a thread if it already has a
time sharing priority, thus preserving any boosts from a tsleep() until
the thread returns to userland. Also, resetpriority() no longer calls
maybe_resched() on each thread in the group. Instead, the code calling
resetpriority() is responsible for calling resetpriority_thread() on
any threads that need to be updated.
- schedcpu() now uses resetpriority_thread() instead of just calling
sched_prio() directly after it updates a kse group's user priority.
- sched_clock() now uses resetpriority_thread() rather than writing
directly to td_priority.
- sched_nice() now updates all the priorities of the threads after the
group priority has been adjusted.
Discussed with: bde
Reviewed by: ups, jeffr
Tested on: 4bsd, ule
Tested on: i386, alpha, sparc64
2004-12-30 20:52:44 +00:00
|
|
|
#include <sys/turnstile.h>
|
2006-08-25 06:12:53 +00:00
|
|
|
#include <sys/umtx.h>
|
2006-06-29 19:37:31 +00:00
|
|
|
#include <machine/pcb.h>
|
2004-09-03 08:19:31 +00:00
|
|
|
#include <machine/smp.h>
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2005-04-19 04:01:25 +00:00
|
|
|
#ifdef HWPMC_HOOKS
|
|
|
|
#include <sys/pmckern.h>
|
|
|
|
#endif
|
|
|
|
|
2008-05-25 01:44:58 +00:00
|
|
|
#ifdef KDTRACE_HOOKS
|
|
|
|
#include <sys/dtrace_bsd.h>
|
|
|
|
int dtrace_vtime_active;
|
|
|
|
dtrace_vtime_switch_func_t dtrace_vtime_switch_func;
|
|
|
|
#endif
|
|
|
|
|
2002-11-21 09:14:13 +00:00
|
|
|
/*
|
|
|
|
* INVERSE_ESTCPU_WEIGHT is only suitable for statclock() frequencies in
|
|
|
|
* the range 100-256 Hz (approximately).
|
|
|
|
*/
|
|
|
|
#define ESTCPULIM(e) \
|
|
|
|
min((e), INVERSE_ESTCPU_WEIGHT * (NICE_WEIGHT * (PRIO_MAX - PRIO_MIN) - \
|
|
|
|
RQ_PPQ) + INVERSE_ESTCPU_WEIGHT - 1)
|
2003-11-09 13:45:54 +00:00
|
|
|
#ifdef SMP
|
|
|
|
#define INVERSE_ESTCPU_WEIGHT (8 * smp_cpus)
|
|
|
|
#else
|
2002-11-21 09:14:13 +00:00
|
|
|
#define INVERSE_ESTCPU_WEIGHT 8 /* 1 / (priorities per estcpu level). */
|
2003-11-09 13:45:54 +00:00
|
|
|
#endif
|
2002-11-21 09:14:13 +00:00
|
|
|
#define NICE_WEIGHT 1 /* Priorities per nice level. */
|
|
|
|
|
2009-01-25 07:35:10 +00:00
|
|
|
#define TS_NAME_LEN (MAXCOMLEN + sizeof(" td ") + sizeof(__XSTRING(UINT_MAX)))
|
2009-01-17 07:17:57 +00:00
|
|
|
|
2006-10-26 21:42:22 +00:00
|
|
|
/*
|
|
|
|
* The schedulable entity that runs a context.
|
2006-12-06 06:34:57 +00:00
|
|
|
* This is an extension to the thread structure and is tailored to
|
|
|
|
* the requirements of this scheduler.
|
2006-10-26 21:42:22 +00:00
|
|
|
*/
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched {
|
|
|
|
fixpt_t ts_pctcpu; /* (j) %cpu during p_swtime. */
|
|
|
|
int ts_cpticks; /* (j) Ticks of cpu time. */
|
2007-09-21 04:10:23 +00:00
|
|
|
int ts_slptime; /* (j) Seconds !RUNNING. */
|
2012-08-09 18:09:59 +00:00
|
|
|
int ts_slice; /* Remaining part of time slice. */
|
2008-07-28 17:25:24 +00:00
|
|
|
int ts_flags;
|
2006-12-06 06:34:57 +00:00
|
|
|
struct runq *ts_runq; /* runq the thread is currently on */
|
2009-01-17 07:17:57 +00:00
|
|
|
#ifdef KTR
|
|
|
|
char ts_name[TS_NAME_LEN];
|
|
|
|
#endif
|
2003-01-12 19:04:49 +00:00
|
|
|
};
|
2004-09-05 02:09:54 +00:00
|
|
|
|
|
|
|
/* flags kept in td_flags */
|
2006-12-06 06:34:57 +00:00
|
|
|
#define TDF_DIDRUN TDF_SCHED0 /* thread actually ran. */
|
2008-03-20 05:51:16 +00:00
|
|
|
#define TDF_BOUND TDF_SCHED1 /* Bound to one CPU. */
|
2012-08-09 19:26:13 +00:00
|
|
|
#define TDF_SLICEEND TDF_SCHED2 /* Thread time slice is over. */
|
2004-10-05 21:10:44 +00:00
|
|
|
|
2008-07-28 17:25:24 +00:00
|
|
|
/* flags kept in ts_flags */
|
|
|
|
#define TSF_AFFINITY 0x0001 /* Has a non-"full" CPU set. */
|
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
#define SKE_RUNQ_PCPU(ts) \
|
|
|
|
((ts)->ts_runq != 0 && (ts)->ts_runq != &runq)
|
2003-01-12 19:04:49 +00:00
|
|
|
|
2008-07-28 17:25:24 +00:00
|
|
|
#define THREAD_CAN_SCHED(td, cpu) \
|
|
|
|
CPU_ISSET((cpu), &(td)->td_cpuset->cs_mask)
|
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
static struct td_sched td_sched0;
|
2007-07-18 20:46:06 +00:00
|
|
|
struct mtx sched_lock;
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2012-08-10 19:02:49 +00:00
|
|
|
static int realstathz = 127; /* stathz is sometimes 0 and run off of hz. */
|
2004-02-01 02:46:47 +00:00
|
|
|
static int sched_tdcnt; /* Total runnable threads in the system. */
|
2012-08-10 19:02:49 +00:00
|
|
|
static int sched_slice = 12; /* Thread run time before rescheduling. */
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
static void setup_runqs(void);
|
2003-12-26 17:07:29 +00:00
|
|
|
static void schedcpu(void);
|
2004-01-25 08:00:04 +00:00
|
|
|
static void schedcpu_thread(void);
|
2004-12-30 20:52:44 +00:00
|
|
|
static void sched_priority(struct thread *td, u_char prio);
|
2002-10-12 05:32:24 +00:00
|
|
|
static void sched_setup(void *dummy);
|
|
|
|
static void maybe_resched(struct thread *td);
|
2006-10-26 21:42:22 +00:00
|
|
|
static void updatepri(struct thread *td);
|
|
|
|
static void resetpriority(struct thread *td);
|
|
|
|
static void resetpriority_thread(struct thread *td);
|
2004-09-03 09:19:49 +00:00
|
|
|
#ifdef SMP
|
2008-07-28 17:25:24 +00:00
|
|
|
static int sched_pickcpu(struct thread *td);
|
2008-07-28 15:52:02 +00:00
|
|
|
static int forward_wakeup(int cpunum);
|
|
|
|
static void kick_other_cpu(int pri, int cpuid);
|
2004-09-03 09:19:49 +00:00
|
|
|
#endif
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
static struct kproc_desc sched_kp = {
|
|
|
|
"schedcpu",
|
|
|
|
schedcpu_thread,
|
|
|
|
NULL
|
|
|
|
};
|
2008-03-16 10:58:09 +00:00
|
|
|
SYSINIT(schedcpu, SI_SUB_RUN_SCHEDULER, SI_ORDER_FIRST, kproc_start,
|
|
|
|
&sched_kp);
|
|
|
|
SYSINIT(sched_setup, SI_SUB_RUN_QUEUE, SI_ORDER_FIRST, sched_setup, NULL);
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2012-08-09 18:09:59 +00:00
|
|
|
static void sched_initticks(void *dummy);
|
|
|
|
SYSINIT(sched_initticks, SI_SUB_CLOCKS, SI_ORDER_THIRD, sched_initticks,
|
|
|
|
NULL);
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
/*
|
|
|
|
* Global run queue.
|
|
|
|
*/
|
|
|
|
static struct runq runq;
|
2004-01-25 08:00:04 +00:00
|
|
|
|
|
|
|
#ifdef SMP
|
|
|
|
/*
|
|
|
|
* Per-CPU run queues
|
|
|
|
*/
|
|
|
|
static struct runq runq_pcpu[MAXCPU];
|
2008-07-28 17:25:24 +00:00
|
|
|
long runq_length[MAXCPU];
|
2011-04-30 22:30:18 +00:00
|
|
|
|
Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for number of cpus > 32 (as it is today).
Right now, cpumask_t is an int, 32 bits on all our supported architectures.
cpuset_t, on the other side, is implemented as an array of longs, and
easily extendible by definition.
The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN
while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.
Some technical notes:
- This commit may be considered an ABI nop for all the architectures
different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, needs to be
accessed avoiding migration, because the size of cpuset_t should be
considered unknown
- size of cpuset_t objects is different between kernel and userland (this is
primarily done in order to leave some more space in userland to cope
with KBI extensions). If you need to access kernel cpuset_t from the
userland please refer to example in this patch on how to do that
correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now
The patch has been tested by sbruno and Nicholas Esborn on opteron
4 x 12 pack CPUs. More testing on big SMP is expected to come soon.
pluknet tested the patch with his 8-ways on both amd64 and i386.
Tested by: pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by: jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
|
|
|
static cpuset_t idle_cpus_mask;
|
2004-01-25 08:00:04 +00:00
|
|
|
#endif
|
|
|
|
|
2010-09-11 07:08:22 +00:00
|
|
|
struct pcpuidlestat {
|
|
|
|
u_int idlecalls;
|
|
|
|
u_int oldidlecalls;
|
|
|
|
};
|
2010-11-22 19:32:54 +00:00
|
|
|
static DPCPU_DEFINE(struct pcpuidlestat, idlestat);
|
2010-09-11 07:08:22 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
static void
|
|
|
|
setup_runqs(void)
|
|
|
|
{
|
|
|
|
#ifdef SMP
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < MAXCPU; ++i)
|
|
|
|
runq_init(&runq_pcpu[i]);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
runq_init(&runq);
|
|
|
|
}
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2012-08-10 19:02:49 +00:00
|
|
|
static int
|
|
|
|
sysctl_kern_quantum(SYSCTL_HANDLER_ARGS)
|
|
|
|
{
|
|
|
|
int error, new_val, period;
|
|
|
|
|
|
|
|
period = 1000000 / realstathz;
|
|
|
|
new_val = period * sched_slice;
|
|
|
|
error = sysctl_handle_int(oidp, &new_val, 0, req);
|
2012-08-11 20:24:39 +00:00
|
|
|
if (error != 0 || req->newptr == NULL)
|
2012-08-10 19:02:49 +00:00
|
|
|
return (error);
|
|
|
|
if (new_val <= 0)
|
|
|
|
return (EINVAL);
|
2012-08-11 20:24:39 +00:00
|
|
|
sched_slice = imax(1, (new_val + period / 2) / period);
|
|
|
|
hogticks = imax(1, (2 * hz * sched_slice + realstathz / 2) /
|
|
|
|
realstathz);
|
2012-08-10 19:02:49 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
2004-07-23 23:09:00 +00:00
|
|
|
SYSCTL_NODE(_kern, OID_AUTO, sched, CTLFLAG_RD, 0, "Scheduler");
|
2004-06-21 22:05:46 +00:00
|
|
|
|
2004-07-23 23:09:00 +00:00
|
|
|
SYSCTL_STRING(_kern_sched, OID_AUTO, name, CTLFLAG_RD, "4BSD", 0,
|
|
|
|
"Scheduler name");
|
2012-08-10 19:02:49 +00:00
|
|
|
SYSCTL_PROC(_kern_sched, OID_AUTO, quantum, CTLTYPE_INT | CTLFLAG_RW,
|
|
|
|
NULL, 0, sysctl_kern_quantum, "I",
|
2012-08-11 20:24:39 +00:00
|
|
|
"Quantum for timeshare threads in microseconds");
|
2012-08-09 18:09:59 +00:00
|
|
|
SYSCTL_INT(_kern_sched, OID_AUTO, slice, CTLFLAG_RW, &sched_slice, 0,
|
2012-08-11 20:24:39 +00:00
|
|
|
"Quantum for timeshare threads in stathz ticks");
|
2004-09-03 09:15:10 +00:00
|
|
|
#ifdef SMP
|
2004-09-03 07:42:31 +00:00
|
|
|
/* Enable forwarding of wakeups to all other cpus */
|
2011-11-07 15:43:11 +00:00
|
|
|
static SYSCTL_NODE(_kern_sched, OID_AUTO, ipiwakeup, CTLFLAG_RD, NULL,
|
|
|
|
"Kernel SMP");
|
2004-09-03 07:42:31 +00:00
|
|
|
|
2008-03-20 02:14:02 +00:00
|
|
|
static int runq_fuzz = 1;
|
|
|
|
SYSCTL_INT(_kern_sched, OID_AUTO, runq_fuzz, CTLFLAG_RW, &runq_fuzz, 0, "");
|
|
|
|
|
2004-09-05 02:19:53 +00:00
|
|
|
static int forward_wakeup_enabled = 1;
|
2004-09-03 07:42:31 +00:00
|
|
|
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, enabled, CTLFLAG_RW,
|
|
|
|
&forward_wakeup_enabled, 0,
|
|
|
|
"Forwarding of wakeup to idle CPUs");
|
|
|
|
|
|
|
|
static int forward_wakeups_requested = 0;
|
|
|
|
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, requested, CTLFLAG_RD,
|
|
|
|
&forward_wakeups_requested, 0,
|
|
|
|
"Requests for Forwarding of wakeup to idle CPUs");
|
|
|
|
|
|
|
|
static int forward_wakeups_delivered = 0;
|
|
|
|
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, delivered, CTLFLAG_RD,
|
|
|
|
&forward_wakeups_delivered, 0,
|
|
|
|
"Completed Forwarding of wakeup to idle CPUs");
|
|
|
|
|
2004-09-05 02:19:53 +00:00
|
|
|
static int forward_wakeup_use_mask = 1;
|
2004-09-03 07:42:31 +00:00
|
|
|
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, usemask, CTLFLAG_RW,
|
|
|
|
&forward_wakeup_use_mask, 0,
|
|
|
|
"Use the mask of idle cpus");
|
|
|
|
|
|
|
|
static int forward_wakeup_use_loop = 0;
|
|
|
|
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, useloop, CTLFLAG_RW,
|
|
|
|
&forward_wakeup_use_loop, 0,
|
|
|
|
"Use a loop to find idle cpus");
|
|
|
|
|
2004-09-03 09:15:10 +00:00
|
|
|
#endif
|
2006-12-06 06:34:57 +00:00
|
|
|
#if 0
|
2004-09-10 21:04:38 +00:00
|
|
|
static int sched_followon = 0;
|
|
|
|
SYSCTL_INT(_kern_sched, OID_AUTO, followon, CTLFLAG_RW,
|
|
|
|
&sched_followon, 0,
|
|
|
|
"allow threads to share a quantum");
|
2006-10-26 21:42:22 +00:00
|
|
|
#endif
|
2004-09-03 07:42:31 +00:00
|
|
|
|
2012-05-15 01:30:25 +00:00
|
|
|
SDT_PROVIDER_DEFINE(sched);
|
|
|
|
|
|
|
|
SDT_PROBE_DEFINE3(sched, , , change_pri, change-pri, "struct thread *",
|
|
|
|
"struct proc *", "uint8_t");
|
|
|
|
SDT_PROBE_DEFINE3(sched, , , dequeue, dequeue, "struct thread *",
|
|
|
|
"struct proc *", "void *");
|
|
|
|
SDT_PROBE_DEFINE4(sched, , , enqueue, enqueue, "struct thread *",
|
|
|
|
"struct proc *", "void *", "int");
|
|
|
|
SDT_PROBE_DEFINE4(sched, , , lend_pri, lend-pri, "struct thread *",
|
|
|
|
"struct proc *", "uint8_t", "struct thread *");
|
|
|
|
SDT_PROBE_DEFINE2(sched, , , load_change, load-change, "int", "int");
|
|
|
|
SDT_PROBE_DEFINE2(sched, , , off_cpu, off-cpu, "struct thread *",
|
|
|
|
"struct proc *");
|
|
|
|
SDT_PROBE_DEFINE(sched, , , on_cpu, on-cpu);
|
|
|
|
SDT_PROBE_DEFINE(sched, , , remain_cpu, remain-cpu);
|
|
|
|
SDT_PROBE_DEFINE2(sched, , , surrender, surrender, "struct thread *",
|
|
|
|
"struct proc *");
|
|
|
|
|
2004-12-26 00:16:24 +00:00
|
|
|
static __inline void
|
|
|
|
sched_load_add(void)
|
|
|
|
{
|
2009-01-17 07:17:57 +00:00
|
|
|
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_tdcnt++;
|
2009-01-17 07:17:57 +00:00
|
|
|
KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt);
|
2012-05-15 01:30:25 +00:00
|
|
|
SDT_PROBE2(sched, , , load_change, NOCPU, sched_tdcnt);
|
2004-12-26 00:16:24 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static __inline void
|
|
|
|
sched_load_rem(void)
|
|
|
|
{
|
2009-01-17 07:17:57 +00:00
|
|
|
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_tdcnt--;
|
2009-01-17 07:17:57 +00:00
|
|
|
KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt);
|
2012-05-15 01:30:25 +00:00
|
|
|
SDT_PROBE2(sched, , , load_change, NOCPU, sched_tdcnt);
|
2004-12-26 00:16:24 +00:00
|
|
|
}
|
2002-10-12 05:32:24 +00:00
|
|
|
/*
|
|
|
|
* Arrange to reschedule if necessary, taking the priorities and
|
|
|
|
* schedulers into account.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
maybe_resched(struct thread *td)
|
|
|
|
{
|
|
|
|
|
Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
similar to solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
2004-09-05 02:09:54 +00:00
|
|
|
if (td->td_priority < curthread->td_priority)
|
2003-02-17 09:55:10 +00:00
|
|
|
curthread->td_flags |= TDF_NEEDRESCHED;
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
|
2008-03-20 02:14:02 +00:00
|
|
|
/*
|
|
|
|
* This function is called when a thread is about to be put on run queue
|
|
|
|
* because it has been made runnable or its priority has been adjusted. It
|
|
|
|
 * determines whether the new thread should preempt the current thread. If so,
|
|
|
|
* it switches to it and eventually returns true. If not, it returns false
|
|
|
|
* so that the caller may place the thread on an appropriate run queue.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
maybe_preempt(struct thread *td)
|
|
|
|
{
|
|
|
|
#ifdef PREEMPTION
|
|
|
|
struct thread *ctd;
|
|
|
|
int cpri, pri;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The new thread should not preempt the current thread if any of the
|
|
|
|
* following conditions are true:
|
|
|
|
*
|
|
|
|
* - The kernel is in the throes of crashing (panicstr).
|
|
|
|
* - The current thread has a higher (numerically lower) or
|
|
|
|
* equivalent priority. Note that this prevents curthread from
|
|
|
|
* trying to preempt to itself.
|
|
|
|
* - It is too early in the boot for context switches (cold is set).
|
|
|
|
* - The current thread has an inhibitor set or is in the process of
|
|
|
|
* exiting. In this case, the current thread is about to switch
|
|
|
|
* out anyways, so there's no point in preempting. If we did,
|
|
|
|
* the current thread would not be properly resumed as well, so
|
|
|
|
* just avoid that whole landmine.
|
|
|
|
* - If the new thread's priority is not a realtime priority and
|
|
|
|
* the current thread's priority is not an idle priority and
|
|
|
|
* FULL_PREEMPTION is disabled.
|
|
|
|
*
|
|
|
|
* If all of these conditions are false, but the current thread is in
|
|
|
|
* a nested critical section, then we have to defer the preemption
|
|
|
|
* until we exit the critical section. Otherwise, switch immediately
|
|
|
|
* to the new thread.
|
|
|
|
*/
|
|
|
|
ctd = curthread;
|
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
|
|
|
KASSERT((td->td_inhibitors == 0),
|
|
|
|
("maybe_preempt: trying to run inhibited thread"));
|
|
|
|
pri = td->td_priority;
|
|
|
|
cpri = ctd->td_priority;
|
|
|
|
if (panicstr != NULL || pri >= cpri || cold /* || dumping */ ||
|
|
|
|
TD_IS_INHIBITED(ctd))
|
|
|
|
return (0);
|
|
|
|
#ifndef FULL_PREEMPTION
|
|
|
|
if (pri > PRI_MAX_ITHD && cpri < PRI_MIN_IDLE)
|
|
|
|
return (0);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
if (ctd->td_critnest > 1) {
|
|
|
|
CTR1(KTR_PROC, "maybe_preempt: in critical section %d",
|
|
|
|
ctd->td_critnest);
|
|
|
|
ctd->td_owepreempt = 1;
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* Thread is runnable but not yet put on system run queue.
|
|
|
|
*/
|
|
|
|
MPASS(ctd->td_lock == td->td_lock);
|
|
|
|
MPASS(TD_ON_RUNQ(td));
|
|
|
|
TD_SET_RUNNING(td);
|
|
|
|
CTR3(KTR_PROC, "preempting to thread %p (pid %d, %s)\n", td,
|
|
|
|
td->td_proc->p_pid, td->td_name);
|
2008-04-17 04:20:10 +00:00
|
|
|
mi_switch(SW_INVOL | SW_PREEMPT | SWT_PREEMPT, td);
|
2008-03-20 02:14:02 +00:00
|
|
|
/*
|
|
|
|
* td's lock pointer may have changed. We have to return with it
|
|
|
|
* locked.
|
|
|
|
*/
|
|
|
|
spinlock_enter();
|
|
|
|
thread_unlock(ctd);
|
|
|
|
thread_lock(td);
|
|
|
|
spinlock_exit();
|
|
|
|
return (1);
|
|
|
|
#else
|
|
|
|
return (0);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
/*
|
|
|
|
* Constants for digital decay and forget:
|
2006-10-26 21:42:22 +00:00
|
|
|
* 90% of (td_estcpu) usage in 5 * loadav time
|
2006-12-06 06:34:57 +00:00
|
|
|
* 95% of (ts_pctcpu) usage in 60 seconds (load insensitive)
|
2002-10-12 05:32:24 +00:00
|
|
|
* Note that, as ps(1) mentions, this can let percentages
|
|
|
|
* total over 100% (I've seen 137.9% for 3 processes).
|
|
|
|
*
|
2006-10-26 21:42:22 +00:00
|
|
|
* Note that schedclock() updates td_estcpu and p_cpticks asynchronously.
|
2002-10-12 05:32:24 +00:00
|
|
|
*
|
2006-10-26 21:42:22 +00:00
|
|
|
* We wish to decay away 90% of td_estcpu in (5 * loadavg) seconds.
|
2002-10-12 05:32:24 +00:00
|
|
|
* That is, the system wants to compute a value of decay such
|
|
|
|
* that the following for loop:
|
|
|
|
* for (i = 0; i < (5 * loadavg); i++)
|
2006-10-26 21:42:22 +00:00
|
|
|
* td_estcpu *= decay;
|
2002-10-12 05:32:24 +00:00
|
|
|
* will compute
|
2006-10-26 21:42:22 +00:00
|
|
|
* td_estcpu *= 0.1;
|
2002-10-12 05:32:24 +00:00
|
|
|
* for all values of loadavg:
|
|
|
|
*
|
|
|
|
* Mathematically this loop can be expressed by saying:
|
|
|
|
* decay ** (5 * loadavg) ~= .1
|
|
|
|
*
|
|
|
|
* The system computes decay as:
|
|
|
|
* decay = (2 * loadavg) / (2 * loadavg + 1)
|
|
|
|
*
|
|
|
|
* We wish to prove that the system's computation of decay
|
|
|
|
* will always fulfill the equation:
|
|
|
|
* decay ** (5 * loadavg) ~= .1
|
|
|
|
*
|
|
|
|
* If we compute b as:
|
|
|
|
* b = 2 * loadavg
|
|
|
|
* then
|
|
|
|
* decay = b / (b + 1)
|
|
|
|
*
|
|
|
|
* We now need to prove two things:
|
|
|
|
* 1) Given factor ** (5 * loadavg) ~= .1, prove factor == b/(b+1)
|
|
|
|
* 2) Given b/(b+1) ** power ~= .1, prove power == (5 * loadavg)
|
|
|
|
*
|
|
|
|
* Facts:
|
|
|
|
* For x close to zero, exp(x) =~ 1 + x, since
|
|
|
|
* exp(x) = 0! + x**1/1! + x**2/2! + ... .
|
|
|
|
* therefore exp(-1/b) =~ 1 - (1/b) = (b-1)/b.
|
|
|
|
* For x close to zero, ln(1+x) =~ x, since
|
|
|
|
* ln(1+x) = x - x**2/2 + x**3/3 - ... -1 < x < 1
|
|
|
|
* therefore ln(b/(b+1)) = ln(1 - 1/(b+1)) =~ -1/(b+1).
|
|
|
|
* ln(.1) =~ -2.30
|
|
|
|
*
|
|
|
|
* Proof of (1):
|
|
|
|
* Solve (factor)**(power) =~ .1 given power (5*loadav):
|
|
|
|
* solving for factor,
|
|
|
|
* ln(factor) =~ (-2.30/5*loadav), or
|
|
|
|
* factor =~ exp(-1/((5/2.30)*loadav)) =~ exp(-1/(2*loadav)) =
|
|
|
|
* exp(-1/b) =~ (b-1)/b =~ b/(b+1). QED
|
|
|
|
*
|
|
|
|
* Proof of (2):
|
|
|
|
* Solve (factor)**(power) =~ .1 given factor == (b/(b+1)):
|
|
|
|
* solving for power,
|
|
|
|
* power*ln(b/(b+1)) =~ -2.30, or
|
|
|
|
* power =~ 2.3 * (b + 1) = 4.6*loadav + 2.3 =~ 5*loadav. QED
|
|
|
|
*
|
|
|
|
* Actual power values for the implemented algorithm are as follows:
|
|
|
|
* loadav: 1 2 3 4
|
|
|
|
* power: 5.68 10.32 14.94 19.55
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* calculations for digital decay to forget 90% of usage in 5*loadav sec */
|
|
|
|
#define loadfactor(loadav) (2 * (loadav))
|
|
|
|
#define decay_cpu(loadfac, cpu) (((loadfac) * (cpu)) / ((loadfac) + FSCALE))
|
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
/* decay 95% of `ts_pctcpu' in 60 seconds; see CCPU_SHIFT before changing */
|
2002-10-12 05:32:24 +00:00
|
|
|
static fixpt_t ccpu = 0.95122942450071400909 * FSCALE; /* exp(-1/20) */
|
2011-01-13 18:20:37 +00:00
|
|
|
SYSCTL_UINT(_kern, OID_AUTO, ccpu, CTLFLAG_RD, &ccpu, 0, "");
|
2002-10-12 05:32:24 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If `ccpu' is not equal to `exp(-1/20)' and you still want to use the
|
|
|
|
* faster/more-accurate formula, you'll have to estimate CCPU_SHIFT below
|
|
|
|
* and possibly adjust FSHIFT in "param.h" so that (FSHIFT >= CCPU_SHIFT).
|
|
|
|
*
|
|
|
|
* To estimate CCPU_SHIFT for exp(-1/20), the following formula was used:
|
|
|
|
* 1 - exp(-1/20) ~= 0.0487 ~= 0.0488 == 1 (fixed pt, *11* bits).
|
|
|
|
*
|
|
|
|
* If you don't want to bother with the faster/more-accurate formula, you
|
|
|
|
* can set CCPU_SHIFT to (FSHIFT + 1) which will use a slower/less-accurate
|
|
|
|
* (more general) method of calculating the %age of CPU used by a process.
|
|
|
|
*/
|
|
|
|
#define CCPU_SHIFT 11
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Recompute process priorities, every hz ticks.
|
|
|
|
* MP-safe, called without the Giant mutex.
|
|
|
|
*/
|
|
|
|
/* ARGSUSED */
|
|
|
|
static void
|
2003-12-26 17:07:29 +00:00
|
|
|
schedcpu(void)
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
|
|
|
register fixpt_t loadfac = loadfactor(averunnable.ldavg[0]);
|
|
|
|
struct thread *td;
|
|
|
|
struct proc *p;
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched *ts;
|
2012-08-09 18:09:59 +00:00
|
|
|
int awake;
|
2002-10-12 05:32:24 +00:00
|
|
|
|
|
|
|
sx_slock(&allproc_lock);
|
|
|
|
FOREACH_PROC_IN_SYSTEM(p) {
|
2008-03-19 06:19:01 +00:00
|
|
|
PROC_LOCK(p);
|
2011-04-06 17:47:22 +00:00
|
|
|
if (p->p_state == PRS_NEW) {
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
continue;
|
|
|
|
}
|
2008-07-28 15:52:02 +00:00
|
|
|
FOREACH_THREAD_IN_PROC(p, td) {
|
2002-10-12 05:32:24 +00:00
|
|
|
awake = 0;
|
2007-06-04 23:50:30 +00:00
|
|
|
thread_lock(td);
|
2006-12-06 06:34:57 +00:00
|
|
|
ts = td->td_sched;
|
2006-10-26 21:42:22 +00:00
|
|
|
/*
|
|
|
|
* Increment sleep time (if sleeping). We
|
|
|
|
* ignore overflow, as above.
|
|
|
|
*/
|
|
|
|
/*
|
2006-12-06 06:34:57 +00:00
|
|
|
* The td_sched slptimes are not touched in wakeup
|
|
|
|
* because the thread may not HAVE everything in
|
|
|
|
* memory? XXX I think this is out of date.
|
2006-10-26 21:42:22 +00:00
|
|
|
*/
|
2007-01-23 08:46:51 +00:00
|
|
|
if (TD_ON_RUNQ(td)) {
|
2006-10-26 21:42:22 +00:00
|
|
|
awake = 1;
|
2008-03-20 05:51:16 +00:00
|
|
|
td->td_flags &= ~TDF_DIDRUN;
|
2007-01-23 08:46:51 +00:00
|
|
|
} else if (TD_IS_RUNNING(td)) {
|
2006-10-26 21:42:22 +00:00
|
|
|
awake = 1;
|
2008-03-20 05:51:16 +00:00
|
|
|
/* Do not clear TDF_DIDRUN */
|
|
|
|
} else if (td->td_flags & TDF_DIDRUN) {
|
2006-10-26 21:42:22 +00:00
|
|
|
awake = 1;
|
2008-03-20 05:51:16 +00:00
|
|
|
td->td_flags &= ~TDF_DIDRUN;
|
2006-10-26 21:42:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2006-12-06 06:34:57 +00:00
|
|
|
* ts_pctcpu is only for ps and ttyinfo().
|
2006-10-26 21:42:22 +00:00
|
|
|
*/
|
2006-12-06 06:34:57 +00:00
|
|
|
ts->ts_pctcpu = (ts->ts_pctcpu * ccpu) >> FSHIFT;
|
2006-10-26 21:42:22 +00:00
|
|
|
/*
|
2006-12-06 06:34:57 +00:00
|
|
|
* If the td_sched has been idle the entire second,
|
2006-10-26 21:42:22 +00:00
|
|
|
* stop recalculating its priority until
|
|
|
|
* it wakes up.
|
|
|
|
*/
|
2006-12-06 06:34:57 +00:00
|
|
|
if (ts->ts_cpticks != 0) {
|
2006-10-26 21:42:22 +00:00
|
|
|
#if (FSHIFT >= CCPU_SHIFT)
|
2006-12-06 06:34:57 +00:00
|
|
|
ts->ts_pctcpu += (realstathz == 100)
|
|
|
|
? ((fixpt_t) ts->ts_cpticks) <<
|
|
|
|
(FSHIFT - CCPU_SHIFT) :
|
|
|
|
100 * (((fixpt_t) ts->ts_cpticks)
|
|
|
|
<< (FSHIFT - CCPU_SHIFT)) / realstathz;
|
2006-10-26 21:42:22 +00:00
|
|
|
#else
|
2006-12-06 06:34:57 +00:00
|
|
|
ts->ts_pctcpu += ((FSCALE - ccpu) *
|
|
|
|
(ts->ts_cpticks *
|
|
|
|
FSCALE / realstathz)) >> FSHIFT;
|
2006-10-26 21:42:22 +00:00
|
|
|
#endif
|
2006-12-06 06:34:57 +00:00
|
|
|
ts->ts_cpticks = 0;
|
2006-11-14 05:48:27 +00:00
|
|
|
}
|
2008-07-28 15:52:02 +00:00
|
|
|
/*
|
2006-10-26 21:42:22 +00:00
|
|
|
* If there are ANY running threads in this process,
|
2002-10-12 05:32:24 +00:00
|
|
|
* then don't count it as sleeping.
|
2008-07-28 15:52:02 +00:00
|
|
|
* XXX: this is broken.
|
2002-10-12 05:32:24 +00:00
|
|
|
*/
|
|
|
|
if (awake) {
|
2007-09-21 04:10:23 +00:00
|
|
|
if (ts->ts_slptime > 1) {
|
2002-10-12 05:32:24 +00:00
|
|
|
/*
|
|
|
|
* In an ideal world, this should not
|
|
|
|
* happen, because whoever woke us
|
|
|
|
* up from the long sleep should have
|
|
|
|
* unwound the slptime and reset our
|
|
|
|
* priority before we run at the stale
|
|
|
|
* priority. Should KASSERT at some
|
|
|
|
* point when all the cases are fixed.
|
|
|
|
*/
|
2006-10-26 21:42:22 +00:00
|
|
|
updatepri(td);
|
|
|
|
}
|
2007-09-21 04:10:23 +00:00
|
|
|
ts->ts_slptime = 0;
|
2006-10-26 21:42:22 +00:00
|
|
|
} else
|
2007-09-21 04:10:23 +00:00
|
|
|
ts->ts_slptime++;
|
|
|
|
if (ts->ts_slptime > 1) {
|
Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
similar to solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
|
|
|
thread_unlock(td);
|
2006-10-26 21:42:22 +00:00
|
|
|
continue;
|
Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
similar to solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
|
|
|
}
|
2006-10-26 21:42:22 +00:00
|
|
|
td->td_estcpu = decay_cpu(loadfac, td->td_estcpu);
|
|
|
|
resetpriority(td);
|
|
|
|
resetpriority_thread(td);
|
Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
similar to solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
|
|
|
thread_unlock(td);
|
2008-07-28 15:52:02 +00:00
|
|
|
}
|
2008-03-19 06:19:01 +00:00
|
|
|
PROC_UNLOCK(p);
|
2008-07-28 15:52:02 +00:00
|
|
|
}
|
2002-10-12 05:32:24 +00:00
|
|
|
sx_sunlock(&allproc_lock);
|
2003-12-26 17:07:29 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
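/*
 * Illustrative steady state for the ts_pctcpu update above (assuming
 * FSHIFT == CCPU_SHIFT == 11 and realstathz == 100): a thread on the CPU
 * 100% of the time adds 100 ticks per second, so ts_pctcpu converges to
 *	100 / (1 - exp(-1/20)) ~= 100 / 0.0488 ~= 2048 == FSCALE,
 * i.e. full fixed-point scale represents 100% CPU.
 */
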
/*
 * Main loop for a kthread that executes schedcpu once a second.
 */
static void
schedcpu_thread(void)
{

	for (;;) {
		schedcpu();
		pause("-", hz);
	}
}

/*
 * Recalculate the priority of a process after it has slept for a while.
 * For all load averages >= 1 and max td_estcpu of 255, sleeping for at
 * least six times the loadfactor will decay td_estcpu to zero.
 */
static void
updatepri(struct thread *td)
{
	struct td_sched *ts;
	fixpt_t loadfac;
	unsigned int newcpu;

	ts = td->td_sched;
	loadfac = loadfactor(averunnable.ldavg[0]);
	if (ts->ts_slptime > 5 * loadfac)
		td->td_estcpu = 0;
	else {
		newcpu = td->td_estcpu;
		ts->ts_slptime--;	/* was incremented in schedcpu() */
		while (newcpu && --ts->ts_slptime)
			newcpu = decay_cpu(loadfac, newcpu);
		td->td_estcpu = newcpu;
	}
}

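/*
 * Illustrative numbers for the decay loop above: decay_cpu() scales
 * estcpu by loadfac / (loadfac + FSCALE), i.e. 2 * loadav /
 * (2 * loadav + 1).  At a load average of 1 that is 2/3 per step, and
 * 255 * (2/3)^14 < 1, so a long enough sleep always walks a maximal
 * td_estcpu of 255 down to zero.
 */
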
/*
 * Compute the priority of a process when running in user mode.
 * Arrange to reschedule if the resulting priority is better
 * than that of the current process.
 */
static void
resetpriority(struct thread *td)
{
	register unsigned int newpriority;

	if (td->td_pri_class == PRI_TIMESHARE) {
		newpriority = PUSER + td->td_estcpu / INVERSE_ESTCPU_WEIGHT +
		    NICE_WEIGHT * (td->td_proc->p_nice - PRIO_MIN);
		newpriority = min(max(newpriority, PRI_MIN_TIMESHARE),
		    PRI_MAX_TIMESHARE);
		sched_user_prio(td, newpriority);
	}
}

/*
 * Update the thread's priority when the associated process's user
 * priority changes.
 */
static void
resetpriority_thread(struct thread *td)
{

	/* Only change threads with a time sharing user priority. */
	if (td->td_priority < PRI_MIN_TIMESHARE ||
	    td->td_priority > PRI_MAX_TIMESHARE)
		return;

	/* XXX the whole needresched thing is broken, but not silly. */
	maybe_resched(td);

	sched_prio(td, td->td_user_pri);
}

/* ARGSUSED */
static void
sched_setup(void *dummy)
{

	setup_runqs();

	/* Account for thread0. */
	sched_load_add();
}

/*
 * This routine determines time constants after stathz and hz are setup.
 */
static void
sched_initticks(void *dummy)
{

	realstathz = stathz ? stathz : hz;
	sched_slice = realstathz / 10;	/* ~100ms */
	hogticks = imax(1, (2 * hz * sched_slice + realstathz / 2) /
	    realstathz);
}

/* External interfaces start here */

/*
 * Very early in the boot some setup of scheduler-specific
 * parts of proc0 and of some scheduler resources needs to be done.
 * Called from:
 *  proc0_init()
 */
void
schedinit(void)
{
	/*
	 * Set up the scheduler specific parts of proc0.
	 */
	proc0.p_sched = NULL; /* XXX */
	thread0.td_sched = &td_sched0;
	thread0.td_lock = &sched_lock;
	td_sched0.ts_slice = sched_slice;
	mtx_init(&sched_lock, "sched lock", NULL, MTX_SPIN | MTX_RECURSE);
}

int
sched_runnable(void)
{
#ifdef SMP
	return runq_check(&runq) + runq_check(&runq_pcpu[PCPU_GET(cpuid)]);
#else
	return runq_check(&runq);
#endif
}

int
sched_rr_interval(void)
{

	/* Convert sched_slice from stathz to hz. */
	return (imax(1, (sched_slice * hz + realstathz / 2) / realstathz));
}

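/*
 * Example with hypothetical clock rates: for stathz == 127 and hz == 1000,
 * sched_slice == 127 / 10 == 12 stathz ticks, and the rounded conversion
 * in sched_rr_interval() gives (12 * 1000 + 63) / 127 == 94 hz ticks,
 * roughly a 94ms round-robin interval.
 */
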
/*
 * We adjust the priority of the current process.  The priority of
 * a process gets worse as it accumulates CPU time.  The cpu usage
 * estimator (td_estcpu) is increased here.  resetpriority() will
 * compute a different priority each time td_estcpu increases by
 * INVERSE_ESTCPU_WEIGHT
 * (until MAXPRI is reached).  The cpu usage estimator ramps up
 * quite quickly when the process is running (linearly), and decays
 * away exponentially, at a rate which is proportionally slower when
 * the system is busy.  The basic principle is that the system will
 * 90% forget that the process used a lot of CPU time in 5 * loadav
 * seconds.  This causes the system to favor processes which haven't
 * run much recently, and to round-robin among other processes.
 */
void
sched_clock(struct thread *td)
{
	struct pcpuidlestat *stat;
	struct td_sched *ts;

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	ts = td->td_sched;

	ts->ts_cpticks++;
	td->td_estcpu = ESTCPULIM(td->td_estcpu + 1);
	if ((td->td_estcpu % INVERSE_ESTCPU_WEIGHT) == 0) {
		resetpriority(td);
		resetpriority_thread(td);
	}

	/*
	 * Force a context switch if the current thread has used up a full
	 * time slice (default is 100ms).
	 */
	if (!TD_IS_IDLETHREAD(td) && --ts->ts_slice <= 0) {
		ts->ts_slice = sched_slice;
		td->td_flags |= TDF_NEEDRESCHED | TDF_SLICEEND;
	}

	stat = DPCPU_PTR(idlestat);
	stat->oldidlecalls = stat->idlecalls;
	stat->idlecalls = 0;
}

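/*
 * Sanity check of the "90% forgotten in 5 * loadav seconds" claim in the
 * comment before sched_clock() (a sketch): the per-second decay factor
 *	2 * loadav / (2 * loadav + 1) ~= exp(-1 / (2 * loadav))
 * compounds over 5 * loadav seconds to about exp(-2.5) ~= 0.08, i.e.
 * roughly 90% of the accumulated estcpu has decayed away.
 */
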
/*
 * Charge child's scheduling CPU usage to parent.
 */
void
sched_exit(struct proc *p, struct thread *td)
{

	KTR_STATE1(KTR_SCHED, "thread", sched_tdname(td), "proc exit",
	    "prio:%d", td->td_priority);

	PROC_LOCK_ASSERT(p, MA_OWNED);
	sched_exit_thread(FIRST_THREAD_IN_PROC(p), td);
}

void
sched_exit_thread(struct thread *td, struct thread *child)
{

	KTR_STATE1(KTR_SCHED, "thread", sched_tdname(child), "exit",
	    "prio:%d", child->td_priority);
	thread_lock(td);
	td->td_estcpu = ESTCPULIM(td->td_estcpu + child->td_estcpu);
	thread_unlock(td);
	thread_lock(child);
	if ((child->td_flags & TDF_NOLOAD) == 0)
		sched_load_rem();
	thread_unlock(child);
}

void
sched_fork(struct thread *td, struct thread *childtd)
{
	sched_fork_thread(td, childtd);
}

void
sched_fork_thread(struct thread *td, struct thread *childtd)
{
	struct td_sched *ts;

	childtd->td_estcpu = td->td_estcpu;
	childtd->td_lock = &sched_lock;
	childtd->td_cpuset = cpuset_ref(td->td_cpuset);
	childtd->td_priority = childtd->td_base_pri;
	ts = childtd->td_sched;
	bzero(ts, sizeof(*ts));
	ts->ts_flags |= (td->td_sched->ts_flags & TSF_AFFINITY);
|
|
|
ts->ts_slice = 1;
|
2002-10-12 05:32:24 +00:00
|
|
|
}

void
sched_nice(struct proc *p, int nice)
{
Rework the interface between priority propagation (lending) and the
schedulers a bit to ensure more correct handling of priorities and fewer
priority inversions:
- Add two functions to the sched(9) API to handle priority lending:
sched_lend_prio() and sched_unlend_prio(). The turnstile code uses these
functions to ask the scheduler to lend a thread a set priority and to
tell the scheduler when it thinks it is ok for a thread to stop borrowing
priority. The unlend case is slightly complex in that the turnstile code
tells the scheduler what the minimum priority of the thread needs to be
to satisfy the requirements of any other threads blocked on locks owned
  by the thread in question. The scheduler then decides whether the thread
  can go back to normal mode (if its normal priority is high enough to
  satisfy the pending lock requests) or if it should continue to use the
priority specified to the sched_unlend_prio() call. This involves adding
a new per-thread flag TDF_BORROWING that replaces the ULE-only kse flag
for priority elevation.
- Schedulers now refuse to lower the priority of a thread that is currently
  borrowing another thread's priority.
- If a scheduler changes the priority of a thread that is currently sitting
on a turnstile, it will call a new function turnstile_adjust() to inform
the turnstile code of the change. This function resorts the thread on
the priority list of the turnstile if needed, and if the thread ends up
at the head of the list (due to having the highest priority) and its
priority was raised, then it will propagate that new priority to the
owner of the lock it is blocked on.
Some additional fixes specific to the 4BSD scheduler include:
- Common code for updating the priority of a thread when the user priority
of its associated kse group has been consolidated in a new static
function resetpriority_thread(). One change to this function is that
it will now only adjust the priority of a thread if it already has a
time sharing priority, thus preserving any boosts from a tsleep() until
the thread returns to userland. Also, resetpriority() no longer calls
maybe_resched() on each thread in the group. Instead, the code calling
resetpriority() is responsible for calling resetpriority_thread() on
any threads that need to be updated.
- schedcpu() now uses resetpriority_thread() instead of just calling
sched_prio() directly after it updates a kse group's user priority.
- sched_clock() now uses resetpriority_thread() rather than writing
directly to td_priority.
- sched_nice() now updates all the priorities of the threads after the
group priority has been adjusted.
Discussed with: bde
Reviewed by: ups, jeffr
Tested on: 4bsd, ule
Tested on: i386, alpha, sparc64
2004-12-30 20:52:44 +00:00
	struct thread *td;

	PROC_LOCK_ASSERT(p, MA_OWNED);
	p->p_nice = nice;
	FOREACH_THREAD_IN_PROC(p, td) {
		thread_lock(td);
		resetpriority(td);
		resetpriority_thread(td);
		thread_unlock(td);
	}
}

void
sched_class(struct thread *td, int class)
{
	THREAD_LOCK_ASSERT(td, MA_OWNED);
	td->td_pri_class = class;
}

/*
 * Adjust the priority of a thread.
 */
static void
sched_priority(struct thread *td, u_char prio)
{

	KTR_POINT3(KTR_SCHED, "thread", sched_tdname(td), "priority change",
	    "prio:%d", td->td_priority, "new prio:%d", prio, KTR_ATTR_LINKED,
	    sched_tdname(curthread));
	SDT_PROBE3(sched, , , change_pri, td, td->td_proc, prio);
	if (td != curthread && prio > td->td_priority) {
		KTR_POINT3(KTR_SCHED, "thread", sched_tdname(curthread),
		    "lend prio", "prio:%d", td->td_priority, "new prio:%d",
		    prio, KTR_ATTR_LINKED, sched_tdname(td));
		SDT_PROBE4(sched, , , lend_pri, td, td->td_proc, prio,
		    curthread);
	}
	THREAD_LOCK_ASSERT(td, MA_OWNED);
	if (td->td_priority == prio)
		return;
	td->td_priority = prio;
	if (TD_ON_RUNQ(td) && td->td_rqindex != (prio / RQ_PPQ)) {
		sched_rem(td);
		sched_add(td, SRQ_BORING);
	}
}

/*
 * Update a thread's priority when it is lent another thread's
 * priority.
 */
void
sched_lend_prio(struct thread *td, u_char prio)
{

	td->td_flags |= TDF_BORROWING;
	sched_priority(td, prio);
}

/*
 * Restore a thread's priority when priority propagation is
 * over.  The prio argument is the minimum priority the thread
 * needs to have to satisfy other possible priority lending
 * requests.  If the thread's regular priority is less
 * important than prio, the thread will keep a priority boost
 * of prio.
 */
void
sched_unlend_prio(struct thread *td, u_char prio)
{
	u_char base_pri;

	if (td->td_base_pri >= PRI_MIN_TIMESHARE &&
	    td->td_base_pri <= PRI_MAX_TIMESHARE)
		base_pri = td->td_user_pri;
	else
		base_pri = td->td_base_pri;
	if (prio >= base_pri) {
		td->td_flags &= ~TDF_BORROWING;
		sched_prio(td, base_pri);
	} else
		sched_lend_prio(td, prio);
}

void
sched_prio(struct thread *td, u_char prio)
{
	u_char oldprio;

	/* First, update the base priority. */
	td->td_base_pri = prio;

	/*
	 * If the thread is borrowing another thread's priority, don't ever
	 * lower the priority.
	 */
	if (td->td_flags & TDF_BORROWING && td->td_priority < prio)
		return;

	/* Change the real priority. */
	oldprio = td->td_priority;
	sched_priority(td, prio);

	/*
	 * If the thread is on a turnstile, then let the turnstile update
	 * its state.
	 */
	if (TD_ON_LOCK(td) && oldprio != prio)
		turnstile_adjust(td, oldprio);
}

void
sched_user_prio(struct thread *td, u_char prio)
{

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	td->td_base_user_pri = prio;
	if (td->td_lend_user_pri <= prio)
		return;
	td->td_user_pri = prio;
}

void
sched_lend_user_prio(struct thread *td, u_char prio)
{

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	td->td_lend_user_pri = prio;
	td->td_user_pri = min(prio, td->td_base_user_pri);
	if (td->td_priority > td->td_user_pri)
		sched_prio(td, td->td_user_pri);
	else if (td->td_priority != td->td_user_pri)
		td->td_flags |= TDF_NEEDRESCHED;
}

void
sched_sleep(struct thread *td, int pri)
{

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	td->td_slptick = ticks;
	td->td_sched->ts_slptime = 0;
	if (pri != 0 && PRI_BASE(td->td_pri_class) == PRI_TIMESHARE)
		sched_prio(td, pri);
	if (TD_IS_SUSPENDED(td) || pri >= PSOCK)
		td->td_flags |= TDF_CANSWAP;
}

void
sched_switch(struct thread *td, struct thread *newtd, int flags)
{
	struct mtx *tmtx;
	struct td_sched *ts;
	struct proc *p;
	int preempted;

	tmtx = NULL;
	ts = td->td_sched;
	p = td->td_proc;

	THREAD_LOCK_ASSERT(td, MA_OWNED);

	/*
	 * Switch to the sched lock to fix things up and pick
	 * a new thread.
	 * Block the td_lock in order to avoid breaking the critical path.
	 */
	if (td->td_lock != &sched_lock) {
		mtx_lock_spin(&sched_lock);
		tmtx = thread_lock_block(td);
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
|
|
|
}
	if ((td->td_flags & TDF_NOLOAD) == 0)
		sched_load_rem();

	td->td_lastcpu = td->td_oncpu;
	preempted = !(td->td_flags & TDF_SLICEEND);
	td->td_flags &= ~(TDF_NEEDRESCHED | TDF_SLICEEND);
	td->td_owepreempt = 0;
	td->td_oncpu = NOCPU;

	/*
	 * At the last moment, if this thread is still marked RUNNING,
	 * then put it back on the run queue as it has not been suspended
	 * or stopped or anything else similar.  We never put the idle
	 * threads on the run queue, however.
	 */
	if (td->td_flags & TDF_IDLETD) {
		TD_SET_CAN_RUN(td);
#ifdef SMP
		CPU_CLR(PCPU_GET(cpuid), &idle_cpus_mask);
#endif
	} else {
		if (TD_IS_RUNNING(td)) {
			/* Put us back on the run queue. */
			sched_add(td, preempted ?
			    SRQ_OURSELF|SRQ_YIELDING|SRQ_PREEMPTED :
			    SRQ_OURSELF|SRQ_YIELDING);
		}
	}
	if (newtd) {
		/*
		 * The thread we are about to run needs to be counted
		 * as if it had been added to the run queue and selected.
		 * It came from:
		 * * A preemption
		 * * An upcall
		 * * A followon
		 */
		KASSERT((newtd->td_inhibitors == 0),
		    ("trying to run inhibited thread"));
		newtd->td_flags |= TDF_DIDRUN;
		TD_SET_RUNNING(newtd);
		if ((newtd->td_flags & TDF_NOLOAD) == 0)
			sched_load_add();
	} else {
		newtd = choosethread();
		MPASS(newtd->td_lock == &sched_lock);
	}
	if (td != newtd) {
#ifdef HWPMC_HOOKS
		if (PMC_PROC_IS_USING_PMCS(td->td_proc))
			PMC_SWITCH_CONTEXT(td, PMC_FN_CSW_OUT);
#endif

		SDT_PROBE2(sched, , , off_cpu, td, td->td_proc);

		/* I feel sleepy */
		lock_profile_release_lock(&sched_lock.lock_object);
#ifdef KDTRACE_HOOKS
		/*
		 * If DTrace has set the active vtime enum to anything
		 * other than INACTIVE (0), then it should have set the
		 * function to call.
		 */
		if (dtrace_vtime_active)
			(*dtrace_vtime_switch_func)(newtd);
#endif

		cpu_switch(td, newtd, tmtx != NULL ? tmtx : td->td_lock);
		lock_profile_obtain_lock_success(&sched_lock.lock_object,
		    0, 0, __FILE__, __LINE__);
		/*
		 * Where am I?  What year is it?
		 * We are in the same thread that went to sleep above,
		 * but any amount of time may have passed.  All our context
		 * will still be available as will local variables.
		 * PCPU values however may have changed as we may have
		 * changed CPU so don't trust cached values of them.
		 * New threads will go to fork_exit() instead of here
		 * so if you change things here you may need to change
		 * things there too.
		 *
		 * If the thread above was exiting it will never wake
		 * up again here, so either it has saved everything it
		 * needed to, or the thread_wait() or wait() will
		 * need to reap it.
		 */

		SDT_PROBE0(sched, , , on_cpu);
#ifdef HWPMC_HOOKS
		if (PMC_PROC_IS_USING_PMCS(td->td_proc))
			PMC_SWITCH_CONTEXT(td, PMC_FN_CSW_IN);
#endif
	} else
		SDT_PROBE0(sched, , , remain_cpu);

#ifdef SMP
	if (td->td_flags & TDF_IDLETD)
		CPU_SET(PCPU_GET(cpuid), &idle_cpus_mask);
#endif
	sched_lock.mtx_lock = (uintptr_t)td;
	td->td_oncpu = PCPU_GET(cpuid);
	MPASS(td->td_lock == &sched_lock);
}
void
sched_wakeup(struct thread *td)
{
	struct td_sched *ts;

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	ts = td->td_sched;
	td->td_flags &= ~TDF_CANSWAP;
	if (ts->ts_slptime > 1) {
		updatepri(td);
		resetpriority(td);
	}
	td->td_slptick = 0;
	ts->ts_slptime = 0;
	ts->ts_slice = sched_slice;
	sched_add(td, SRQ_BORING);
}
Commit: support for removing cpumask_t and replacing it directly with
cpuset_t objects (2011-05-05).
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for number of CPUs > 32 (as it is today).
Right now, cpumask_t is an int, 32 bits on all our supported
architectures.  cpuset_t, on the other side, is implemented as an
array of longs, and easily extendible by definition.
The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN
while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.
Some technical notes:
- This commit may be considered an ABI nop for all the architectures
  different from amd64 and ia64 (and sparc64 in the future)
- Per-CPU members, which are now converted to cpuset_t, need to be
  accessed avoiding migration, because the size of cpuset_t should be
  considered unknown
- The size of cpuset_t objects differs between kernel and userland (this
  is primarily done in order to leave some more space in userland to cope
  with KBI extensions).  If you need to access kernel cpuset_t from
  userland please refer to the example in this patch on how to do that
  correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now
The patch has been tested by sbruno and Nicholas Esborn on Opteron
4 x 12 pack CPUs.  More testing on big SMP is expected to come soon.
pluknet tested the patch with his 8-ways on both amd64 and i386.
Tested by:	pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by:	jeff, jhb, sbruno

#ifdef SMP
static int
forward_wakeup(int cpunum)
{
	struct pcpu *pc;
	cpuset_t dontuse, map, map2;
	u_int id, me;
	int iscpuset;

	mtx_assert(&sched_lock, MA_OWNED);

	CTR0(KTR_RUNQ, "forward_wakeup()");

	if ((!forward_wakeup_enabled) ||
	     (forward_wakeup_use_mask == 0 && forward_wakeup_use_loop == 0))
		return (0);
	if (!smp_started || cold || panicstr)
		return (0);

	forward_wakeups_requested++;

	/*
	 * Check the idle mask we received against what we calculated
	 * before in the old version.
	 */
	me = PCPU_GET(cpuid);

	/* Don't bother if we should be doing it ourself. */
	if (CPU_ISSET(me, &idle_cpus_mask) &&
	    (cpunum == NOCPU || me == cpunum))
		return (0);

	CPU_SETOF(me, &dontuse);
	CPU_OR(&dontuse, &stopped_cpus);
	CPU_OR(&dontuse, &hlt_cpus_mask);
	CPU_ZERO(&map2);
	if (forward_wakeup_use_loop) {
		STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) {
			id = pc->pc_cpuid;
			if (!CPU_ISSET(id, &dontuse) &&
			    pc->pc_curthread == pc->pc_idlethread) {
				CPU_SET(id, &map2);
			}
		}
	}

	if (forward_wakeup_use_mask) {
		map = idle_cpus_mask;
		CPU_NAND(&map, &dontuse);

		/* If they are both on, compare and use loop if different. */
		if (forward_wakeup_use_loop) {
			if (CPU_CMP(&map, &map2)) {
				printf("map != map2, loop method preferred\n");
				map = map2;
			}
		}
	} else {
		map = map2;
	}

	/* If we only allow a specific CPU, then mask off all the others. */
	if (cpunum != NOCPU) {
		KASSERT((cpunum <= mp_maxcpus),
		    ("forward_wakeup: bad cpunum."));
		iscpuset = CPU_ISSET(cpunum, &map);
		if (iscpuset == 0)
			CPU_ZERO(&map);
		else
			CPU_SETOF(cpunum, &map);
	}
	if (!CPU_EMPTY(&map)) {
		forward_wakeups_delivered++;
		STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) {
			id = pc->pc_cpuid;
			if (!CPU_ISSET(id, &map))
				continue;
			if (cpu_idle_wakeup(pc->pc_cpuid))
				CPU_CLR(id, &map);
		}
		if (!CPU_EMPTY(&map))
			ipi_selected(map, IPI_AST);
		return (1);
	}
	if (cpunum == NOCPU)
		printf("forward_wakeup: Idle processor not found\n");
	return (0);
}
static void
kick_other_cpu(int pri, int cpuid)
{
	struct pcpu *pcpu;
	int cpri;

	pcpu = pcpu_find(cpuid);
	if (CPU_ISSET(cpuid, &idle_cpus_mask)) {
		forward_wakeups_delivered++;
		if (!cpu_idle_wakeup(cpuid))
			ipi_cpu(cpuid, IPI_AST);
		return;
	}

	cpri = pcpu->pc_curthread->td_priority;
	if (pri >= cpri)
		return;

#if defined(IPI_PREEMPTION) && defined(PREEMPTION)
#if !defined(FULL_PREEMPTION)
	if (pri <= PRI_MAX_ITHD)
#endif /* ! FULL_PREEMPTION */
	{
		ipi_cpu(cpuid, IPI_PREEMPT);
		return;
	}
#endif /* defined(IPI_PREEMPTION) && defined(PREEMPTION) */

	pcpu->pc_curthread->td_flags |= TDF_NEEDRESCHED;
	ipi_cpu(cpuid, IPI_AST);
	return;
}
#endif /* SMP */
#ifdef SMP
static int
sched_pickcpu(struct thread *td)
{
	int best, cpu;

	mtx_assert(&sched_lock, MA_OWNED);

	if (THREAD_CAN_SCHED(td, td->td_lastcpu))
		best = td->td_lastcpu;
	else
		best = NOCPU;
	CPU_FOREACH(cpu) {
		if (!THREAD_CAN_SCHED(td, cpu))
			continue;

		if (best == NOCPU)
			best = cpu;
		else if (runq_length[cpu] < runq_length[best])
			best = cpu;
	}
	KASSERT(best != NOCPU, ("no valid CPUs"));

	return (best);
}
#endif
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
2004-09-01 02:11:28 +00:00
|
|
|
sched_add(struct thread *td, int flags)
|
2005-06-09 18:26:31 +00:00
|
|
|
#ifdef SMP
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
2011-06-13 13:28:31 +00:00
|
|
|
cpuset_t tidlemsk;
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched *ts;
|
2011-06-13 13:28:31 +00:00
|
|
|
u_int cpu, cpuid;
|
2004-09-01 06:42:02 +00:00
|
|
|
int forwarded = 0;
|
2005-06-09 18:26:31 +00:00
|
|
|
int single_cpu = 0;
|
2003-10-16 08:39:15 +00:00
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
ts = td->td_sched;
|
Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
similar to solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
2007-01-23 08:46:51 +00:00
|
|
|
KASSERT((td->td_inhibitors == 0),
|
|
|
|
("sched_add: trying to run inhibited thread"));
|
|
|
|
KASSERT((TD_CAN_RUN(td) || TD_IS_RUNNING(td)),
|
|
|
|
("sched_add: bad thread state"));
|
2007-09-17 05:31:39 +00:00
|
|
|
KASSERT(td->td_flags & TDF_INMEM,
|
|
|
|
("sched_add: thread swapped out"));
|
2009-01-17 07:17:57 +00:00
|
|
|
|
|
|
|
KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq add",
|
|
|
|
"prio:%d", td->td_priority, KTR_ATTR_LINKED,
|
|
|
|
sched_tdname(curthread));
|
|
|
|
KTR_POINT1(KTR_SCHED, "thread", sched_tdname(curthread), "wokeup",
|
|
|
|
KTR_ATTR_LINKED, sched_tdname(td));
|
2012-05-15 01:30:25 +00:00
|
|
|
SDT_PROBE4(sched, , , enqueue, td, td->td_proc, NULL,
|
|
|
|
flags & SRQ_PREEMPTED);
|
2009-01-17 07:17:57 +00:00
|
|
|
|
2008-07-28 15:52:02 +00:00
|
|
|
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
/*
|
|
|
|
* Now that the thread is moving to the run-queue, set the lock
|
|
|
|
* to the scheduler's lock.
|
|
|
|
*/
|
|
|
|
if (td->td_lock != &sched_lock) {
|
|
|
|
mtx_lock_spin(&sched_lock);
|
|
|
|
thread_lock_set(td, &sched_lock);
|
|
|
|
}
|
2007-01-23 08:46:51 +00:00
|
|
|
TD_SET_RUNQ(td);
|
2005-06-09 18:26:31 +00:00
|
|
|
|
2011-04-26 20:34:30 +00:00
|
|
|
/*
|
|
|
|
* If SMP is started and the thread is pinned or otherwise limited to
|
|
|
|
* a specific set of CPUs, queue the thread to a per-CPU run queue.
|
|
|
|
* Otherwise, queue the thread to the global run queue.
|
|
|
|
*
|
|
|
|
* If SMP has not yet been started we must use the global run queue
|
|
|
|
* as per-CPU state may not be initialized yet and we may crash if we
|
|
|
|
* try to access the per-CPU run queues.
|
|
|
|
*/
|
|
|
|
if (smp_started && (td->td_pinned != 0 || td->td_flags & TDF_BOUND ||
|
|
|
|
ts->ts_flags & TSF_AFFINITY)) {
|
|
|
|
if (td->td_pinned != 0)
|
|
|
|
cpu = td->td_lastcpu;
|
|
|
|
else if (td->td_flags & TDF_BOUND) {
|
|
|
|
/* Find CPU from bound runq. */
|
|
|
|
KASSERT(SKE_RUNQ_PCPU(ts),
|
|
|
|
("sched_add: bound td_sched not on cpu runq"));
|
|
|
|
cpu = ts->ts_runq - &runq_pcpu[0];
|
|
|
|
} else
|
|
|
|
/* Find a valid CPU for our cpuset */
|
|
|
|
cpu = sched_pickcpu(td);
|
2008-07-28 17:25:24 +00:00
|
|
|
ts->ts_runq = &runq_pcpu[cpu];
|
|
|
|
single_cpu = 1;
|
|
|
|
CTR3(KTR_RUNQ,
|
|
|
|
"sched_add: Put td_sched:%p(td:%p) on cpu%d runq", ts, td,
|
|
|
|
cpu);
|
2008-07-28 15:52:02 +00:00
|
|
|
} else {
|
2004-09-01 06:42:02 +00:00
|
|
|
CTR2(KTR_RUNQ,
|
2008-07-28 15:52:02 +00:00
|
|
|
"sched_add: adding td_sched:%p (td:%p) to gbl runq", ts,
|
|
|
|
td);
|
2004-09-01 06:42:02 +00:00
|
|
|
cpu = NOCPU;
|
2006-12-06 06:34:57 +00:00
|
|
|
ts->ts_runq = &runq;
|
2005-06-09 18:26:31 +00:00
|
|
|
}
|
2008-07-28 15:52:02 +00:00
|
|
|
|
2011-06-13 13:28:31 +00:00
|
|
|
cpuid = PCPU_GET(cpuid);
|
|
|
|
if (single_cpu && cpu != cpuid) {
|
2008-07-28 15:52:02 +00:00
|
|
|
kick_other_cpu(td->td_priority, cpu);
|
2004-01-25 08:00:04 +00:00
|
|
|
} else {
|
2005-06-09 19:43:08 +00:00
|
|
|
if (!single_cpu) {
|
2011-06-13 13:28:31 +00:00
|
|
|
tidlemsk = idle_cpus_mask;
|
|
|
|
CPU_NAND(&tidlemsk, &hlt_cpus_mask);
|
|
|
|
CPU_CLR(cpuid, &tidlemsk);
|
2005-06-09 18:26:31 +00:00
|
|
|
|
2011-06-13 13:28:31 +00:00
|
|
|
if (!CPU_ISSET(cpuid, &idle_cpus_mask) &&
|
|
|
|
((flags & SRQ_INTR) == 0) &&
|
Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for number of cpus > 32 (as it is today).
Right now, cpumask_t is an int, 32 bits on all our supported architectures.
cpuset_t, on the other hand, is implemented as an array of longs, and is
easily extensible by definition.
The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN
while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.
Some technical notes:
- This commit may be considered an ABI nop for all the architectures
different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, needs to be
accessed avoiding migration, because the size of cpuset_t should be
considered unknown
- The size of cpuset_t objects differs between kernel and userland (this is
  primarily done in order to leave some more space in userland to cope
  with KBI extensions). If you need to access a kernel cpuset_t from
  userland, please refer to the example in this patch on how to do that
  correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now
The patch has been tested by sbruno and Nicholas Esborn on Opteron
4 x 12 pack CPUs. More testing on big SMP is expected to come soon.
pluknet tested the patch with his 8-ways on both amd64 and i386.
Tested by: pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by: jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
|
|
|
!CPU_EMPTY(&tidlemsk))
|
2005-06-09 18:26:31 +00:00
|
|
|
forwarded = forward_wakeup(cpu);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!forwarded) {
|
2005-06-09 19:43:08 +00:00
|
|
|
if ((flags & SRQ_YIELDING) == 0 && maybe_preempt(td))
|
2005-06-09 18:26:31 +00:00
|
|
|
return;
|
|
|
|
else
|
|
|
|
maybe_resched(td);
|
|
|
|
}
|
2004-01-25 08:00:04 +00:00
|
|
|
}
|
2008-07-28 15:52:02 +00:00
|
|
|
|
2009-11-03 16:46:52 +00:00
|
|
|
if ((td->td_flags & TDF_NOLOAD) == 0)
|
2005-06-09 18:26:31 +00:00
|
|
|
sched_load_add();
|
2008-03-20 05:51:16 +00:00
|
|
|
runq_add(ts->ts_runq, td, flags);
|
2008-07-28 17:25:24 +00:00
|
|
|
if (cpu != NOCPU)
|
|
|
|
runq_length[cpu]++;
|
2005-06-09 18:26:31 +00:00
|
|
|
}
|
|
|
|
#else /* SMP */
|
|
|
|
{
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched *ts;
|
2008-07-28 17:25:24 +00:00
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
ts = td->td_sched;
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
2007-01-23 08:46:51 +00:00
|
|
|
KASSERT((td->td_inhibitors == 0),
|
|
|
|
("sched_add: trying to run inhibited thread"));
|
|
|
|
KASSERT((TD_CAN_RUN(td) || TD_IS_RUNNING(td)),
|
|
|
|
("sched_add: bad thread state"));
|
2007-09-17 05:31:39 +00:00
|
|
|
KASSERT(td->td_flags & TDF_INMEM,
|
|
|
|
("sched_add: thread swapped out"));
|
2009-01-17 07:17:57 +00:00
|
|
|
KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq add",
|
|
|
|
"prio:%d", td->td_priority, KTR_ATTR_LINKED,
|
|
|
|
sched_tdname(curthread));
|
|
|
|
KTR_POINT1(KTR_SCHED, "thread", sched_tdname(curthread), "wokeup",
|
|
|
|
KTR_ATTR_LINKED, sched_tdname(td));
|
2012-05-15 10:58:17 +00:00
|
|
|
SDT_PROBE4(sched, , , enqueue, td, td->td_proc, NULL,
|
2012-05-15 01:30:25 +00:00
|
|
|
flags & SRQ_PREEMPTED);
|
2008-07-28 15:52:02 +00:00
|
|
|
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
/*
|
|
|
|
* Now that the thread is moving to the run-queue, set the lock
|
|
|
|
* to the scheduler's lock.
|
|
|
|
*/
|
|
|
|
if (td->td_lock != &sched_lock) {
|
|
|
|
mtx_lock_spin(&sched_lock);
|
|
|
|
thread_lock_set(td, &sched_lock);
|
|
|
|
}
|
2007-01-23 08:46:51 +00:00
|
|
|
TD_SET_RUNQ(td);
|
2006-12-06 06:34:57 +00:00
|
|
|
CTR2(KTR_RUNQ, "sched_add: adding td_sched:%p (td:%p) to runq", ts, td);
|
|
|
|
ts->ts_runq = &runq;
|
2004-09-01 06:42:02 +00:00
|
|
|
|
2008-07-28 15:52:02 +00:00
|
|
|
/*
|
|
|
|
* If we are yielding (on the way out anyhow) or the thread
|
|
|
|
* being saved is US, then don't try to be smart about preemption
|
|
|
|
* or kicking off another CPU as it won't help and may hinder.
|
|
|
|
* In the YIELDING case, we are about to run whoever is being
|
|
|
|
* put in the queue anyhow, and in the OURSELF case, we are
|
|
|
|
* putting ourselves on the run queue, which also only happens
|
|
|
|
* when we are about to yield.
|
2004-09-01 06:42:02 +00:00
|
|
|
*/
|
2008-07-28 15:52:02 +00:00
|
|
|
if ((flags & SRQ_YIELDING) == 0) {
|
2005-06-09 18:26:31 +00:00
|
|
|
if (maybe_preempt(td))
|
|
|
|
return;
|
2008-07-28 15:52:02 +00:00
|
|
|
}
|
2009-11-03 16:46:52 +00:00
|
|
|
if ((td->td_flags & TDF_NOLOAD) == 0)
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_load_add();
|
2008-03-20 05:51:16 +00:00
|
|
|
runq_add(ts->ts_runq, td, flags);
|
2004-07-13 20:49:13 +00:00
|
|
|
maybe_resched(td);
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
2005-06-09 18:26:31 +00:00
|
|
|
#endif /* SMP */
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
2003-10-16 08:39:15 +00:00
|
|
|
sched_rem(struct thread *td)
|
2002-10-12 05:32:24 +00:00
|
|
|
{
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched *ts;
|
2003-10-16 08:39:15 +00:00
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
ts = td->td_sched;
|
2007-09-17 05:31:39 +00:00
|
|
|
KASSERT(td->td_flags & TDF_INMEM,
|
|
|
|
("sched_rem: thread swapped out"));
|
2007-01-23 08:46:51 +00:00
|
|
|
KASSERT(TD_ON_RUNQ(td),
|
2006-12-06 06:34:57 +00:00
|
|
|
("sched_rem: thread not on run queue"));
|
2002-10-12 05:32:24 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2009-01-17 07:17:57 +00:00
|
|
|
KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq rem",
|
|
|
|
"prio:%d", td->td_priority, KTR_ATTR_LINKED,
|
|
|
|
sched_tdname(curthread));
|
2012-05-15 01:30:25 +00:00
|
|
|
SDT_PROBE3(sched, , , dequeue, td, td->td_proc, NULL);
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2009-11-03 16:46:52 +00:00
|
|
|
if ((td->td_flags & TDF_NOLOAD) == 0)
|
2004-12-26 00:16:24 +00:00
|
|
|
sched_load_rem();
|
2008-07-28 17:25:24 +00:00
|
|
|
#ifdef SMP
|
|
|
|
if (ts->ts_runq != &runq)
|
|
|
|
runq_length[ts->ts_runq - runq_pcpu]--;
|
|
|
|
#endif
|
2008-03-20 05:51:16 +00:00
|
|
|
runq_remove(ts->ts_runq, td);
|
2007-01-23 08:46:51 +00:00
|
|
|
TD_SET_CAN_RUN(td);
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
|
2004-09-16 07:12:59 +00:00
|
|
|
/*
|
2008-07-28 15:52:02 +00:00
|
|
|
* Select threads to run. Note that running threads still consume a
|
|
|
|
* slot.
|
2004-09-16 07:12:59 +00:00
|
|
|
*/
|
2007-01-23 08:46:51 +00:00
|
|
|
struct thread *
|
2002-10-12 05:32:24 +00:00
|
|
|
sched_choose(void)
|
|
|
|
{
|
2008-03-20 05:51:16 +00:00
|
|
|
struct thread *td;
|
2004-01-25 08:00:04 +00:00
|
|
|
struct runq *rq;
|
|
|
|
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
2004-01-25 08:00:04 +00:00
|
|
|
#ifdef SMP
|
2008-03-20 05:51:16 +00:00
|
|
|
struct thread *tdcpu;
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
rq = &runq;
|
2008-03-20 05:51:16 +00:00
|
|
|
td = runq_choose_fuzz(&runq, runq_fuzz);
|
|
|
|
tdcpu = runq_choose(&runq_pcpu[PCPU_GET(cpuid)]);
|
2004-01-25 08:00:04 +00:00
|
|
|
|
2008-07-28 15:52:02 +00:00
|
|
|
if (td == NULL ||
|
|
|
|
(tdcpu != NULL &&
|
2008-03-20 05:51:16 +00:00
|
|
|
tdcpu->td_priority < td->td_priority)) {
|
|
|
|
CTR2(KTR_RUNQ, "choosing td %p from pcpu runq %d", tdcpu,
|
2004-01-25 08:00:04 +00:00
|
|
|
PCPU_GET(cpuid));
|
2008-03-20 05:51:16 +00:00
|
|
|
td = tdcpu;
|
2004-01-25 08:00:04 +00:00
|
|
|
rq = &runq_pcpu[PCPU_GET(cpuid)];
|
2008-07-28 15:52:02 +00:00
|
|
|
} else {
|
2008-03-20 05:51:16 +00:00
|
|
|
CTR1(KTR_RUNQ, "choosing td_sched %p from main runq", td);
|
2004-01-25 08:00:04 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
#else
|
|
|
|
rq = &runq;
|
2008-03-20 05:51:16 +00:00
|
|
|
td = runq_choose(&runq);
|
2004-01-25 08:00:04 +00:00
|
|
|
#endif
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2008-03-20 05:51:16 +00:00
|
|
|
if (td) {
|
2008-07-28 17:25:24 +00:00
|
|
|
#ifdef SMP
|
|
|
|
if (td == tdcpu)
|
|
|
|
runq_length[PCPU_GET(cpuid)]--;
|
|
|
|
#endif
|
2008-03-20 05:51:16 +00:00
|
|
|
runq_remove(rq, td);
|
|
|
|
td->td_flags |= TDF_DIDRUN;
|
2002-10-12 05:32:24 +00:00
|
|
|
|
2008-03-20 05:51:16 +00:00
|
|
|
KASSERT(td->td_flags & TDF_INMEM,
|
2007-09-17 05:31:39 +00:00
|
|
|
("sched_choose: thread swapped out"));
|
2008-03-20 05:51:16 +00:00
|
|
|
return (td);
|
2008-07-28 15:52:02 +00:00
|
|
|
}
|
2007-01-23 08:46:51 +00:00
|
|
|
return (PCPU_GET(idlethread));
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
|
|
|
|
2008-03-10 01:30:35 +00:00
|
|
|
void
|
|
|
|
sched_preempt(struct thread *td)
|
|
|
|
{
|
2012-05-15 01:30:25 +00:00
|
|
|
|
|
|
|
SDT_PROBE2(sched, , , surrender, td, td->td_proc);
|
2008-03-10 01:30:35 +00:00
|
|
|
thread_lock(td);
|
|
|
|
if (td->td_critnest > 1)
|
|
|
|
td->td_owepreempt = 1;
|
|
|
|
else
|
2008-04-17 04:20:10 +00:00
|
|
|
mi_switch(SW_INVOL | SW_PREEMPT | SWT_PREEMPT, NULL);
|
2008-03-10 01:30:35 +00:00
|
|
|
thread_unlock(td);
|
|
|
|
}
|
|
|
|
|
2002-10-12 05:32:24 +00:00
|
|
|
void
|
|
|
|
sched_userret(struct thread *td)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* XXX we cheat slightly on the locking here to avoid locking in
|
|
|
|
* the usual case. Setting td_priority here is essentially an
|
|
|
|
* incomplete workaround for not setting it properly elsewhere.
|
|
|
|
* Now that some interrupt handlers are threads, not setting it
|
|
|
|
* properly elsewhere can clobber it in the window between setting
|
|
|
|
* it here and returning to user mode, so don't waste time setting
|
|
|
|
* it perfectly here.
|
|
|
|
*/
|
Rework the interface between priority propagation (lending) and the
schedulers a bit to ensure more correct handling of priorities and fewer
priority inversions:
- Add two functions to the sched(9) API to handle priority lending:
sched_lend_prio() and sched_unlend_prio(). The turnstile code uses these
functions to ask the scheduler to lend a thread a set priority and to
tell the scheduler when it thinks it is ok for a thread to stop borrowing
priority. The unlend case is slightly complex in that the turnstile code
tells the scheduler what the minimum priority of the thread needs to be
to satisfy the requirements of any other threads blocked on locks owned
by the thread in question. The scheduler then decides where the thread
can go back to normal mode (if it's normal priority is high enough to
satisfy the pending lock requests) or it it should continue to use the
priority specified to the sched_unlend_prio() call. This involves adding
a new per-thread flag TDF_BORROWING that replaces the ULE-only kse flag
for priority elevation.
- Schedulers now refuse to lower the priority of a thread that is currently
borrowing another thread's priority.
- If a scheduler changes the priority of a thread that is currently sitting
on a turnstile, it will call a new function turnstile_adjust() to inform
the turnstile code of the change. This function resorts the thread on
the priority list of the turnstile if needed, and if the thread ends up
at the head of the list (due to having the highest priority) and its
priority was raised, then it will propagate that new priority to the
owner of the lock it is blocked on.
Some additional fixes specific to the 4BSD scheduler include:
- Common code for updating the priority of a thread when the user priority
of its associated kse group has been consolidated in a new static
function resetpriority_thread(). One change to this function is that
it will now only adjust the priority of a thread if it already has a
time sharing priority, thus preserving any boosts from a tsleep() until
the thread returns to userland. Also, resetpriority() no longer calls
maybe_resched() on each thread in the group. Instead, the code calling
resetpriority() is responsible for calling resetpriority_thread() on
any threads that need to be updated.
- schedcpu() now uses resetpriority_thread() instead of just calling
sched_prio() directly after it updates a kse group's user priority.
- sched_clock() now uses resetpriority_thread() rather than writing
directly to td_priority.
- sched_nice() now updates all the priorities of the threads after the
group priority has been adjusted.
Discussed with: bde
Reviewed by: ups, jeffr
Tested on: 4bsd, ule
Tested on: i386, alpha, sparc64
2004-12-30 20:52:44 +00:00
|
|
|
KASSERT((td->td_flags & TDF_BORROWING) == 0,
|
|
|
|
("thread with borrowed priority returning to userland"));
|
2006-10-26 21:42:22 +00:00
|
|
|
if (td->td_priority != td->td_user_pri) {
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
thread_lock(td);
|
2006-10-26 21:42:22 +00:00
|
|
|
td->td_priority = td->td_user_pri;
|
|
|
|
td->td_base_pri = td->td_user_pri;
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
thread_unlock(td);
|
2006-10-26 21:42:22 +00:00
|
|
|
}
|
2002-10-12 05:32:24 +00:00
|
|
|
}
|
2002-11-21 01:22:38 +00:00
|
|
|
|
2004-01-25 08:00:04 +00:00
|
|
|
void
|
|
|
|
sched_bind(struct thread *td, int cpu)
|
|
|
|
{
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched *ts;
|
2004-01-25 08:00:04 +00:00
|
|
|
|
2010-05-21 17:15:56 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED|MA_NOTRECURSED);
|
|
|
|
KASSERT(td == curthread, ("sched_bind: can only bind curthread"));
|
2004-01-25 08:00:04 +00:00
|
|
|
|
2006-12-06 06:34:57 +00:00
|
|
|
ts = td->td_sched;
|
2004-01-25 08:00:04 +00:00
|
|
|
|
2008-03-20 05:51:16 +00:00
|
|
|
td->td_flags |= TDF_BOUND;
|
2004-01-25 08:00:04 +00:00
|
|
|
#ifdef SMP
|
2006-12-06 06:34:57 +00:00
|
|
|
ts->ts_runq = &runq_pcpu[cpu];
|
2004-01-25 08:00:04 +00:00
|
|
|
if (PCPU_GET(cpuid) == cpu)
|
|
|
|
return;
|
|
|
|
|
2004-07-02 19:09:50 +00:00
|
|
|
mi_switch(SW_VOL, NULL);
|
2004-01-25 08:00:04 +00:00
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
sched_unbind(struct thread* td)
|
|
|
|
{
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
2010-05-21 17:15:56 +00:00
|
|
|
KASSERT(td == curthread, ("sched_unbind: can only unbind curthread"));
|
2008-03-20 05:51:16 +00:00
|
|
|
td->td_flags &= ~TDF_BOUND;
|
2004-01-25 08:00:04 +00:00
|
|
|
}
|
|
|
|
|
2005-04-19 04:01:25 +00:00
|
|
|
int
|
|
|
|
sched_is_bound(struct thread *td)
|
|
|
|
{
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
2008-03-20 05:51:16 +00:00
|
|
|
return (td->td_flags & TDF_BOUND);
|
2005-04-19 04:01:25 +00:00
|
|
|
}
|
|
|
|
|
2006-06-15 06:37:39 +00:00
|
|
|
void
|
|
|
|
sched_relinquish(struct thread *td)
|
|
|
|
{
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
thread_lock(td);
|
2008-04-17 04:20:10 +00:00
|
|
|
mi_switch(SW_VOL | SWT_RELINQUISH, NULL);
|
Commit 1/14 of sched_lock decomposition.
2007-06-04 23:50:30 +00:00
|
|
|
thread_unlock(td);
|
2006-06-15 06:37:39 +00:00
|
|
|
}
|
|
|
|
|
2004-02-01 02:46:47 +00:00
|
|
|
int
|
|
|
|
sched_load(void)
|
|
|
|
{
|
|
|
|
return (sched_tdcnt);
|
|
|
|
}
|
|
|
|
|
2002-11-21 01:22:38 +00:00
|
|
|
int
|
|
|
|
sched_sizeof_proc(void)
|
|
|
|
{
|
|
|
|
return (sizeof(struct proc));
|
|
|
|
}
|
2006-06-15 06:37:39 +00:00
|
|
|
|
2002-11-21 01:22:38 +00:00
|
|
|
int
|
|
|
|
sched_sizeof_thread(void)
|
|
|
|
{
|
2006-12-06 06:34:57 +00:00
|
|
|
return (sizeof(struct thread) + sizeof(struct td_sched));
|
2002-11-21 01:22:38 +00:00
|
|
|
}
|
2002-11-21 09:30:55 +00:00
|
|
|
|
|
|
|
fixpt_t
|
2003-10-16 08:39:15 +00:00
|
|
|
sched_pctcpu(struct thread *td)
|
2002-11-21 09:30:55 +00:00
|
|
|
{
|
2006-12-06 06:34:57 +00:00
|
|
|
struct td_sched *ts;
|
2003-10-16 21:13:14 +00:00
|
|
|
|
2010-06-03 16:02:11 +00:00
|
|
|
THREAD_LOCK_ASSERT(td, MA_OWNED);
|
2006-12-06 06:34:57 +00:00
|
|
|
ts = td->td_sched;
|
|
|
|
return (ts->ts_pctcpu);
|
2002-11-21 09:30:55 +00:00
|
|
|
}
|
Add scheduler CORE, work I did half a year ago and have recently
picked up again. The scheduler is forked from ULE, but the
algorithm for detecting an interactive process is almost completely
different from ULE's; it comes from the Linux paper "Understanding the
Linux 2.6.8.1 CPU Scheduler", although I still use the same word
"score" as a priority boost, as in the ULE scheduler.
Briefly, the scheduler has the following characteristics:
1. A timesharing process's nice value is seriously respected;
the timeslice and the interactivity-detection algorithm are based
on the nice value.
2. Per-CPU scheduling queues and load balancing.
3. O(1) scheduling.
4. Some CPU-affinity code in the wakeup path.
5. Support for POSIX SCHED_FIFO and SCHED_RR.
Unlike the 4BSD and ULE schedulers, which use a fuzzy RQ_PPQ, this scheduler
uses 256 priority queues. Unlike ULE, which uses both pull and push, this
scheduler uses only the pull method; the main reason is to let a relatively
idle CPU do the work. But currently the whole scheduler is protected by the
big sched_lock, so the benefit is not visible; it can actually be worse
than nothing, because all other CPUs are locked out while we are doing
balancing work, a problem the 4BSD scheduler does not have.
The scheduler does not support hyperthreading very well; in fact, it does
not distinguish between physical and logical CPUs. This should be improved
in the future. The scheduler has a priority-inversion problem on MP
machines; it is not good for realtime scheduling and can cause realtime
processes to starve.
As a result, MySQL super-smack seems to run better on my
Pentium-D machine when using libthr, on both UP and SMP kernels.
2006-06-13 13:12:56 +00:00

#ifdef RACCT
/*
 * Calculates the contribution to the thread cpu usage for the latest
 * (unfinished) second.
 */
fixpt_t
sched_pctcpu_delta(struct thread *td)
{
	struct td_sched *ts;
	fixpt_t delta;
	int realstathz;

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	ts = td->td_sched;
	delta = 0;
	realstathz = stathz ? stathz : hz;
	if (ts->ts_cpticks != 0) {
#if	(FSHIFT >= CCPU_SHIFT)
		delta = (realstathz == 100)
		    ? ((fixpt_t) ts->ts_cpticks) <<
		    (FSHIFT - CCPU_SHIFT) :
		    100 * (((fixpt_t) ts->ts_cpticks)
		    << (FSHIFT - CCPU_SHIFT)) / realstathz;
#else
		delta = ((FSCALE - ccpu) *
		    (ts->ts_cpticks *
		    FSCALE / realstathz)) >> FSHIFT;
#endif
	}

	return (delta);
}
#endif

Refactor the timer management code to prefer one-shot operation mode.
The main goal of this is to generate timer interrupts only when there is
some work to do. When the CPU is busy, interrupts are generated at the full
rate of hz + stathz to fulfill the scheduler's and timekeeping's
requirements. But when the CPU is idle, only the minimum set of interrupts
needed to handle scheduled callouts is executed (now down to 8 interrupts
per second per CPU). This significantly increases idle CPU sleep time,
increasing the effect of static power-saving technologies. It should also
reduce host CPU load on virtualized systems when the guest system is idle.
There is a set of tunables, also available as writable sysctls, for
controlling the event timer subsystem's behavior:
kern.eventtimer.timer - chooses the event timer hardware to use.
On x86 there are up to 4 different kinds of timers. Depending on whether
the chosen timer is per-CPU, the behavior of the other options differs
slightly.
kern.eventtimer.periodic - chooses between periodic and one-shot
operation mode. In periodic mode, the current timer hardware is taken as
the only source of time for time events. This mode is quite similar to the
previous kernel behavior. One-shot mode instead uses the currently selected
time counter hardware to schedule all needed events one by one and programs
the timer to generate an interrupt at exactly the specified time. The
default value depends on the chosen timer's capabilities, but one-shot mode
is preferred unless the other is forced by the user or the hardware.
kern.eventtimer.singlemul - in periodic mode, specifies how many times
higher the timer frequency should be, so that hardclock() and statclock()
events do not strictly alias. The default values are 2 and 4, but they can
be reduced to 1 if the extra interrupts are unwanted.
kern.eventtimer.idletick - makes each CPU receive every timer interrupt
independently of whether it is busy or not. By default this option is
disabled. If the chosen timer is per-CPU and runs in periodic mode, this
option has no effect - all interrupts are generated anyway.
Since this patch modifies cpu_idle() on some platforms, I have also
refactored the x86 one. It now makes use of the MONITOR/MWAIT instructions
(if supported) under a high sleep/wakeup rate, as a fast alternative to the
other methods. This allows the SMP scheduler to wake up sleeping CPUs much
faster without using IPIs, significantly increasing performance on some
highly task-switching loads.
Tested by: many (on i386, amd64, sparc64 and powerpc)
H/W donated by: Gheorghe Ardelean
Sponsored by: iXsystems, Inc.
2010-09-13 07:25:35 +00:00

void
sched_tick(int cnt)
{
}
2007-01-23 08:46:51 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The actual idle process.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
sched_idletd(void *dummy)
|
|
|
|
{
|
2010-09-11 07:08:22 +00:00
|
|
|
struct pcpuidlestat *stat;
|
2007-01-23 08:46:51 +00:00
|
|
|
|
2012-08-22 20:01:38 +00:00
|
|
|
THREAD_NO_SLEEPING();
|
2010-09-11 07:08:22 +00:00
|
|
|
stat = DPCPU_PTR(idlestat);
|
2007-01-23 08:46:51 +00:00
|
|
|
for (;;) {
|
|
|
|
mtx_assert(&Giant, MA_NOTOWNED);
|
|
|
|
|
2010-09-11 07:08:22 +00:00
|
|
|
while (sched_runnable() == 0) {
|
|
|
|
cpu_idle(stat->idlecalls + stat->oldidlecalls > 64);
|
|
|
|
stat->idlecalls++;
|
|
|
|
}
|
2007-01-23 08:46:51 +00:00
|
|
|
|
|
|
|
mtx_lock_spin(&sched_lock);
|
2008-04-17 04:20:10 +00:00
|
|
|
mi_switch(SW_VOL | SWT_IDLE, NULL);
|
2007-01-23 08:46:51 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers, utilizing a technique
similar to Solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00

/*
 * A CPU is entering for the first time or a thread is exiting.
 */
void
sched_throw(struct thread *td)
{
	/*
	 * Correct spinlock nesting.  The idle thread context that we are
	 * borrowing was created so that it would start out with a single
	 * spin lock (sched_lock) held in fork_trampoline().  Since we've
	 * explicitly acquired locks in this function, the nesting count
	 * is now 2 rather than 1.  Since we are nested, calling
	 * spinlock_exit() will simply adjust the counts without allowing
	 * spin lock using code to interrupt us.
	 */
	if (td == NULL) {
		mtx_lock_spin(&sched_lock);
		spinlock_exit();
		PCPU_SET(switchtime, cpu_ticks());
		PCPU_SET(switchticks, ticks);
	} else {
		lock_profile_release_lock(&sched_lock.lock_object);
		MPASS(td->td_lock == &sched_lock);
	}
	mtx_assert(&sched_lock, MA_OWNED);
	KASSERT(curthread->td_md.md_spinlock_count == 1, ("invalid count"));
	cpu_throw(td, choosethread());	/* doesn't return */
}

void
sched_fork_exit(struct thread *td)
{

	/*
	 * Finish setting up thread glue so that it begins execution in a
	 * non-nested critical section with sched_lock held but not recursed.
	 */
	td->td_oncpu = PCPU_GET(cpuid);
	sched_lock.mtx_lock = (uintptr_t)td;
	lock_profile_obtain_lock_success(&sched_lock.lock_object,
	    0, 0, __FILE__, __LINE__);
	THREAD_LOCK_ASSERT(td, MA_OWNED | MA_NOTRECURSED);
}

char *
sched_tdname(struct thread *td)
{
#ifdef KTR
	struct td_sched *ts;

	ts = td->td_sched;
	if (ts->ts_name[0] == '\0')
		snprintf(ts->ts_name, sizeof(ts->ts_name),
		    "%s tid %d", td->td_name, td->td_tid);
	return (ts->ts_name);
#else
	return (td->td_name);
#endif
}

#ifdef KTR
void
sched_clear_tdname(struct thread *td)
{
	struct td_sched *ts;

	ts = td->td_sched;
	ts->ts_name[0] = '\0';
}
#endif

void
sched_affinity(struct thread *td)
{
#ifdef SMP
	struct td_sched *ts;
	int cpu;

	THREAD_LOCK_ASSERT(td, MA_OWNED);

	/*
	 * Set the TSF_AFFINITY flag if there is at least one CPU this
	 * thread can't run on.
	 */
	ts = td->td_sched;
	ts->ts_flags &= ~TSF_AFFINITY;
	CPU_FOREACH(cpu) {
		if (!THREAD_CAN_SCHED(td, cpu)) {
			ts->ts_flags |= TSF_AFFINITY;
			break;
		}
	}

	/*
	 * If this thread can run on all CPUs, nothing else to do.
	 */
	if (!(ts->ts_flags & TSF_AFFINITY))
		return;

	/* Pinned threads and bound threads should be left alone. */
	if (td->td_pinned != 0 || td->td_flags & TDF_BOUND)
		return;

	switch (td->td_state) {
	case TDS_RUNQ:
		/*
		 * If we are on a per-CPU runqueue that is in the set,
		 * then nothing needs to be done.
		 */
		if (ts->ts_runq != &runq &&
		    THREAD_CAN_SCHED(td, ts->ts_runq - runq_pcpu))
			return;

		/* Put this thread on a valid per-CPU runqueue. */
		sched_rem(td);
		sched_add(td, SRQ_BORING);
		break;
	case TDS_RUNNING:
		/*
		 * See if our current CPU is in the set.  If not, force a
		 * context switch.
		 */
		if (THREAD_CAN_SCHED(td, td->td_oncpu))
			return;

		td->td_flags |= TDF_NEEDRESCHED;
		if (td != curthread)
			/* AST the CPU the thread is currently running on. */
			ipi_cpu(td->td_oncpu, IPI_AST);
		break;
	default:
		break;
	}
#endif
}